2-Source Dispersers for Sub-Polynomial Entropy and Ramsey Graphs Beating the Frankl-Wilson Construction

Abstract. The main result of this paper is an explicit disperser for two independent sources on n bits, each of entropy k = n^{o(1)}. Put differently, setting N = 2^n and K = 2^k, we construct explicit N × N Boolean matrices for which no K × K sub-matrix is monochromatic. Viewed as adjacency matrices of bipartite graphs, this gives an explicit construction of K-Ramsey bipartite graphs of size N. This greatly improves the previous bound of k = o(n) of Barak, Kindler, Shaltiel, Sudakov and Wigderson [4]. It also significantly improves the 25-year record of k = Õ(√n) on the special case of Ramsey graphs, due to Frankl and Wilson [9]. The construction uses (besides "classical" extractor ideas) almost all of the machinery developed in the last couple of years for extraction from independent sources, including:
- Bourgain's extractor for 2 independent sources of some entropy rate < 1/2 [5]
- Raz's extractor for 2 independent sources, one of which has any entropy rate > 1/2 [18]
- Rao's extractor for 2 independent block-sources of entropy n^{Ω(1)} [17]
- The "Challenge-Response" mechanism for detecting "entropy concentration" of [4].
The main novelty comes in a bootstrap procedure which allows the Challenge-Response mechanism of [4] to be used with sources of less and less entropy, using recursive calls to itself. Subtleties arise since the success of this mechanism depends on restricting the given sources, and so recursion constantly changes the original sources. These are resolved via a new construct, in between a disperser and an extractor, which behaves like an extractor on sufficiently large subsources of the given ones. This version is only an extended abstract; please see the full version, available on the authors' homepages, for more details.

INTRODUCTION
This paper deals with randomness extraction from weak
random sources. Here a weak random source is a distribution
which contains some entropy. The extraction task is to
design efficient algorithms (called extractors) to convert this
entropy into useful form, namely a sequence of independent
unbiased bits. Beyond the obvious motivations (potential
use of physical sources in pseudorandom generators and in
derandomization), extractors have found applications in a
variety of areas in theoretical computer science where randomness
does not seem an issue, such as in efficient constructions
of communication networks [24, 7], error correcting
codes [22, 12], data structures [14] and more.
Most work in this subject over the last 20 years has focused
on what is now called seeded extraction, in which the
extractor is given as input not only the (sample from the)
defective random source, but also a few truly random bits
(called the seed). A comprehensive survey of much of this
body of work is [21].
Another direction, which has been mostly dormant till
about two years ago, is (seedless, deterministic) extraction
from a few independent weak sources. This kind of extraction
is important in several applications where it is unrealistic
to have a short random seed or deterministically enumerate
over its possible values. However, it is easily shown to be
impossible when only one weak source is available. When at
least 2 independent sources are available extraction becomes
possible in principle. The 2-source case is the one we will
focus on in this work.
The rest of the introduction is structured as follows. We'll
start by describing our main result in the context of Ramsey
graphs. We then move to the context of extractors and dispersers, describing the relevant background and stating our
result in this language. Then we give an overview of the
construction of our dispersers, describing the main building
blocks we construct along the way. As the construction is
quite complex and its analysis quite subtle, in this proceedings
version we try to abstract away many of the technical
difficulties so that the main ideas, structure and tools used
are highlighted. For that reason we also often state definitions
and theorems somewhat informally.
1.1 Ramsey Graphs
Definition 1.1. A graph on N vertices is called a K-Ramsey
Graph if it contains no clique or independent set of
size K.
In 1947 Erdős published his paper inaugurating the Probabilistic Method with a few examples, including a proof that most graphs on N = 2^n vertices are 2n-Ramsey. The quest for constructing such graphs explicitly has existed ever since and led to some beautiful mathematics.
The best record to date was obtained in 1981 by Frankl and Wilson [9], who used intersection theorems for set systems to construct N-vertex graphs which are 2^{√(n log n)}-Ramsey. This bound was matched by Alon [1] using the Polynomial Method, by Grolmusz [11] using low rank matrices over rings, and also by Barak [2] boosting Abbot's method with almost k-wise independent random variables (a construction that was independently discovered by others as well). Remarkably, all of these different approaches got stuck at essentially the same bound. In recent work, Gopalan [10] showed that other than the last construction, all of these can be viewed as coming from low-degree symmetric representations of the OR function. He also shows that any such symmetric representation cannot be used to give a better Ramsey graph, which gives a good indication of why these constructions had similar performance. Indeed, as we will discuss in a later section, the √n entropy bound initially looked like a natural obstacle even for our techniques, though eventually we were able to surpass it.
The analogous question for bipartite graphs seemed much
harder.
Definition 1.2. A bipartite graph on two sets of N vertices
is a K-Ramsey Bipartite Graph if it has no K × K complete or empty bipartite subgraph.
While Erdős' result on the abundance of 2n-Ramsey graphs holds as is for bipartite graphs, until recently the best explicit construction of bipartite Ramsey graphs was 2^{n/2}-Ramsey, using the Hadamard matrix. This was improved last year, first to o(2^{n/2}) by Pudlak and Rödl [16] and then to 2^{o(n)} by Barak, Kindler, Shaltiel, Sudakov and Wigderson [4].
It is convenient to view such graphs as functions f : ({0, 1}^n)^2 → {0, 1}. This then gives exactly the definition of a disperser.
Definition 1.3. A function f : ({0, 1}^n)^2 → {0, 1} is called a 2-source disperser for entropy k if for any two sets X, Y ⊆ {0, 1}^n with |X| = |Y| = 2^k, we have that the image f(X, Y) is {0, 1}.
This allows for a more formal definition of explicitness: we simply demand that the function f is computable in polynomial time. Most of the constructions mentioned above are explicit in this sense. (The Abbot's product based Ramsey-graph construction of [3] and the bipartite Ramsey construction of [16] only satisfy a weaker notion of explicitness.)
Our main result (stated informally) significantly improves the bounds in both the bipartite and non-bipartite settings:
Theorem 1.4. For every N we construct polynomial time computable bipartite graphs which are 2^{n^{o(1)}}-Ramsey. A standard transformation of these graphs also yields polynomial time computable ordinary Ramsey Graphs with the same parameters.
1.2 Extractors and Dispersers from independent sources
Now we give a brief review of past relevant work (with the
goal of putting this paper in proper context) and describe
some of the tools from these past works that we will use.
We start with the basic definitions of k-sources by Nisan
and Zuckerman [15] and of extractors and dispersers for independent
sources by Santha and Vazirani [20].
Definition 1.5 ([15], see also [8]). The min-entropy of a distribution X is the maximum k such that for every element x in its support, Pr[X = x] ≤ 2^{-k}. If X is a distribution on strings with min-entropy at least k, we will call X a k-source. (It is no loss of generality to imagine that X is uniformly distributed over some (unknown) set of size 2^k.)
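To make the definition concrete, here is a small Python sketch (ours, not from the paper) that computes the min-entropy of an explicitly given distribution; the two example distributions are hypothetical.

```python
import math

def min_entropy(dist):
    """Min-entropy of a distribution given as {outcome: probability}:
    H_min(X) = -log2(max_x Pr[X = x]); X is a k-source iff H_min(X) >= k."""
    return -math.log2(max(dist.values()))

# A flat source over 4 of the 8 strings of length 3: min-entropy 2.
flat = {"000": 0.25, "011": 0.25, "101": 0.25, "110": 0.25}
print(min_entropy(flat))    # 2.0

# A skewed source: one heavy outcome caps the min-entropy at 1.
skewed = {"000": 0.5, "011": 0.2, "101": 0.2, "110": 0.1}
print(min_entropy(skewed))  # 1.0
```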
To simplify the presentation, in this version of the paper
we will assume that we are working with entropy as opposed
to min-entropy.
Definition 1.6 ([20]). A function f : ({0, 1}^n)^c → {0, 1}^m is a c-source (k, ε) extractor if for every family of c independent k-sources X_1, ..., X_c, the output f(X_1, ..., X_c) is ε-close to uniformly distributed on m bits (the error is usually measured in terms of ℓ1 distance or variation distance). f is a disperser for the same parameters if the output is simply required to have a support of relative size (1 - ε).
To simplify the presentation, in this version of the paper, we will assume that ε = 0 for all of our constructions.
In this language, Erdős' theorem says that most functions f : ({0, 1}^n)^2 → {0, 1} are dispersers for entropy 1 + log n (treating f as the characteristic function for the set of edges of the graph). The proof easily extends to show that indeed most such functions are in fact extractors. This naturally challenges us to find explicit functions f that are 2-source extractors.
Until one year ago, essentially the only known explicit construction was the Hadamard extractor Had defined by Had(x, y) = ⟨x, y⟩ (mod 2). It is an extractor for entropy k > n/2 as observed by Chor and Goldreich [8] and can be extended to give m = Ω(n) output bits as observed by Vazirani [23]. Over 20 years later, a recent breakthrough of Bourgain [5] broke this "1/2 barrier" and can handle 2 sources of entropy .4999n, again with linear output length m = Ω(n). This seemingly minor improvement will be crucial for our work!
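The inner-product function itself is a one-liner; the sketch below (ours) outputs the single bit ⟨x, y⟩ mod 2, while Vazirani's extension to Ω(n) output bits is not shown.

```python
def had(x, y):
    """Hadamard 2-source extractor: inner product of two bit vectors mod 2.
    For independent sources of min-entropy above n/2 this bit is close to unbiased."""
    assert len(x) == len(y)
    return sum(a * b for a, b in zip(x, y)) % 2

print(had([1, 0, 1, 1], [0, 1, 1, 0]))  # 1
```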
Theorem 1.7 ([5]). There is a polynomial time computable 2-source extractor f : ({0, 1}^n)^2 → {0, 1}^m for entropy .4999n and m = Ω(n).
No better bounds are known for 2-source extractors. Now we turn our attention to 2-source dispersers. It turned out that progress for building good 2-source dispersers came via progress on extractors for more than 2 sources, all happening at a fast pace in the last 2 years. The seminal paper of Bourgain, Katz and Tao [6] proved the so-called "sum-product theorem" in prime fields, a result in arithmetic combinatorics. This result has already found applications in diverse areas of mathematics, including analysis, number theory, group theory and ... extractor theory. Their work implicitly contained dispersers for c = O(log(n/k)) independent sources of entropy k (with output m = Ω(k)). The use of the "sum-product" theorem was then extended by Barak et al. [3] to give extractors with similar parameters. Note that for linear entropy k = Ω(n), the number of sources needed for extraction c is a constant!
Relaxing the independence assumptions via the idea of repeated condensing allowed the reduction of the number of independent sources to c = 3, for extraction from sources of any linear entropy k = Ω(n), by Barak et al. [4] and independently by Raz [18].
For 2 sources Barak et al. [4] were able to construct dispersers
for sources of entropy o(n). To do this, they first
showed that if the sources have extra structure (block-source
structure, defined below), even extraction is possible from 2
sources. The notion of block-sources, capturing "semi-independence" of parts of the source, was introduced by Chor
and Goldreich [8]. It has been fundamental in the development
of seeded extractors and as we shall see, is essential
for us as well.
Definition 1.8 ([8]). A distribution X = X_1, ..., X_c is a c-block-source of (block) entropy k if every block X_i has entropy k even conditioned on fixing the previous blocks X_1, ..., X_{i-1} to arbitrary constants.
This definition allowed Barak et al. [4] to show that their extractor for 4 independent sources actually performs as well with only 2 independent sources, as long as both are 2-block-sources.
Theorem 1.9 ([4]). There exists a polynomial time computable extractor f : ({0, 1}^n)^2 → {0, 1} for 2 independent 2-block-sources with entropy o(n).
There is no reason to assume that the given sources are block-sources, but it is natural to try and reduce to this case. This approach has been one of the most successful in the extractor literature. Namely, try to partition a source X into two blocks X = X_1, X_2 such that X_1, X_2 form a 2-block-source. Barak et al. introduced a new technique to do this reduction called the Challenge-Response mechanism, which is crucial for this paper. This method gives a way to "find" how entropy is distributed in a source X, guiding the choice of such a partition. This method succeeds only with small probability, dashing the hope for an extractor, but still yielding a disperser.
Theorem 1.10 ([4]). There exists a polynomial time computable 2-source disperser f : ({0, 1}^n)^2 → {0, 1} for entropy o(n).
Reducing the entropy requirement of the above 2-source disperser, which is what we achieve in this paper, again needed progress on achieving a similar reduction for extractors with more independent sources. A few months ago Rao [17] was able to significantly improve all the above results for c ≥ 3 sources. Interestingly, his techniques do not use arithmetic combinatorics, which seemed essential to all the papers above. He improves the results of Barak et al. [3] to give c = O((log n)/(log k))-source extractors for entropy k. Note that now the number c of sources needed for extraction is constant, even when the entropy is as low as n^δ for any constant δ > 0!
Again, when the input sources are block-sources with sufficiently many blocks, Rao proves that 2 independent sources suffice (though this result does rely on arithmetic combinatorics, in particular, on Bourgain's extractor).
Theorem 1.11 ([17]). There is a polynomial time computable extractor f : ({0, 1}^n)^2 → {0, 1}^m for 2 independent c-block-sources with block entropy k and m = Ω(k), as long as c = O((log n)/(log k)).
In this paper (see Theorem 2.7 below) we improve this
result to hold even when only one of the 2 sources is a c-block-source. The other source can be an arbitrary source
with sufficient entropy. This is a central building block in
our construction. This extractor, like Rao's above, critically
uses Bourgain's extractor mentioned above. In addition it
uses a theorem of Raz [18] allowing seeded extractors to have
"weak" seeds, namely instead of being completely random
they work as long as the seed has entropy rate > 1/2.
MAIN NOTIONS AND NEW RESULTS
The main result of this paper is a polynomial time computable disperser for 2 sources of entropy n^{o(1)}, significantly improving the results of Barak et al. [4] (o(n) entropy). It also improves on Frankl and Wilson [9], who only built Ramsey Graphs and only for entropy Õ(√n).
Theorem 2.1 (Main theorem, restated). There exists a polynomial time computable 2-source disperser D : ({0, 1}^n)^2 → {0, 1} for entropy n^{o(1)}.
The construction of this disperser will involve the construction
of an object which in some sense is stronger and
in another weaker than a disperser: a subsource somewhere
extractor. We first define a related object: a somewhere extractor
, which is a function producing several outputs, one of
which must be uniform. Again we will ignore many technical
issues such as error, min-entropy vs. entropy and more, in
definitions and results, which are deferred to the full version
of this paper.
Definition 2.2. A function f : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ is a 2-source somewhere extractor with ℓ outputs, for entropy k, if for every 2 independent k-sources X, Y there exists an i ∈ [ℓ] such that the i-th output f(X, Y)_i is a uniformly distributed string of m bits.
Here is a simple construction of such a somewhere extractor with ℓ as large as poly(n) (and the p in its name will stress the fact that indeed the number of outputs is that large). It will nevertheless be useful to us (though its description in the next sentence may be safely skipped). Define pSE(x, y)_i = V(E(x, i), E(y, i)), where E is a "strong" logarithmic seed extractor, and V is the Hadamard/Vazirani 2-source extractor. Using this construction, it is easy to see that:
Proposition 2.3. For every n, k there is a polynomial time computable somewhere extractor pSE : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ with ℓ = poly(n) outputs, for entropy k, and m = Ω(k).
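The composition behind pSE can be sketched schematically. In the toy code below (our illustration), E is a stand-in hash-based function playing the role of the strong logarithmic-seed extractor and V is reduced to a single inner-product bit, so only the shape pSE(x, y)_i = V(E(x, i), E(y, i)) is shown; none of the actual extractor guarantees are claimed for it.

```python
import hashlib

def E(x, i, m=16):
    """Stand-in for a strong seeded extractor with seed i (NOT a real extractor)."""
    h = hashlib.sha256(f"{i}:{x}".encode()).digest()
    return [(h[j // 8] >> (j % 8)) & 1 for j in range(m)]

def V(a, b):
    """Hadamard/Vazirani-style step, here reduced to one inner-product bit."""
    return sum(u * v for u, v in zip(a, b)) % 2

def pSE(x, y, num_seeds=8):
    """One output per seed i: pSE(x, y)_i = V(E(x, i), E(y, i))."""
    return [V(E(x, i), E(y, i)) for i in range(num_seeds)]

print(pSE("sample-from-X", "sample-from-Y"))
```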
Before we define a subsource somewhere extractor, we must first define a subsource.
Definition 2.4 (Subsources). Given random variables Z and Ẑ on {0, 1}^n we say that Ẑ is a deficiency d subsource of Z and write Ẑ ⊆ Z if there exists a set A ⊆ {0, 1}^n such that (Z | Z ∈ A) = Ẑ and Pr[Z ∈ A] ≥ 2^{-d}.
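For intuition, the following small sketch (ours) conditions an explicit distribution on a set A and reports the resulting deficiency log2(1/Pr[Z ∈ A]), matching Definition 2.4.

```python
import math

def subsource(dist, A):
    """Condition {outcome: prob} on the event A; return (Z | Z in A) and its deficiency."""
    p_A = sum(p for z, p in dist.items() if z in A)
    conditioned = {z: p / p_A for z, p in dist.items() if z in A}
    return conditioned, -math.log2(p_A)  # a deficiency d subsource for any d >= -log2(p_A)

Z = {"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25}
Z_hat, d = subsource(Z, A={"00", "01"})
print(Z_hat, d)  # {'00': 0.5, '01': 0.5} 1.0
```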
A subsource somewhere extractor guarantees the "somewhere extractor" property only on subsources X′, Y′ of the original input distributions X, Y (respectively). It will be
extremely important for us to make these subsources as large
as possible (i.e. we have to lose as little entropy as possible).
Controlling these entropy deficiencies is a major technical
complication we have to deal with. However we will be informal
with it here, mentioning it only qualitatively when
needed. We discuss this issue a little more in Section 6.
Definition 2.5. A function f : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ is a 2-source subsource somewhere extractor with ℓ outputs for entropy k, if for every 2 independent k-sources X, Y there exists a subsource X̂ of X, a subsource Ŷ of Y and an i ∈ [ℓ] such that the i-th output f(X̂, Ŷ)_i is a uniformly distributed string of m bits.
A central technical result for us is that with this "subsource" relaxation, we can have far fewer outputs: indeed, we'll replace the poly(n) outputs in our first construction above with n^{o(1)} outputs.
Theorem 2.6 (Subsource somewhere extractor). For every δ > 0 there is a polynomial time computable subsource somewhere extractor SSE : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ with ℓ = n^{o(1)} outputs, for entropy k = n^δ, with output m = k.
We will describe the ideas used for constructing this important object and analyzing it in the next section, where we will also indicate how it is used in the construction of the final disperser. Here we state a central building block, mentioned in the previous section (as an improvement of the work of Rao [17]). We construct an extractor for 2 independent sources, one of which is a block-source with a sufficient number of blocks.
Theorem 2.7 (Block Source Extractor). There is a polynomial time computable extractor B : ({0, 1}^n)^2 → {0, 1}^m for 2 independent sources, one of which is a c-block-source with block entropy k and the other a source of entropy k, with m = Ω(k), and c = O((log n)/(log k)).
A simple corollary of this block-source extractor B is the following weaker (though useful) somewhere block-source extractor SB. A source Z = Z_1, Z_2, ..., Z_t is a somewhere c-block-source of block entropy k if for some c indices i_1 < i_2 < ... < i_c the source Z_{i_1}, Z_{i_2}, ..., Z_{i_c} is a c-block-source. Collecting the outputs of B on every c-subset of blocks results in that somewhere extractor.
Corollary 2.8. There is a polynomial time computable somewhere extractor SB : ({0, 1}^n)^2 → ({0, 1}^m)^ℓ for 2 independent sources, one of which is a somewhere c-block-source with block entropy k and t blocks total and the other a source of entropy k, with m = Ω(k), c = O((log n)/(log k)), and ℓ = t^c.
In both the theorem and corollary above, the values of entropy k we will be interested in are k = n^{Ω(1)}. It follows that a block-source with a constant c = O(1) suffices.
THE CHALLENGE-RESPONSE MECHANISM
We now describe abstractly a mechanism which will be
used in the construction of the disperser as well as the subsource
somewhere extractor. Intuitively, this mechanism allows
us to identify parts of a source which contain large
amounts of entropy. One can hope that using such a mechanism
one can partition a given source into blocks in a way
which make it a block-source, or alternatively focus on a part
of the source which is unusually condensed with entropy two
cases which may simplify the extraction problem.
The reader may decide, now or in the middle of this
section, to skip ahead to the next section which describes
the construction of the subsource somewhere extractor SSE,
which extensively uses this mechanism. Then this section
may seem less abstract, as it will be clearer where this mechanism
is used.
This mechanism was introduced by Barak et al. [4], and
was essential in their 2-source disperser. Its use in this paper
is far more involved (in particular it calls itself recursively,
a fact which creates many subtleties). However, at a high
level, the basic idea behind the mechanism is the same:
Let Z be a source and Z′ a part of Z (Z projected on a subset of the coordinates). We know that Z has entropy k, and want to distinguish two possibilities: Z′ has no entropy (it is fixed) or it has at least k′ entropy. Z′ will get a pass or fail grade, hopefully corresponding to the cases of high or no entropy in Z′.
Anticipating the use of this mechanism, it is a good idea to think of Z as a "parent" of Z′, which wants to check if this "child" has sufficient entropy. Moreover, in the context of the initial 2 sources X, Y we will operate on, think of Z as a part of X, and thus that Y is independent of Z and Z′.
To execute this "test" we will compute two sets of strings
(all of length m, say): the Challenge C = C(Z
, Y ) and
the Response R = R(Z, Y ). Z
fails if C R and passes
otherwise.
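The test itself is just a containment check between two finite sets of m-bit strings; the sketch below (ours) spells it out.

```python
def passes_entropy_test(challenge, response):
    """Challenge-Response rule: Z' FAILS if every challenge string is responded
    (C is a subset of R) and PASSES otherwise."""
    return not set(challenge).issubset(set(response))

C = {"1011", "0010"}
R = {"0010", "1111", "0001"}
print(passes_entropy_test(C, R))  # True: "1011" goes unresponded, so Z' passes
```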
The key to the usefulness of this mechanism is the following
lemma, which states that what "should" happen, indeed
happens after some restriction of the 2 sources Z and Y .
We state it and then explain how the functions C and R are
defined to accommodate its proof.
Lemma 3.1. Assume Z, Y are sources of entropy k.
1. If Z′ has entropy k′ + O(m), then there are subsources Ẑ of Z and Ŷ of Y, such that
Pr[Ẑ′ passes] = Pr[C(Ẑ′, Ŷ) ⊄ R(Ẑ, Ŷ)] ≥ 1 - n^{O(1)} · 2^{-m}
2. If Z′ is fixed (namely, has zero entropy), then for some subsources Ẑ of Z and Ŷ of Y, we have
Pr[Z′ fails] = Pr[C(Ẑ′, Ŷ) ⊆ R(Ẑ, Ŷ)] = 1
Once we have such a mechanism, we will design our disperser
algorithm assuming that the challenge response mechanism
correctly identifies parts of the source with high or
low levels of entropy. Then in the analysis, we will ensure
that our algorithm succeeds in making the right decisions,
at least on subsources of the original input sources.
Now let us explain how to compute the sets C and R. We
will use some of the constructs above with parameters which
don't quite fit.
The response set R(Z, Y) = pSE(Z, Y) is chosen to be the output of the somewhere extractor of Proposition 2.3. The challenge set C(Z′, Y) = SSE(Z′, Y) is chosen to be the output of the subsource somewhere extractor of Theorem 2.6.
Why does it work? We explain each of the two claims
in the lemma in turn (and after each comment on the important
parameters and how they differ from Barak et al.
[4]).
1. Z′ has entropy. We need to show that Z′ passes the test with high probability. We will point to the output string in C(Ẑ′, Ŷ) which avoids R(Ẑ, Ŷ) with high probability as follows. In the analysis we will use the union bound on several events, one associated with each (poly(n) many) string in pSE(Ẑ, Ŷ). We note that by the definition of the response function, if we want to fix a particular element in the response set to a particular value, we can do this by fixing E(Z, i) and E(Y, i). This fixing keeps the restricted sources independent and loses only O(m) entropy. In the subsource of Z′ guaranteed to exist by Theorem 2.6 we can afford to lose this entropy in Z′. Thus we conclude that one of its outputs is uniform. The probability that this output will equal any fixed value is thus 2^{-m}, completing the argument. We note that we can handle the polynomial output size of pSE, since the uniform string has length m = n^{Ω(1)} (something which could not be done with the technology available to Barak et al. [4]).
2. Z′ has no entropy. We now need to guarantee that in the chosen subsources (which we choose) Ẑ, Ŷ, all strings in C = C(Ẑ′, Ŷ) are in R(Ẑ, Ŷ). First notice that as Z′ is fixed, C is only a function of Y. We set Ỹ to be the subsource of Y that fixes all strings in C = C(Y) to their most popular values (losing only ℓ · m entropy from Y). We take care of including these fixed strings in R(Z, Ỹ) one at a time, by restricting to subsources assuring that. Let α be any m-bit string we want to appear in R(Z, Ỹ). Recall that R(z, y)_i = V(E(z, i), E(y, i)). We pick a "good" seed i, and restrict Z, Ỹ to subsources with only O(m) less entropy by fixing E(Z, i) = a and E(Ỹ, i) = b to values (a, b) for which V(a, b) = α. This is repeated successively ℓ times, and results in the final subsources Ẑ, Ŷ on which Ẑ′ fails with probability 1. Note that we keep reducing the entropy of our sources ℓ times, which necessitates that this ℓ be tiny (here we could not tolerate poly(n), and indeed can guarantee n^{o(1)}, at least on a subsource); this is one aspect of how crucial the subsource somewhere extractor SSE is to the construction.
We note that initially it seemed like the Challenge-Response mechanism as used in [4] could not be used to handle entropy that is significantly less than √n (which is approximately the bound that many of the previous constructions got stuck at). The techniques of [4] involved partitioning the sources into t pieces of length n/t each, with the hope that one of those parts would have a significant amount of entropy, yet there'd be enough entropy left over in the rest of the source (so that the source can be partitioned into a block source).
However, it is not clear how to do this when the total entropy is less than √n. On the one hand, we will have to partition our sources into blocks of length significantly more than √n (or the adversary could distribute a negligible fraction of entropy in all blocks). On the other hand, if our blocks are so large, a single block could contain all the entropy. Thus it was not clear how to use the challenge response mechanism to find a block source.
THE SUBSOURCE SOMEWHERE EXTRACTOR SSE
We now explain some of the ideas behind the construction
of the subsource somewhere extractor SSE of Theorem 2.6.
Consider the source X. We are seeking to find in it a somewhere
c-block-source, so that we can use it (together with Y )
in the block-source extractor of Corollary 2.8. Like in previous
works in the extractor literature (e.g. [19, 13]) we use a
"win-win" analysis which shows that either X is already a
somewhere c-block-source, or it has a condensed part which
contains a lot of the entropy of the source. In this case we
proceed recursively on that part. Continuing this way we
eventually reach a source so condensed that it must be a
somewhere block source. Note that in [4], the challenge response mechanism was used to find a block source also, but there the entropy was so high that they could afford to use a tree of depth 1. They did not need to recurse or condense the sources.

[Figure 1: Analysis of the subsource somewhere extractor. The figure shows a source of n bits split into t blocks (n/t bits each), blocks labeled low (entropy below k'/t), medium (between k'/t and k'/c), or high (between k'/c and k', the whole source having entropy below k'), challenges marked responded or unresponded, SB outputs, and the two cases "Not somewhere block source" and "Somewhere Block Source!".]
Consider the tree of parts of the source X evolved by
such recursion. Each node in the tree corresponds to some
interval of bit locations of the source, with the root node
corresponding to the entire source. A node is a child of another
if its interval is a subinterval of the parent. It can be
shown that some node in the tree is "good"; it corresponds
to a somewhere c-source, but we don't know which node is
good. Since we only want a somewhere extractor, we can
apply to each node the somewhere block-source extractor of
Corollary 2.8; this will give us a random output in every "good" node of the tree. The usual idea is to output all these values (and in seeded extractors, merge them using the externally given random seed). However, we cannot afford to
do that here as there is no external seed and the number of
these outputs (the size of the tree) is far too large.
Our aim then will be to significantly prune this number
of candidates and in fact output only the candidates on one
path to a canonical "good" node. First we will give a very informal
description of how to do this (Figure 1). Before calling
SSE recursively on a subpart of a current part of X, we'll
use the "Challenge-Response" mechanism described above
to check if "it has entropy".
4
We will recurse only with the
first (in left-to-right order) part which passes the "entropy
test". Thus note that we will follow a single path on this
tree. The algorithm SSE will output only the sets of strings
produced by applying the somewhere c-block-extractor SB
on the parts visited along this path.
Now let us describe the algorithm for SSE. SSE will be
initially invoked as SSE(x, y), but will recursively call itself
with different inputs z which will always be substrings of x.
4
We note that we ignore the additional complication that
SSE
will actually use recursion also to compute the challenge
in the challenge-response mechanism.
Algorithm: SSE(z, y)
Let pSE(., .) be the somewhere extractor with a polynomial number of outputs of Proposition 2.3.
Let SB be the somewhere block source extractor of Corollary 2.8.
Global Parameters: t, the branching factor of the tree; k, the original entropy of the sources.
Output will be a set of strings.
1. If z is shorter than k, return the empty set, else continue.
2. Partition z into t equal parts z = z_1, z_2, ..., z_t.
3. Compute the response set R(z, y), which is the set of strings output by pSE(z, y).
4. For i ∈ [t], compute the challenge set C(z_i, y), which is the set of outputs of SSE(z_i, y).
5. Let h be the smallest index for which the challenge set C(z_h, y) is not contained in the response set (set h = t if no such index exists).
6. Output SB(z, y) concatenated with SSE(z_h, y).
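To make the control flow concrete, here is a compact Python rendering of the recursion (our paraphrase of the pseudocode above); pSE and SB are treated as black boxes passed in as functions, and all of the actual extractor content lives inside them.

```python
def SSE(z, y, pSE, SB, t, k):
    """Skeleton of the subsource somewhere extractor; returns a list of candidate strings.
    z and y are bit strings; pSE(z, y) and SB(z, y) return lists of strings."""
    if len(z) < k:                       # 1. too short: return the empty set
        return []
    size = len(z) // t
    parts = [z[i * size:(i + 1) * size] for i in range(t)]        # 2. split into t parts
    response = set(pSE(z, y))                                     # 3. response set
    h = t - 1                                                     # 5. default: last part
    for i, part in enumerate(parts):                              # 4./5. leftmost unresponded challenge
        if not set(SSE(part, y, pSE, SB, t, k)).issubset(response):
            h = i
            break
    return SB(z, y) + SSE(parts[h], y, pSE, SB, t, k)             # 6. SB output plus recursion on z_h
```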
Proving that indeed there are subsources on which SSE
will follow a path to a "good" (for these subsources) node,
is the heart of the analysis. It is especially complex due
to the fact that the recursive call to SSE on subparts of
the current part is used to generate the Challenges for the
Challenge-Response mechanism. Since SSE works only on
subsources, we have to guarantee that restriction to these
does not hamper the behavior of SSE in past and future calls
to it.
Let us turn to the highlights of the analysis, for the proof of Theorem 2.6. Let k′ be the entropy of the source Z at some place in this recursion. Either one of its blocks Z_i has entropy k′/c, in which case it is very condensed (since its size is n/t, for t ≫ c), or it must be that c of its blocks form a c-block source with block entropy k′/t (which is sufficient for the extractor B used by SB). In the 2nd case the fact that SB(z, y) is part of the output of our SSE guarantees that we are somewhere random. If the 2nd case doesn't hold, let Z_i be the leftmost condensed block. We want to ensure that (on appropriate subsources) SSE calls itself on that ith subpart. To do so, we fix all Z_j for j < i to constants z_j. We are now in the position described in the Challenge-Response mechanism section, that (in each of the first i parts) there is either no entropy or lots of entropy. We further restrict to subsources as explained there which make all first i - 1 blocks fail the "entropy test", and the fact that Z_i still has lots of entropy after these restrictions (which we need to prove) ensures that indeed SSE will be recursively applied to it.
We note that while the procedure SSE can be described recursively
, the formal analysis of fixing subsources is actually
done globally, to ensure that indeed all entropy requirements
are met along the various recursive calls.
Let us remark on the choice of the branching parameter t. On the one hand, we'd like to keep it small, as it dominates the number of outputs t^c of SB, and thus the total number of outputs (which is t^c · log_t n). For this purpose, any t = n^{o(1)} will do. On the other hand, t should be large enough so that condensing is faster than losing entropy. Here note that if Z is of length n, its child has length n/t, while the entropy shrinks only from k′ to k′/c. A simple calculation shows that if k^{(log t)/(log c)} > n^2 then a c-block-source must exist along such a path before the length shrinks to k. Note that for k = n^{Ω(1)} a (large enough) constant t suffices (resulting in only a logarithmic number of outputs of SSE). This analysis is depicted pictorially in Figure 1.
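As a numerical sanity check of this calculation (with illustrative numbers of our choosing, not the paper's), one can verify the condition and watch the length shrink faster than the entropy:

```python
import math

# Illustrative parameters: an n-bit source of entropy k = n^(1/2), block parameter c, branching t.
n, c, t = 10**6, 3, 100
k = int(n ** 0.5)

condition = k ** (math.log(t) / math.log(c)) > n ** 2   # k^{(log t)/(log c)} > n^2
depth = math.ceil(math.log(n / k, t))                   # levels until the length drops to about k
entropy_left = k / c ** depth                           # entropy shrinks by at most a factor c per level

print(condition, depth, entropy_left)                   # True 2 111.1...: entropy survives to the bottom
```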
THE FINAL DISPERSER D
Following is a rough description of our disperser D proving
Theorem 2.1. The high level structure of D will resemble the
structure of SSE - we will recursively split the source X and
look for entropy in the parts. However now we must output
a single value (rather than a set) which can take both values
0 and 1. This was problematic in SSE, even knowing where
the "good" part (containing a c-block-source) was! How can
we do so now?
We now have at our disposal a much more powerful tool for generating challenges (and thus detecting entropy), namely the subsource somewhere extractor SSE. Note that in constructing SSE we only had essentially the somewhere c-block-source extractor SB to (recursively) generate the challenges, but it depended on a structural property of the block it was applied on. Now SSE does not assume any structure on its input sources except sufficient entropy. (There is a catch: it only works on subsources of them! This will cause us a lot of headache; we will elaborate on it later.)
Let us now give a high level description of the disperser D. It too will be a recursive procedure. If, when processing some part Z of X, it "realizes" that a subpart Z_i of Z has entropy, but not all the entropy of Z (namely Z_i, Z is a 2-block-source), then we will halt and produce the output of D. Intuitively, thinking about the Challenge-Response mechanism described above, the analysis implies that we can either pass or fail Z_i (on appropriate subsources). But this means that the outcome of this "entropy test" is a 1-bit disperser!
To capitalize on this idea, we want to use SSE to identify such a block-source in the recursion tree. As before, we scan the blocks from left to right, and want to distinguish three possibilities.
low: Z_i has low entropy. In this case we proceed to i + 1.
medium: Z_i has "medium" entropy (Z_i, Z is a block-source). In which case we halt and produce an output (zero or one).
high: Z_i has essentially all entropy of Z. In this case we recurse on the condensed block Z_i.
As before, we use the Challenge-Response mechanism (with a twist). We will compute challenges C(Z_i, Y) and responses R(Z, Y), all strings of length m. The responses are computed exactly as before, using the somewhere extractor pSE. The Challenges are computed using our subsource somewhere extractor SSE.
We really have 4 possibilities to distinguish, since when we halt we also need to decide which output bit we give. We will do so by deriving three tests from the above challenges and responses: (C_H, R_H), (C_M, R_M), (C_L, R_L) for high, medium and low respectively, as follows. Let m ≥ m_H ≫ m_M ≫ m_L be appropriate integers: then in each of the tests above we restrict ourselves to prefixes of all strings of the appropriate lengths only. So every string in C_M will be a prefix of length m_M of some string in C_H. Similarly, every string in R_L is the length m_L prefix of some string in R_H. Now it is immediately clear that if C_M is contained in R_M, then C_L is contained in R_L. Thus these tests are monotone: if our sample fails the high test, it will definitely fail all tests.
Algorithm: D(z, y)
Let pSE(., .) be the somewhere extractor with a polynomial number of outputs of Proposition 2.3.
Let SSE(., .) be the subsource somewhere extractor of Theorem 2.6.
Global Parameters: t, the branching factor of the tree; k, the original entropy of the sources.
Local Parameters for the recursive level: m_L ≪ m_M ≪ m_H.
Output will be an element of {0, 1}.
1. If z is shorter than k, return 0.
2. Partition z into t equal parts z = z_1, z_2, ..., z_t.
3. Compute three response sets R_L, R_M, R_H using pSE(z, y). R_j will be the prefixes of length m_j of the strings in pSE(z, y).
4. For each i ∈ [t], compute three challenge sets C^i_L, C^i_M, C^i_H using SSE(z_i, y). C^i_j will be the prefixes of length m_j of the strings in SSE(z_i, y).
5. Let h be the smallest index i for which the challenge set C^i_L is not contained in the response set R_L; if there is no such index, output 0 and halt.
6. If C^h_H is contained in R_H and C^h_M is contained in R_M, output 0 and halt. If C^h_H is contained in R_H but C^h_M is not contained in R_M, output 1 and halt.
7. Output D(z_h, y).

[Figure 2: Analysis of the disperser. The figure shows the source X (n bits) split into t blocks, a block X_3 (n/t bits) recursed on and split again, then (X_3)_4 (n/t^2 bits); blocks are labeled low, med, or high and marked pass or fail for the tests, with a medium block producing Output 0 or Output 1.]
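The decision structure of D can likewise be written out as a short skeleton (our paraphrase, with pSE and SSE as black-box functions); note that the paper adjusts the prefix lengths m_L, m_M, m_H per recursion level, which this sketch does not.

```python
def D(z, y, pSE, SSE, t, k, m_L, m_M, m_H):
    """Disperser skeleton following steps 1-7 above; returns a single bit."""
    if len(z) < k:                                                        # 1.
        return 0
    size = len(z) // t
    parts = [z[i * size:(i + 1) * size] for i in range(t)]                # 2.
    R_L, R_M, R_H = ({s[:m] for s in pSE(z, y)} for m in (m_L, m_M, m_H)) # 3.
    for part in parts:                                                    # 4./5. scan blocks left to right
        C = SSE(part, y)
        C_L, C_M, C_H = ({s[:m] for s in C} for m in (m_L, m_M, m_H))
        if C_L.issubset(R_L):
            continue                                                      # low block: keep scanning
        if C_H.issubset(R_H):                                             # 6. medium block: halt with a bit
            return 0 if C_M.issubset(R_M) else 1
        return D(part, y, pSE, SSE, t, k, m_L, m_M, m_H)                  # 7. high block: recurse
    return 0                                                              # 5. no block passed the low test
```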
First note the obvious monotonicity of the tests. If Z_i fails one of the tests it will certainly fail for shorter strings. Thus there are only four outcomes to the three tests, written in the order (low, medium, high): (pass, pass, pass), (pass, pass, fail), (pass, fail, fail) and (fail, fail, fail). Conceptually, the algorithm is making the following decisions using the four outcomes:
1. (fail, fail, fail): Assume Z_i has low entropy and proceed to block i + 1.
2. (pass, fail, fail): Assume Z_i is medium, halt and output 0.
3. (pass, pass, fail): Assume Z_i is medium, halt and output 1.
4. (pass, pass, pass): Assume Z_i is high and recurse on Z_i.
The analysis of this idea (depicted in Figure 2) turns out to be more complex than it seems. There are two reasons for that. Now we briefly explain them and the way to overcome them in the construction and analysis.
The first reason is the fact mentioned above, that SSE, which generates the challenges, works only on subsources of the original sources. Restricting to these subsources at some level of the recursion (as required by the analysis of the test) causes entropy loss which affects both definitions (such as the entropy thresholds for decisions) and correctness
loss is achieved by calling SSE recursively with smaller
and smaller entropy requirements, which in turn limits the
entropy which will be lost by these restrictions. In order not
to lose all the entropy for this reason alone, we must work
with special parameters of SSE, essentially requiring that at
termination it has almost all the entropy it started with.
The second reason is the analysis of the test when we are in a medium block. In contrast with the above situation, we cannot consider the value of Z_i fixed when we need it to fail on the Medium and Low tests. We need to show that for these two tests (given a pass for High), they come up both (pass, fail) and (fail, fail) each with positive probability.
Since the length of Medium challenges and responses is m_M, the probability of failure is at least exp(-Ω(m_M)) (this follows relatively easily from the fact that the responses are somewhere random). If the Medium test fails so does the Low test, and thus (fail, fail) has a positive probability and our disperser D outputs 0 with positive probability.
To bound (pass, fail) we first observe (with a similar reasoning) that the low test fails with probability at least exp(-Ω(m_L)). But we want the medium test to pass at the same time. This probability is at least the probability that low fails minus the probability that medium fails. We already have a bound on the latter: it is at most poly(n) · exp(-m_M). Here comes our control of the different lengths into play: we can make m_L sufficiently smaller than m_M to yield this difference positive. We conclude that our disperser D outputs 1 with positive probability as well.
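To see the role of the two lengths numerically (with constants chosen by us purely for illustration), suppose the low test fails with probability at least 2^{-m_L} and the medium test fails with probability at most poly(n) * 2^{-m_M}; making m_L much smaller than m_M keeps the difference positive.

```python
n, m_L, m_M = 10**6, 20, 200

p_low_fail_lower = 2.0 ** -m_L           # lower bound on Pr[low test fails]
p_med_fail_upper = n ** 3 * 2.0 ** -m_M  # upper bound on Pr[medium test fails], with poly(n) = n^3

# Pr[(pass, fail)] >= Pr[low fails] - Pr[medium fails] > 0 once m_L << m_M.
print(p_low_fail_lower - p_med_fail_upper > 0)  # True
```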
Finally, we need to take care of termination: we have to ensure that the recurrence always arrives at a medium subpart, but it is easy to choose entropy thresholds for low, medium and high to ensure that this happens.
RESILIENCY AND DEFICIENCY
In this section we will briefly discuss an issue which arises in our construction that we glossed over in the previous sections. Recall our definition of subsources:
Definition 6.1 (Subsources). Given random variables Z and Ẑ on {0, 1}^n we say that Ẑ is a deficiency d subsource of Z and write Ẑ ⊆ Z if there exists a set A ⊆ {0, 1}^n such that (Z | Z ∈ A) = Ẑ and Pr[Z ∈ A] ≥ 2^{-d}.
Recall that we were able to guarantee that our algorithms
made the right decisions only on subsources of the original
source. For example, in the construction of our final disperser
, to ensure that our algorithms correctly identify the
right high block to recurse on, we were only able to guarantee
that there are subsources of the original sources in
which our algorithm makes the correct decision with high
probability. Then, later in the analysis we had to further
restrict the source to even smaller subsources. This leads to
complications, since the original event of picking the correct
high
block, which occurred with high probability, may become
an event which does not occur with high probability
in the current subsource. To handle these kinds of issues,
we will need to be very careful in measuring how small our
subsources are.
In the formal analysis we introduce the concept of resiliency
to deal with this. To give an idea of how this works,
here is the actual definition of somewhere subsource extractor
that we use in the formal analysis.
Definition 6.2 (subsource somewhere extractor). A function SSE : {0, 1}^n × {0, 1}^n → ({0, 1}^m)^{nrows} is a subsource somewhere extractor with nrows output rows, entropy threshold k, deficiency def, resiliency res and error ε if for every pair of (n, k)-sources X, Y there exist a deficiency def subsource X_good of X and a deficiency def subsource Y_good of Y such that for every deficiency res subsource X′ of X_good and deficiency res subsource Y′ of Y_good, the random variable SSE(X′, Y′) is ε-close to an nrows × m somewhere random distribution.
It turns out that our subsource somewhere extractor does satisfy this stronger definition. The advantage of this definition is that it says that once we restrict our attention to the good subsources X_good, Y_good, we have the freedom to further restrict these subsources to smaller subsources, as long as our final subsources do not lose more entropy than the resiliency permits.
This issue of managing the resiliency for the various objects
that we construct is one of the major technical challenges
that we had to overcome in our construction.
OPEN PROBLEMS
Better Independent Source Extractors: A bottleneck to improving our disperser is the block versus general source extractor of Theorem 2.7. A good next step would be to try to build an extractor for one block source (with only a constant number of blocks) and one other independent source which works for polylogarithmic entropy, or even an extractor for a constant number of sources that works for sub-polynomial entropy.
Simple Dispersers: While our disperser is polynomial time computable, it is not as explicit as one might have hoped. For instance, the Ramsey Graph construction of Frankl-Wilson is extremely simple: for a prime p, let the vertices of the graph be all subsets of [p^3] of size p^2 - 1. Two vertices S, T are adjacent if and only if |S ∩ T| ≡ -1 (mod p). It would be nice to find a good disperser that beats the Frankl-Wilson construction, yet is comparable in simplicity.
REFERENCES
[1] N. Alon. The Shannon capacity of a union. Combinatorica, 18, 1998.
[2] B. Barak. A simple explicit construction of an n^{õ(log n)}-Ramsey graph. Technical report, Arxiv, 2006. http://arxiv.org/abs/math.CO/0601651.
[3] B. Barak, R. Impagliazzo, and A. Wigderson. Extracting randomness using few independent sources. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pages 384-393, 2004.
[4] B. Barak, G. Kindler, R. Shaltiel, B. Sudakov, and A. Wigderson. Simulating independence: New constructions of condensers, Ramsey graphs, dispersers, and extractors. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pages 1-10, 2005.
[5] J. Bourgain. More on the sum-product phenomenon in prime fields and its applications. International Journal of Number Theory, 1:1-32, 2005.
[6] J. Bourgain, N. Katz, and T. Tao. A sum-product estimate in finite fields, and applications. Geometric and Functional Analysis, 14:27-57, 2004.
[7] M. Capalbo, O. Reingold, S. Vadhan, and A. Wigderson. Randomness conductors and constant-degree lossless expanders. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 659-668, 2002.
[8] B. Chor and O. Goldreich. Unbiased bits from sources of weak randomness and probabilistic communication complexity. SIAM Journal on Computing, 17(2):230-261, 1988.
[9] P. Frankl and R. M. Wilson. Intersection theorems with geometric consequences. Combinatorica, 1(4):357-368, 1981.
[10] P. Gopalan. Constructing Ramsey graphs from boolean function representations. In Proceedings of the 21st Annual IEEE Conference on Computational Complexity, 2006.
[11] V. Grolmusz. Low rank co-diagonal matrices and Ramsey graphs. Electr. J. Comb., 7, 2000.
[12] V. Guruswami. Better extractors for better codes? Electronic Colloquium on Computational Complexity (ECCC), (080), 2003.
[13] C. J. Lu, O. Reingold, S. Vadhan, and A. Wigderson. Extractors: Optimal up to constant factors. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing, pages 602-611, 2003.
[14] P. Miltersen, N. Nisan, S. Safra, and A. Wigderson. On data structures and asymmetric communication complexity. Journal of Computer and System Sciences, 57:37-49, 1998.
[15] N. Nisan and D. Zuckerman. More deterministic simulation in logspace. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 235-244, 1993.
[16] P. Pudlak and V. Rödl. Pseudorandom sets and explicit constructions of Ramsey graphs. Submitted for publication, 2004.
[17] A. Rao. Extractors for a constant number of polynomially small min-entropy independent sources. In Proceedings of the 38th Annual ACM Symposium on Theory of Computing, 2006.
[18] R. Raz. Extractors with weak random seeds. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing, pages 11-20, 2005.
[19] O. Reingold, R. Shaltiel, and A. Wigderson. Extracting randomness via repeated condensing. In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science, pages 22-31, 2000.
[20] M. Santha and U. V. Vazirani. Generating quasi-random sequences from semi-random sources. Journal of Computer and System Sciences, 33:75-87, 1986.
[21] R. Shaltiel. Recent developments in explicit constructions of extractors. Bulletin of the European Association for Theoretical Computer Science, 77:67-95, 2002.
[22] A. Ta-Shma and D. Zuckerman. Extractor codes. IEEE Transactions on Information Theory, 50, 2004.
[23] U. Vazirani. Towards a strong communication complexity theory or generating quasi-random sequences from two communicating slightly-random sources (extended abstract). In Proceedings of the 17th Annual ACM Symposium on Theory of Computing, pages 366-378, 1985.
[24] A. Wigderson and D. Zuckerman. Expanders that beat the eigenvalue bound: Explicit construction and applications. Combinatorica, 19(1):125-138, 1999.
A Frequency-based and a Poisson-based Definition of the Probability of Being Informative

Abstract. This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf). We show that an intuitive idf-based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.

INTRODUCTION AND BACKGROUND
The inverse document frequency (idf) is one of the most successful parameters for a relevance-based ranking of retrieved objects. With N being the total number of documents, and n(t) being the number of documents in which term t occurs, the idf is defined as follows:
idf(t) := -log(n(t)/N), 0 <= idf(t) < ∞
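A direct implementation of this definition (ours, with a tiny hypothetical collection) reads:

```python
import math

def idf(term, docs):
    """idf(t) = -log(n(t)/N): N documents, n(t) of them containing the term."""
    N = len(docs)
    n_t = sum(1 for d in docs if term in d)
    return -math.log(n_t / N)

docs = [{"information", "retrieval"}, {"information", "theory"}, {"poisson", "model"}]
print(idf("information", docs))  # -log(2/3), about 0.405
print(idf("poisson", docs))      # -log(1/3), about 1.099
```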
Ranking based on the sum of the idf -values of the query
terms that occur in the retrieved documents works well; this
has been shown in numerous applications. Also, it is well
known that the combination of a document-specific term
weight and idf works better than idf alone. This approach
is known as tf-idf , where tf(t, d) (0 <= tf(t, d) <= 1) is
the so-called term frequency of term t in document d. The
idf reflects the discriminating power (informativeness) of a
term, whereas the tf reflects the occurrence of a term.
The idf alone works better than the tf alone does. An explanation
might be the problem of tf with terms that occur
in many documents; let us refer to those terms as "noisy" terms. We use the notion of "noisy" terms rather than "frequent" terms since "frequent" leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also referred to as within-document frequency) of a term in a document. We associate "noise" with the document frequency of a term in a collection, and we associate "occurrence" with the within-document frequency of a term. The tf of a noisy term might
be high in a document, but noisy terms are not good candidates
for representing a document. Therefore, the removal
of noisy terms (known as "stopword removal") is essential
when applying tf . In a tf-idf approach, the removal of stopwords
is conceptually obsolete, if stopwords are just words
with a low idf .
From a probabilistic point of view, tf is a value with a
frequency-based probabilistic interpretation whereas idf has
an "informative" rather than a probabilistic interpretation.
The missing probabilistic interpretation of idf is a problem
in probabilistic retrieval models where we combine uncertain
knowledge of different dimensions (e.g.: informativeness of
terms, structure of documents, quality of documents, age
of documents, etc.) such that a good estimate of the probability
of relevance is achieved. An intuitive solution is a
normalisation of idf such that we obtain values in the interval
[0; 1]. For example, consider a normalisation based on
the maximal idf -value. Let T be the set of terms occurring
in a collection.
P_freq(t is informative) := idf(t) / maxidf
maxidf := max({idf(t) | t ∈ T}), maxidf <= -log(1/N)
minidf := min({idf(t) | t ∈ T}), minidf >= 0
minidf / maxidf <= P_freq(t is informative) <= 1.0
This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret P_freq, the normalised idf, as the probability that the term is informative?
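The normalisation is equally direct; the sketch below (ours) computes P_freq for every term of a small hypothetical collection in which one term occurs in all documents, so the values indeed span [0; 1].

```python
import math

def p_freq(docs):
    """P_freq(t is informative) = idf(t)/maxidf over the terms T of the collection."""
    N = len(docs)
    terms = set().union(*docs)
    idf = {t: -math.log(sum(1 for d in docs if t in d) / N) for t in terms}
    maxidf = max(idf.values())
    return {t: idf[t] / maxidf for t in terms}

docs = [{"a", "b"}, {"a", "c"}, {"a", "d"}]
print(p_freq(docs))  # "a" occurs everywhere: P_freq 0.0; "b", "c", "d": P_freq 1.0
```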
When investigating the probabilistic interpretation of the normalised idf, we made several observations related to disjointness and independence of document events. These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability n(t)/N used in the classic idf-definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1 - e^{-1} ≈ 1 - 0.37 as the upper bound of the noise probability of a term. The value e^{-1} is related to the logarithm and we investigate in section 3.3 the link to information theory. In section 4, we link the results of the previous sections to probability theory. We show the steps from possible worlds to binomial distribution and Poisson distribution. In section 5, we emphasise that the theoretical framework of this paper is applicable for both idf and tf. Finally, in section 6, we base the definition of the probability of being informative on the results of the previous sections and compare frequency-based and Poisson-based definitions.
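The contrast between the two noise probabilities, and the 1 - e^{-1} bound just mentioned, can be previewed with a few lines of arithmetic; the formula used below for the independent case is our anticipation of section 3.2 (with containment parameter λ = 1), not a quotation of it.

```python
def noise_disjoint(n_t, N):
    """Frequency-based noise for disjoint documents: n(t)/N."""
    return n_t / N

def noise_independent(n_t, N, lam=1.0):
    """Noise for independent documents with containment lam/N: 1 - (1 - lam/N)^n(t),
    which tends to 1 - e^(-lam * n(t)/N) for large N."""
    return 1.0 - (1.0 - lam / N) ** n_t

N = 1000
for n_t in (1, 100, 1000):
    print(n_t, noise_disjoint(n_t, N), round(noise_independent(n_t, N), 4))
# A term occurring in every document has independent-documents noise about 1 - 1/e = 0.632,
# while its frequency-based noise is 1.0.
```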
BACKGROUND
The relationship between frequencies, probabilities and
information theory (entropy) has been the focus of many
researchers. In this background section, we focus on work
that investigates the application of the Poisson distribution
in IR since a main part of the work presented in this paper
addresses the underlying assumptions of Poisson.
[4] proposes a 2-Poisson model that takes into account
the different nature of relevant and non-relevant documents,
rare terms (content words) and frequent terms (noisy terms,
function words, stopwords). [9] shows experimentally that
most of the terms (words) in a collection are distributed
according to a low dimension n-Poisson model. [10] uses a
2-Poisson model for including term frequency-based probabilities
in the probabilistic retrieval model. The non-linear
scaling of the Poisson function showed significant improvement
compared to a linear frequency-based probability. The
Poisson model was here applied to the term frequency of a
term in a document. We will generalise the discussion by
pointing out that document frequency and term frequency
are dual parameters in the collection space and the document
space, respectively. Our discussion of the Poisson distribution
focuses on the document frequency in a collection
rather than on the term frequency in a document.
[7] and [6] address the deviation of idf and Poisson, and
apply Poisson mixtures to achieve better Poisson-based estimates
. The results proved again experimentally that a one-dimensional
Poisson does not work for rare terms, therefore
Poisson mixtures and additional parameters are proposed.
[3], section 3.3, illustrates and summarises comprehensively the relationships between frequencies, probabilities
and Poisson. Different definitions of idf are put into context
and a notion of "noise" is defined, where noise is viewed
as the complement of idf . We use in our paper a different
notion of noise: we consider a frequency-based noise that
corresponds to the document frequency, and we consider a
term noise that is based on the independence of document
events.
[11], [12], [8] and [1] link frequencies and probability estimation
to information theory. [12] establishes a framework
in which information retrieval models are formalised based
on probabilistic inference. A key component is the use of a
space of disjoint events, where the framework mainly uses
terms as disjoint events. The probability of being informative
defined in our paper can be viewed as the probability
of the disjoint terms in the term space of [12].
[8] address entropy and bibliometric distributions. Entropy is maximal if all events are equiprobable and the frequency-based Lotka law (N/i^λ is the number of scientists that have written i publications, where N and λ are distribution parameters), Zipf and the Pareto distribution are related. The Pareto distribution is the continuous case of the Lotka law, and Lotka and Zipf show equivalences. The Pareto distribution is used by [2] for term frequency normalisation.
The Pareto distribution compares to the Poisson distribution
in the sense that Pareto is "fat-tailed", i. e. Pareto assigns
larger probabilities to large numbers of events than
Poisson distributions do.
This makes Pareto interesting
since Poisson is felt to be too radical on frequent events.
We restrict ourselves in this paper to the discussion of Poisson; however, our results show that a smoother distribution than Poisson indeed promises to be a good candidate for improving the estimation of probabilities in information retrieval.
[1] establishes a theoretical link between tf-idf and information
theory and the theoretical research on the meaning
of tf-idf "clarifies the statistical model on which the different
measures are commonly based". This motivation matches
the motivation of our paper: We investigate theoretically
the assumptions of classical idf and Poisson for a better
understanding of parameter estimation and combination.
FROM DISJOINT TO INDEPENDENT
We define and discuss in this section three probabilities: the frequency-based noise probability (definition 1), the total noise probability for disjoint documents (definition 2), and the noise probability for independent documents (definition 3).
3.1 Binary occurrence, constant containment and disjointness of documents
We show in this section that the frequency-based noise probability n(t)/N in the idf definition can be explained as a total probability with binary term occurrence, constant document containment and disjointness of document containments.
We refer to a probability function as binary if for all events the probability is either 1.0 or 0.0. The occurrence probability P(t|d) is binary if P(t|d) is equal to 1.0 if t ∈ d, and P(t|d) is equal to 0.0 otherwise.

P(t|d) is binary :⇔ P(t|d) = 1.0 ∨ P(t|d) = 0.0
We refer to a probability function as constant if for all events the probability is equal. The document containment probability reflects the chance that a document occurs in a collection. This containment probability is constant if we have no information about the document containment or we ignore that documents differ in containment. Containment could be derived, for example, from the size, quality, age, links, etc. of a document. For a constant containment in a collection with N documents, 1/N is often assumed as the containment probability. We generalise this definition and introduce the constant λ, where 0 ≤ λ ≤ N. The containment of a document d depends on the collection c; this is reflected by the notation P(d|c) used for the containment of a document.

P(d|c) is constant :⇔ ∀d : P(d|c) = λ/N
For disjoint documents that cover the whole event space, we set λ = 1 and obtain Σ_d P(d|c) = 1.0. Next, we define
the frequency-based noise probability and the total noise
probability for disjoint documents. We introduce the event
notation t is noisy and t occurs for making the difference
between the noise probability P (t is noisy|c) in a collection
and the occurrence probability P (t occurs|d) in a document
more explicit, thereby keeping in mind that the noise probability
corresponds to the occurrence probability of a term
in a collection.
Definition 1. The frequency-based term noise probability:

P_freq(t is noisy|c) := n(t)/N

Definition 2. The total term noise probability for disjoint documents:

P_dis(t is noisy|c) := Σ_d P(t occurs|d) · P(d|c)
Now, we can formulate a theorem that makes explicit the assumptions that explain the classical idf.
Theorem 1. IDF assumptions: If the occurrence probability
P (t|d) of term t over documents d is binary, and
the containment probability P (d|c) of documents d is constant
, and document containments are disjoint events, then
the noise probability for disjoint documents is equal to the
frequency-based noise probability.
P_dis(t is noisy|c) = P_freq(t is noisy|c)
Proof. The assumptions are:

∀d : (P(t occurs|d) = 1 ∨ P(t occurs|d) = 0)

P(d|c) = 1/N

Σ_d P(d|c) = 1.0

We obtain:

P_dis(t is noisy|c) = Σ_{d | t ∈ d} 1/N = n(t)/N = P_freq(t is noisy|c)
The above result is not a surprise but it is a mathematical
formulation of assumptions that can be used to explain
the classical idf . The assumptions make explicit that the
different types of term occurrence in documents (frequency
of a term, importance of a term, position of a term, document
part where the term occurs, etc.) and the different
types of document containment (size, quality, age, etc.) are
ignored, and document containments are considered as disjoint
events.
From the assumptions, we can conclude that idf (frequency-based
noise, respectively) is a relatively simple but strict
estimate.
Still, idf works well.
This could be explained
by a leverage effect that justifies the binary occurrence and
constant containment: The term occurrence for small documents
tends to be larger than for large documents, whereas
the containment for small documents tends to be smaller
than for large documents.
From that point of view, idf means that P(t ∧ d|c) is constant for all d in which t occurs, and P(t ∧ d|c) is zero otherwise. The occurrence and containment can be term specific. For example, set P(t ∧ d|c) = 1/N_D(c) if t occurs in d, where N_D(c) is the number of documents in collection c (we used before just N). We choose a document-dependent occurrence P(t|d) := 1/N_T(d), i. e. the occurrence probability is equal to the inverse of N_T(d), which is the total number of terms in document d. Next, we choose the containment P(d|c) := N_T(d)/N_T(c) · N_T(c)/N_D(c), where N_T(d)/N_T(c) is a document length normalisation (number of terms in document d divided by the number of terms in collection c), and N_T(c)/N_D(c) is a constant factor of the collection (number of terms in collection c divided by the number of documents in collection c). We obtain P(t ∧ d|c) = 1/N_D(c).
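Spelling out the product of the chosen occurrence and containment probabilities makes the cancellation explicit (this merely restates the derivation above):

P(t ∧ d|c) = P(t|d) · P(d|c) = 1/N_T(d) · N_T(d)/N_T(c) · N_T(c)/N_D(c) = 1/N_D(c)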
In a tf-idf -retrieval function, the tf -component reflects
the occurrence probability of a term in a document. This is
a further explanation why we can estimate the idf with a
simple P (t|d), since the combined tf-idf contains the occurrence
probability. The containment probability corresponds
to a document normalisation (document length normalisation
, pivoted document length) and is normally attached to
the tf -component or the tf-idf -product.
The disjointness assumption is typical for frequency-based
probabilities. From a probability theory point of view, we
can consider documents as disjoint events, in order to achieve
a sound theoretical model for explaining the classical idf .
But does disjointness reflect the real world where the containment
of a document appears to be independent of the
containment of another document? In the next section, we
replace the disjointness assumption by the independence assumption
.
3.2 The upper bound of the noise probability for independent documents
For independent documents, we compute the probability of a disjunction as usual, namely as the complement of the probability of the conjunction of the negated events:

P(d_1 ∨ ... ∨ d_N) = 1 - P(¬d_1 ∧ ... ∧ ¬d_N) = 1 - Π_d (1 - P(d))
The noise probability can be considered as the conjunction of the term occurrence and the document containment:

P(t is noisy|c) := P(t occurs ∧ (d_1 ∨ ... ∨ d_N) | c)
For disjoint documents, this view of the noise probability
led to definition 2. For independent documents, we use now
the conjunction of negated events.
Definition 3. The term noise probability for independent documents:

P_in(t is noisy|c) := 1 - Π_d (1 - P(t occurs|d) · P(d|c))

With binary occurrence and a constant containment P(d|c) := λ/N, we obtain the term noise of a term t that occurs in n(t) documents:

P_in(t is noisy|c) = 1 - (1 - λ/N)^n(t)
For binary occurrence and disjoint documents, the containment probability was 1/N. Now, with independent documents, we can use λ as a collection parameter that controls the average containment probability. We show through the next theorem that the upper bound of the noise probability depends on λ.
Theorem 2. The upper bound of being noisy: If the occurrence P(t|d) is binary, and the containment P(d|c) is constant, and document containments are independent events, then 1 - e^{-λ} is the upper bound of the noise probability:

∀t : P_in(t is noisy|c) < 1 - e^{-λ}
Proof. The upper bound of the independent noise probability follows from the limit lim_{N→∞} (1 + x/N)^N = e^x (see any comprehensive math book, for example [5], for the convergence equation of the Euler function). With x = -λ, we obtain:

lim_{N→∞} (1 - λ/N)^N = e^{-λ}
For the term noise, we have:

P_in(t is noisy|c) = 1 - (1 - λ/N)^n(t)

P_in(t is noisy|c) is strictly monotonic: the noise of a term t_n is less than the noise of a term t_{n+1}, where t_n occurs in n documents and t_{n+1} occurs in n+1 documents. Therefore, a term with n = N has the largest noise probability. For a collection with infinitely many documents, the upper bound of the noise probability for terms t_N that occur in all documents becomes:
lim_{N→∞} P_in(t_N is noisy) = lim_{N→∞} (1 - (1 - λ/N)^N) = 1 - e^{-λ}
By applying an independence rather than a disjointness assumption, we obtain the probability e^{-1} that a term is not noisy even if the term does occur in all documents. In the disjoint case, the noise probability is one for a term that occurs in all documents.
If we view P(d|c) := λ/N as the average containment, then λ is large for a term that occurs mostly in large documents, and λ is small for a term that occurs mostly in small documents. Thus, the noise of a term t is large if t occurs in n(t) large documents and the noise is smaller if t occurs in small documents. Alternatively, we can assume a constant containment and a term-dependent occurrence. If we assume P(d|c) := 1, then P(t|d) := λ/N can be interpreted as the average probability that t represents a document. The common assumption is that the average containment or occurrence probability is proportional to n(t). However, there is additional potential here: the statistical laws (see [3] on Luhn and Zipf) indicate that the average probability could follow a normal distribution, i. e. small probabilities for small n(t) and large n(t), and larger probabilities for medium n(t).

For the monotonic case we investigate here, the noise of a term with n(t) = 1 is equal to 1 - (1 - λ/N) = λ/N, and the noise of a term with n(t) = N is close to 1 - e^{-λ}. In the next section, we relate the value e^{-λ} to information theory.
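As a quick numerical illustration of the definitions above, the following Python sketch compares the frequency-based noise, the independence-based noise and the upper bound 1 - e^{-λ}; the collection size N, the parameter λ and the document frequencies are invented values.

import math

N = 10_000        # assumed collection size
lam = 1.0         # assumed containment parameter, P(d|c) = lam/N

def p_freq_noisy(n_t):
    # frequency-based noise: n(t)/N
    return n_t / N

def p_in_noisy(n_t):
    # independence-based noise: 1 - (1 - lam/N)^n(t)
    return 1.0 - (1.0 - lam / N) ** n_t

upper_bound = 1.0 - math.exp(-lam)   # bound of theorem 2

for n_t in (1, 10, 100, 1_000, 10_000):
    print(n_t, round(p_freq_noisy(n_t), 4),
          round(p_in_noisy(n_t), 4), round(upper_bound, 4))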
3.3 The probability of a maximal informative signal
The probability e^{-1} is special in the sense that a signal with that probability is a signal with maximal information, as derived from the entropy definition. Consider the definition of the entropy contribution H(t) of a signal t:

H(t) := P(t) · (- ln P(t))

We form the first derivative for computing the optimum:

∂H(t)/∂P(t) = - ln P(t) + (-1/P(t)) · P(t) = -(1 + ln P(t))

For obtaining optima, we use:

0 = -(1 + ln P(t))

The entropy contribution H(t) is maximal for P(t) = e^{-1}.
This result does not depend on the base of the logarithm, as we see next:

∂H(t)/∂P(t) = - log_b P(t) + (-1/(P(t) · ln b)) · P(t) = -(1/ln b + log_b P(t)) = -(1 + ln P(t))/ln b
We summarise this result in the following theorem:

Theorem 3. The probability of a maximal informative signal: The probability P_max = e^{-1} ≈ 0.37 is the probability of a maximal informative signal. The entropy of a maximal informative signal is H_max = e^{-1}.

Proof. The probability and entropy follow from the derivation above.
The complement of the maximal noise probability is e^{-λ}, and we are looking now for a generalisation of the entropy definition such that e^{-λ} is the probability of a maximal informative signal. We can generalise the entropy definition by computing the integral of λ + ln P(t), i. e. this derivative is zero for e^{-λ}. We obtain a generalised entropy:

-∫ (λ + ln P(t)) d(P(t)) = P(t) · (1 - λ - ln P(t))

The generalised entropy corresponds for λ = 1 to the classical entropy. By moving from disjoint to independent documents, we have established a link between the complement of the noise probability of a term that occurs in all documents and information theory. Next, we link independent documents to probability theory.
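A small numeric check of the two optima just derived (pure illustration; the grid resolution is an arbitrary choice): the classical entropy contribution -P ln P peaks at P = e^{-1}, and the generalised entropy P(1 - λ - ln P) peaks at P = e^{-λ}.

import math

def argmax(f, lo=1e-6, hi=1.0, steps=100_000):
    # crude grid search for the maximising probability
    best_p, best_v = lo, f(lo)
    for i in range(1, steps + 1):
        p = lo + (hi - lo) * i / steps
        v = f(p)
        if v > best_v:
            best_p, best_v = p, v
    return best_p

h = lambda p: -p * math.log(p)              # classical entropy contribution
print(argmax(h), math.exp(-1))              # both approx. 0.3679

lam = 2.0                                   # assumed lambda
g = lambda p: p * (1 - lam - math.log(p))   # generalised entropy
print(argmax(g), math.exp(-lam))            # both approx. 0.1353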
THE LINK TO PROBABILITY THEORY
We review for independent documents three concepts of
probability theory: possible worlds, binomial distribution
and Poisson distribution.
4.1 Possible Worlds
Each conjunction of document events (for each document,
we consider two document events: the document can be
true or false) is associated with a so-called possible world.
For example, consider the eight possible worlds for three
documents (N = 3).
world w | conjunction
w_7 | d_1 ∧ d_2 ∧ d_3
w_6 | d_1 ∧ d_2 ∧ ¬d_3
w_5 | d_1 ∧ ¬d_2 ∧ d_3
w_4 | d_1 ∧ ¬d_2 ∧ ¬d_3
w_3 | ¬d_1 ∧ d_2 ∧ d_3
w_2 | ¬d_1 ∧ d_2 ∧ ¬d_3
w_1 | ¬d_1 ∧ ¬d_2 ∧ d_3
w_0 | ¬d_1 ∧ ¬d_2 ∧ ¬d_3

With each world w, we associate a probability μ(w), which is equal to the product of the single probabilities of the document events.

world w | probability μ(w)
w_7 | (λ/N)^3 · (1 - λ/N)^0
w_6 | (λ/N)^2 · (1 - λ/N)^1
w_5 | (λ/N)^2 · (1 - λ/N)^1
w_4 | (λ/N)^1 · (1 - λ/N)^2
w_3 | (λ/N)^2 · (1 - λ/N)^1
w_2 | (λ/N)^1 · (1 - λ/N)^2
w_1 | (λ/N)^1 · (1 - λ/N)^2
w_0 | (λ/N)^0 · (1 - λ/N)^3
The sum over the possible worlds in which k documents are
true and N -k documents are false is equal to the probability
function of the binomial distribution, since the binomial
coefficient yields the number of possible worlds in which k
documents are true.
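The correspondence can be checked by brute force; a minimal Python sketch (N = 3 and λ = 1 are arbitrary illustration values) that enumerates all 2^N worlds and sums μ(w) per number of true documents:

from itertools import product
from math import comb

N, lam = 3, 1.0
p = lam / N                        # single containment probability

totals = [0.0] * (N + 1)
for world in product([True, False], repeat=N):
    mu = 1.0
    for d_is_true in world:        # product of the single document event probabilities
        mu *= p if d_is_true else (1.0 - p)
    totals[sum(world)] += mu

for k in range(N + 1):             # equals binom(N, k, p) for every k
    print(k, round(totals[k], 6), round(comb(N, k) * p**k * (1 - p)**(N - k), 6))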
4.2 Binomial distribution
The binomial probability function yields the probability that k of N events are true, where each event is true with the single event probability p:

P(k) := binom(N, k, p) := (N choose k) · p^k · (1 - p)^{N-k}
The single event probability is usually defined as p := λ/N, i. e. p is inversely proportional to N, the total number of events. With this definition of p, we obtain for an infinite number of documents the following limit for the product of the binomial coefficient and p^k:

lim_{N→∞} (N choose k) · p^k = lim_{N→∞} [N · (N-1) · ... · (N-k+1) / k!] · (λ/N)^k = λ^k / k!
The limit is close to the actual value for k << N. For large
k, the actual value is smaller than the limit.
The limit of (1 - p)^{N-k} follows from the limit lim_{N→∞} (1 + x/N)^N = e^x:

lim_{N→∞} (1 - p)^{N-k} = lim_{N→∞} (1 - λ/N)^{N-k} = e^{-λ}

Again, the limit is close to the actual value for k << N. For large k, the actual value is larger than the limit.
4.3 Poisson distribution
For an infinite number of events, the Poisson probability function is the limit of the binomial probability function:

lim_{N→∞} binom(N, k, p) = (λ^k / k!) · e^{-λ}

P(k) = poisson(k, λ) := (λ^k / k!) · e^{-λ}

The probability poisson(0, 1) is equal to e^{-1}, which is the probability of a maximal informative signal. This shows the relationship of the Poisson distribution and information theory.
After seeing the convergence of the binomial distribution,
we can choose the Poisson distribution as an approximation
of the independent term noise probability. First, we define
the Poisson noise probability:
Definition 4. The Poisson term noise probability:

P_poi(t is noisy|c) := e^{-λ} · Σ_{k=1}^{n(t)} λ^k / k!

For independent documents, the Poisson distribution approximates the probability of the disjunction for large n(t), since the independent term noise probability is equal to the sum over the binomial probabilities where at least one of the n(t) document containment events is true:

P_in(t is noisy|c) = Σ_{k=1}^{n(t)} (n(t) choose k) · p^k · (1 - p)^{n(t)-k}

P_in(t is noisy|c) ≈ P_poi(t is noisy|c)
We have defined a frequency-based and a Poisson-based probability of being noisy, where the latter is the limit of the independence-based probability of being noisy. Before we present in the final section the usage of the noise probability for defining the probability of being informative, we emphasise in the next section that the results apply to the collection space as well as to the document space.
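A minimal Python sketch (N and λ are arbitrary illustration values) that evaluates both the exact independence-based noise and the truncated Poisson sum of definition 4; the two values come together as n(t) approaches N, where both tend to 1 - e^{-λ}.

import math

N, lam = 10_000, 5.0
p = lam / N

def p_in(n_t):
    # exact independence-based noise: 1 - (1 - p)^n(t)
    return 1.0 - (1.0 - p) ** n_t

def p_poi(n_t):
    # truncated Poisson sum e^(-lam) * sum_{k=1..n(t)} lam^k/k!,
    # accumulated with a running term to avoid overflow
    total, term = 0.0, 1.0
    for k in range(1, n_t + 1):
        term *= lam / k
        total += term
    return math.exp(-lam) * total

for n_t in (10, 100, 1_000, 5_000, 10_000):
    print(n_t, round(p_in(n_t), 4), round(p_poi(n_t), 4))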
THE COLLECTION SPACE AND THE DOCUMENT SPACE
Consider the dual definitions of retrieval parameters in table 1. We associate a collection space D × T with a collection c, where D is the set of documents and T is the set of terms in the collection. Let N_D := |D| and N_T := |T| be the number of documents and terms, respectively. We consider a document as a subset of T and a term as a subset of D. Let n_T(d) := |{t | d ∈ t}| be the number of terms that occur in the document d, and let n_D(t) := |{d | t ∈ d}| be the number of documents that contain the term t.

In a dual way, we associate a document space L × T with a document d, where L is the set of locations (also referred to as positions; however, we use the letters L and l and not P and p for avoiding confusion with probabilities) and T is the set of terms in the document. The document dimension in a collection space corresponds to the location (position) dimension in a document space.

The definition makes explicit that the classical notion of term frequency of a term in a document (also referred to as the within-document term frequency) actually corresponds to the location frequency of a term in a document.
Table 1: Retrieval parameters (left: collection space; right: document space)

dimensions: documents and terms || locations and terms
document/location frequency: n_D(t, c): number of documents in which term t occurs in collection c || n_L(t, d): number of locations (positions) at which term t occurs in document d
  N_D(c): number of documents in collection c || N_L(d): number of locations (positions) in document d
term frequency: n_T(d, c): number of terms that document d contains in collection c || n_T(l, d): number of terms that location l contains in document d
  N_T(c): number of terms in collection c || N_T(d): number of terms in document d
noise/occurrence: P(t|c) (term noise) || P(t|d) (term occurrence)
containment: P(d|c) (document) || P(l|d) (location)
informativeness: -ln P(t|c) || -ln P(t|d)
conciseness: -ln P(d|c) || -ln P(l|d)
P(informative): ln(P(t|c)) / ln(P(t_min|c)) || ln(P(t|d)) / ln(P(t_min|d))
P(concise): ln(P(d|c)) / ln(P(d_min|c)) || ln(P(l|d)) / ln(P(l_min|d))
For the actual term frequency value, it is common to use the maximal occurrence (number of locations; let lf be the location frequency):

tf(t, d) := lf(t, d) := P_freq(t occurs|d) / P_freq(t_max occurs|d) = n_L(t, d) / n_L(t_max, d)
A further duality is between informativeness and conciseness
(shortness of documents or locations): informativeness
is based on occurrence (noise), conciseness is based on containment
.
We have highlighted in this section the duality between
the collection space and the document space. We concentrate
in this paper on the probability of a term to be noisy
and informative. Those probabilities are defined in the collection
space. However, the results regarding the term noise
and informativeness apply to their dual counterparts: term
occurrence and informativeness in a document. Also, the
results can be applied to the containment of documents and locations.
THE PROBABILITY OF BEING INFORMATIVE
We showed in the previous sections that the disjointness
assumption leads to frequency-based probabilities and that
the independence assumption leads to Poisson probabilities.
In this section, we formulate a frequency-based definition
and a Poisson-based definition of the probability of being
informative and then we compare the two definitions.
Definition 5. The frequency-based probability of being informative:

P_freq(t is informative|c) := -ln(n(t)/N) / -ln(1/N) = -log_N(n(t)/N) = 1 - log_N n(t) = 1 - ln n(t) / ln N
We define the Poisson-based probability of being informative
analogously to the frequency-based probability of being
informative (see definition 5).
Definition 6. The Poisson-based probability of being informative:

P_poi(t is informative|c) := ln(e^{-λ} · Σ_{k=1}^{n(t)} λ^k/k!) / ln(e^{-λ} · λ) = (λ - ln Σ_{k=1}^{n(t)} λ^k/k!) / (λ - ln λ)
For the sum expression, the following limit holds:

lim_{n(t)→∞} Σ_{k=1}^{n(t)} λ^k/k! = e^λ - 1
For λ >> 1, we can alter the noise and informativeness Poisson by starting the sum from 0, since e^λ >> 1. Then, the minimal Poisson informativeness is poisson(0, λ) = e^{-λ}. We obtain a simplified Poisson probability of being informative:

P_poi(t is informative|c) ≈ (λ - ln Σ_{k=0}^{n(t)} λ^k/k!) / λ = 1 - ln(Σ_{k=0}^{n(t)} λ^k/k!) / λ
The computation of the Poisson sum requires an optimisation for large n(t). The implementation for this paper exploits the nature of the Poisson density: the Poisson density yields only values significantly greater than zero in an interval around λ.
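The following Python sketch illustrates this kind of optimisation for the simplified Poisson informativeness (the window width and the example values of λ and n(t) are arbitrary choices, not those of the paper); the sum is evaluated in log space over a window around its dominant terms.

import math

def log_poisson_cumsum(n_t, lam, width=None):
    # log of sum_{k=0}^{n(t)} lam^k/k!, restricted to a window around the
    # dominant terms (the Poisson density is negligible far from lam)
    if width is None:
        width = int(10 * math.sqrt(lam)) + 10
    centre = min(n_t, int(lam))
    lo, hi = max(0, centre - width), min(n_t, centre + width)
    logs = [k * math.log(lam) - math.lgamma(k + 1) for k in range(lo, hi + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(x - m) for x in logs))   # log-sum-exp

def p_poi_informative(n_t, lam):
    # simplified Poisson-based probability of being informative:
    # 1 - ln(sum_{k=0}^{n(t)} lam^k/k!) / lam
    return 1.0 - log_poisson_cumsum(n_t, lam) / lam

lam = 1000.0
for n_t in (100, 900, 1000, 1100, 5000):
    print(n_t, round(p_poi_informative(n_t, lam), 4))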
Consider the illustration of the noise and informativeness definitions in figure 1. The probability functions displayed are summarised in figure 2, where the simplified Poisson is used in the noise and informativeness graphs. The frequency-based noise corresponds to the linear solid curve in the noise figure. With an independence assumption, we obtain the curve in the lower triangle of the noise figure. By changing the parameter p := λ/N of the independence probability, we can lift or lower the independence curve. The noise figure shows the lifting for the value λ := ln N ≈ 9.2. The setting λ = ln N is special in the sense that the frequency-based and the Poisson-based informativeness have the same denominator, namely ln N, and the Poisson sum converges to λ. Whether we can draw more conclusions from this setting is an open question.
[Figure 1: Noise and Informativeness. Two plots against n(t), the number of documents with term t: the probability of being noisy and the probability of being informative, for the frequency-based definition, the independence-based definition with p = 1/N and p = ln(N)/N, and the Poisson-based definition with λ = 1000, λ = 2000 and the two-dimensional Poisson with λ_1 = 1000, λ_2 = 2000.]

Frequency P_freq:
  Noise: n(t)/N (interval: 1/N ≤ P_freq ≤ 1.0)
  Informativeness: ln(n(t)/N) / ln(1/N) (interval: 0.0 ≤ P_freq ≤ 1.0)
Independence P_in:
  Noise: 1 - (1-p)^n(t) (interval: p ≤ P_in < 1 - e^{-λ})
  Informativeness: ln(1 - (1-p)^n(t)) / ln(p) (interval: ln(1 - e^{-λ})/ln(p) < P_in ≤ 1.0)
Poisson P_poi:
  Noise: e^{-λ} Σ_{k=1}^{n(t)} λ^k/k! (interval: e^{-λ}·λ ≤ P_poi < 1 - e^{-λ})
  Informativeness: (λ - ln Σ_{k=1}^{n(t)} λ^k/k!) / (λ - ln λ) (interval: (λ - ln(e^λ - 1))/(λ - ln λ) < P_poi ≤ 1.0)
Poisson P_poi simplified:
  Noise: e^{-λ} Σ_{k=0}^{n(t)} λ^k/k! (interval: e^{-λ} ≤ P_poi < 1.0)
  Informativeness: (λ - ln Σ_{k=0}^{n(t)} λ^k/k!) / λ (interval: 0.0 < P_poi ≤ 1.0)

Figure 2: Probability functions

We can conclude that the lifting is desirable if we know for a collection that terms that occur in relatively few documents are no guarantee for finding relevant documents, i. e. we assume that rare terms are still relatively noisy. Conversely, we could lower the curve when assuming that frequent terms are not too noisy, i. e. that they are still significantly discriminative.
The Poisson probabilities approximate the independence probabilities for large n(t); the approximation is better for larger λ. For n(t) < λ, the noise is zero, whereas for n(t) > λ the noise is one. This radical behaviour can be smoothened by using a multi-dimensional Poisson distribution. Figure 1 shows a Poisson noise based on a two-dimensional Poisson:

poisson(k, λ_1, λ_2) := π · e^{-λ_1} λ_1^k / k! + (1 - π) · e^{-λ_2} λ_2^k / k!

The two-dimensional Poisson shows a plateau between λ_1 = 1000 and λ_2 = 2000; we used here π = 0.5. The idea behind this setting is that terms that occur in less than 1000 documents are considered to be not noisy (i. e. they are informative), that terms between 1000 and 2000 are half noisy, and that terms with more than 2000 are definitely noisy.
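A short Python sketch of this two-dimensional Poisson noise, with the values used for the plateau curve in figure 1 (λ_1 = 1000, λ_2 = 2000, π = 0.5); each Poisson term is evaluated in log space so that large λ does not overflow, and the sum starts at k = 0, which is indistinguishable from starting at k = 1 for such large λ.

import math

def poisson_cdf(n, lam):
    # sum_{k=0}^{n} e^(-lam) lam^k / k!
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(n + 1))

def noise_2dim(n_t, lam1, lam2, pi):
    # mixture of two Poisson sums: pi * P(K1 <= n_t) + (1 - pi) * P(K2 <= n_t)
    return pi * poisson_cdf(n_t, lam1) + (1 - pi) * poisson_cdf(n_t, lam2)

for n_t in (500, 1000, 1500, 2000, 2500, 3000):
    print(n_t, round(noise_2dim(n_t, 1000.0, 2000.0, 0.5), 4))   # plateau near 0.5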
For the informativeness, we observe that the radical behaviour of Poisson is preserved. The plateau here is approximately at 1/6, and it is important to realise that this plateau is not obtained with the multi-dimensional Poisson noise using π = 0.5. The logarithm of the noise is normalised by the logarithm of a very small number, namely 0.5 · e^{-1000} + 0.5 · e^{-2000}. That is why the informativeness will be only close to one for very little noise, whereas for a bit of noise, informativeness will drop to zero. This effect can be controlled by using small values for π such that the noise in the interval [λ_1; λ_2] is still very little. The setting π = e^{-2000/6} leads to noise values of approximately e^{-2000/6} in the interval [λ_1; λ_2]; the logarithms lead then to 1/6 for the informativeness.
The independence-based and frequency-based informativeness functions do not differ as much as the noise functions do. However, for the independence-based probability of being informative, we can control the average informativeness by the definition p := λ/N, whereas the control on the frequency-based informativeness is limited, as we address next.
For the frequency-based idf, the gradient is monotonically decreasing and we obtain for different collections the same distances of idf-values, i. e. the parameter N does not affect the distance. For an illustration, consider the distance between the value idf(t_{n+1}) of a term t_{n+1} that occurs in n+1 documents, and the value idf(t_n) of a term t_n that occurs in n documents:

idf(t_{n+1}) - idf(t_n) = ln(n/(n+1))

The first three values of the distance function are:

idf(t_2) - idf(t_1) = ln(1/(1+1)) ≈ -0.69
idf(t_3) - idf(t_2) = ln(2/(2+1)) ≈ -0.41
idf(t_4) - idf(t_3) = ln(3/(3+1)) ≈ -0.29
For the Poisson-based informativeness, the gradient decreases first slowly for small n(t), then rapidly near n(t) ≈ λ, and then it grows again slowly for large n(t).
In conclusion, we have seen that the Poisson-based definition provides more control and parameter possibilities than the frequency-based definition does. Whereas more control and more parameters promise to be positive for the personalisation of retrieval systems, they bear at the same time the danger of just too many parameters. The framework presented
in this paper raises the awareness about the probabilistic
and information-theoretic meanings of the parameters. The
parallel definitions of the frequency-based probability and
the Poisson-based probability of being informative made
the underlying assumptions explicit. The frequency-based
probability can be explained by binary occurrence, constant
containment and disjointness of documents. Independence
of documents leads to Poisson, where we have to be aware
that Poisson approximates the probability of a disjunction
for a large number of events, but not for a small number.
This theoretical result explains why experimental investigations
on Poisson (see [7]) show that a Poisson estimation
does work better for frequent (bad, noisy) terms than for
rare (good, informative) terms.
In addition to the collection-wide parameter setting, the
framework presented here allows for document-dependent
settings, as explained for the independence probability. This
is in particular interesting for heterogeneous and structured
collections, since documents are different in nature (size,
quality, root document, sub document), and therefore, binary
occurrence and constant containment are less appropriate
than in relatively homogeneous collections.
SUMMARY
The definition of the probability of being informative transforms
the informative interpretation of the idf into a probabilistic
interpretation, and we can use the idf -based probability
in probabilistic retrieval approaches. We showed that
the classical definition of the noise (document frequency) in
the inverse document frequency can be explained by three
assumptions: the term within-document occurrence probability
is binary, the document containment probability is
constant, and the document containment events are disjoint.
By explicitly and mathematically formulating the assumptions
, we showed that the classical definition of idf does not
take into account parameters such as the different nature
(size, quality, structure, etc.) of documents in a collection,
or the different nature of terms (coverage, importance, position
, etc.) in a document. We discussed that the absence
of those parameters is compensated by a leverage effect of
the within-document term occurrence probability and the
document containment probability.
By applying an independence rather than a disjointness assumption
for the document containment, we could establish
a link between the noise probability (term occurrence
in a collection), information theory and Poisson. From the
frequency-based and the Poisson-based probabilities of being
noisy, we derived the frequency-based and Poisson-based
probabilities of being informative. The frequency-based probability
is relatively smooth whereas the Poisson probability
is radical in distinguishing between noisy or not noisy, and
informative or not informative, respectively. We showed how
to smoothen the radical behaviour of Poisson with a multi-dimensional
Poisson.
The explicit and mathematical formulation of idf - and
Poisson-assumptions is the main result of this paper. Also,
the paper emphasises the duality of idf and tf , collection
space and document space, respectively. Thus, the result
applies to term occurrence and document containment in a
collection, and it applies to term occurrence and position
containment in a document. This theoretical framework is
useful for understanding and deciding the parameter estimation
and combination in probabilistic retrieval models. The
links between independence-based noise as document frequency,
probabilistic interpretation of idf , information theory and
Poisson described in this paper may lead to variable probabilistic
idf and tf definitions and combinations as required
in advanced and personalised information retrieval systems.
Acknowledgment: I would like to thank Mounia Lalmas,
Gabriella Kazai and Theodora Tsikrika for their comments
on the, as they said, "heavy" pieces. My thanks also go to the
meta-reviewer who advised me to improve the presentation
to make it less "formidable" and more accessible for those
"without a theoretic bent".
This work was funded by a
research fellowship from Queen Mary University of London.
REFERENCES
[1] A. Aizawa. An information-theoretic perspective of tf-idf measures. Information Processing and Management, 39:45-65, January 2003.
[2] G. Amati and C. J. Rijsbergen. Term frequency normalization via Pareto distributions. In 24th BCS-IRSG European Colloquium on IR Research, Glasgow, Scotland, 2002.
[3] R. K. Belew. Finding out about. Cambridge University Press, 2000.
[4] A. Bookstein and D. Swanson. Probabilistic models for automatic indexing. Journal of the American Society for Information Science, 25:312-318, 1974.
[5] I. N. Bronstein. Taschenbuch der Mathematik. Harri Deutsch, Thun, Frankfurt am Main, 1987.
[6] K. Church and W. Gale. Poisson mixtures. Natural Language Engineering, 1(2):163-190, 1995.
[7] K. W. Church and W. A. Gale. Inverse document frequency: A measure of deviations from Poisson. In Third Workshop on Very Large Corpora, ACL Anthology, 1995.
[8] T. Lafouge and C. Michel. Links between information construction and information gain: Entropy and bibliometric distribution. Journal of Information Science, 27(1):39-49, 2001.
[9] E. Margulis. N-Poisson document modelling. In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 177-189, 1992.
[10] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 232-241, London, et al., 1994. Springer-Verlag.
[11] S. Wong and Y. Yao. An information-theoretic measure of term specificity. Journal of the American Society for Information Science, 43(1):54-61, 1992.
[12] S. Wong and Y. Yao. On modeling information retrieval with probabilistic inference. ACM Transactions on Information Systems, 13(1):38-68, 1995.
| inverse document frequency (idf);independent and disjoint documents;computer science;information search;probability theories;Poisson based probability;Term frequency;probabilistic retrieval models;Probability of being informative;Independent documents;Disjoint documents;Normalisation;relevance-based ranking of retrieved objects;information theory;Noise probability;frequency-based term noise probability;Poisson-based probability of being informative;Assumptions;Collection space;Poisson distribution;Probabilistic information retrieval;Document space;document retrieval;entropy;Frequency-based probability;Document frequency;Inverse document frequency;Information theory;independence assumption;inverse document frequency;maximal informative signal |
100 | High Performance Crawling System | In the present paper, we will describe the design and implementation of a real-time distributed system of Web crawling running on a cluster of machines. The system crawls several thousands of pages every second, includes a high-performance fault manager, is platform independent and is able to adapt transparently to a wide range of configurations without incurring additional hardware expenditure. We will then provide details of the system architecture and describe the technical choices for very high performance crawling. Finally, we will discuss the experimental results obtained, comparing them with other documented systems. | INTRODUCTION
With the World Wide Web containing the vast amount of information that it does (several thousand in 1993, 3 billion today) and the fact that it is ever expanding, we need a way to find the right information (multimedia or textual). We need a way to access information on the specific subjects that we require. To solve the problems above, several programs and algorithms were designed that index the Web; these various designs are known as search engines, spiders, crawlers, worms or knowledge robots. In its simplest terms, the Web can be seen as a graph: the pages are the nodes and the links are the arcs. What makes this so difficult is the vast amount of data that we have to handle, together with the fact that the World Wide Web is constantly growing and that people are constantly updating the content of their web pages.
Any high-performance crawling system should offer at
least the following two features.
Firstly, it needs to
be equipped with an intelligent navigation strategy, i.e.
enabling it to make decisions regarding the choice of subsequent
actions to be taken (pages to be downloaded etc).
Secondly, its supporting hardware and software architecture
should be optimized to crawl large quantities of documents
per unit of time (generally per second). To this we may add
fault tolerance (machine crash, network failure etc.) and
considerations of Web server resources.
Recently, we have seen some interest in these two fields. Studies on the first point include crawling strategies for important pages [9, 17], topic-specific document downloading [5, 6, 18, 10], page recrawling to optimize the overall refresh frequency of a Web archive [8, 7], or scheduling the downloading activity according to time [22]. However, little research has been devoted to the second point, which is very difficult to implement [20, 13]. We will focus on this latter point in the rest of this paper.
Indeed, only a few crawlers are equipped with an optimized
scalable crawling system, yet details of their internal
workings often remain obscure (the majority being proprietary
solutions).
The only system to have been given a
fairly in-depth description in existing literature is Mercator
by Heydon and Najork of DEC/Compaq [13] used in the
AltaVista search engine (some details also exist on the first
version of the Google [3] and Internet Archive [4] robots).
Most recent studies on crawling strategy fail to deal with
these features, contenting themselves with the solution of
minor issues such as the calculation of the number of pages
to be downloaded in order to maximize/minimize some
functional objective. This may be acceptable in the case
of small applications, but for "soft" real-time applications the system must deal with a much larger number of constraints.
We should also point out that little academic research
is concerned with high performance search engines, as
compared with their commercial counterparts (with the
exception of the WebBase project [14] at Stanford).
In the present paper, we will describe a very high
availability, optimized and distributed crawling system.
We will use the system on what is known as breadth-first
crawling, though this may be easily adapted to other
navigation strategies. We will first focus on input/output,
on management of network traffic and robustness when
changing scale. We will also discuss download policies in
terms of speed regulation, fault management by supervisors
and the introduction/suppression of machine nodes without
system restart during a crawl.
Our system was designed within the experimental framework
of the Dépôt Légal du Web Français (French Web Legal Deposit). This consists of archiving only multimedia
documents in French available on line, indexing them and
providing ways for these archives to be consulted. Legal
deposit requires a real crawling strategy in order to ensure
site continuity over time.
The notion of registration is
closely linked to that of archiving, which requires a suitable
strategy to be useful. In the course of our discussion, we
will therefore analyze the implication and impact of this
experimentation for system construction.
STATE OF THE ART
In order to set our work in this field in context, listed
below are definitions of services that should be considered
the minimum requirements for any large-scale crawling
system.
Flexibility: as mentioned above, with some minor
adjustments our system should be suitable for various
scenarios. However, it is important to remember that
crawling is established within a specific framework:
namely, Web legal deposit.
High Performance: the system needs to be scalable
with a minimum of one thousand pages/second and
extending up to millions of pages for each run on
low cost hardware. Note that here, the quality and
efficiency of disk access are crucial to maintaining high
performance.
Fault Tolerance: this may cover various aspects. As
the system interacts with several servers at once,
specific problems emerge.
First, it should at least
be able to process invalid HTML code, deal with
unexpected Web server behavior, and select good
communication protocols etc. The goal here is to avoid
this type of problem and, by force of circumstance, to
be able to ignore such problems completely. Second,
crawling processes may take days or weeks, and it is
imperative that the system can handle failure, stopped
processes or interruptions in network services, keeping
data loss to a minimum. Finally, the system should
be persistent, which means periodically switching large
data structures from memory to the disk (e.g. restart
after failure).
Maintainability and Configurability: an appropriate
interface is necessary for monitoring the crawling
process, including download speed, statistics on the
pages and amounts of data stored. In online mode, the
administrator may adjust the speed of a given crawler,
add or delete processes, stop the system, add or delete
system nodes and supply the black list of domains not
to be visited, etc.
2.2 General Crawling Strategies
There are many highly accomplished techniques in terms
of Web crawling strategy. We will describe the most relevant
of these here.
Breadth-first Crawling: in order to build a wide Web
archive like that of the Internet Archive [15], a crawl
is carried out from a set of Web pages (initial URLs
or seeds).
A breadth-first exploration is launched
by following hypertext links leading to those pages
directly connected with this initial set. In fact, Web
sites are not really browsed breadth-first and various
restrictions may apply, e.g. limiting crawling processes
to within a site, or downloading the pages deemed most interesting first (see [9] for the heuristics that tend to find the most important pages first and [17] for experimental results proving that breadth-first crawling allows the swift retrieval of pages with a high PageRank).
Repetitive Crawling: once pages have been crawled,
some systems require the process to be repeated
periodically so that indexes are kept updated. In the
most basic case, this may be achieved by launching
a second crawl in parallel.
A variety of heuristics
exist to overcome this problem:
for example, by
frequently relaunching the crawling process of pages,
sites or domains considered important to the detriment
of others.
A good crawling strategy is crucial for
maintaining a constantly updated index list. Recent
studies by Cho and Garcia-Molina [8, 7] have focused
on optimizing the update frequency of crawls by using
the history of changes recorded on each site.
Targeted Crawling: more specialized search engines
use crawling process heuristics in order to target a
certain type of page, e.g. pages on a specific topic or
in a particular language, images, mp3 files or scientific
papers. In addition to these heuristics, more generic
approaches have been suggested. They are based on
the analysis of the structures of hypertext links [6,
5] and techniques of learning [9, 18]: the objective
here being to retrieve the greatest number of pages
relating to a particular subject by using the minimum
bandwidth. Most of the studies cited in this category
do not use high performance crawlers, yet succeed in
producing acceptable results.
Random Walks and Sampling: some studies have
focused on the effect of random walks on Web graphs
or modified versions of these graphs via sampling in
order to estimate the size of documents on line [1, 12,
11].
Deep Web Crawling: a lot of data accessible via
the Web are currently contained in databases and
may only be downloaded through the medium of
appropriate requests or forms. Recently, this often-neglected
but fascinating problem has been the focus
of new interest. The Deep Web is the name given to
the Web containing this category of data [9].
Lastly, we should point out the acknowledged differences
that exist between these scenarios. For example,
a breadth-first search needs to keep track of all pages
already crawled.
An analysis of links should use
structures of additional data to represent the graph
of the sites in question, and a system of classifiers in
order to assess the pages' relevancy [6, 5]. However,
some tasks are common to all scenarios, such as
respecting robot exclusion files (robots.txt), crawling
speed, resolution of domain names . . .
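For instance, respecting robot exclusion files can be handled with the Python standard library; a minimal sketch (the URLs are placeholders):

from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.org/robots.txt")   # placeholder site
rp.read()                                          # fetch and parse robots.txt

if rp.can_fetch("MyCrawler", "http://www.example.org/some/page.html"):
    print("allowed to download")
else:
    print("excluded by robots.txt")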
In the early 1990s, several companies claimed that their
search engines were able to provide complete Web coverage.
It is now clear that only partial coverage is possible at
present.
Lawrence and Giles [16] carried out two experiments
in order to measure coverage performance of data
established by crawlers and of their updates. They adopted
an approach known as overlap analysis to estimate the size
of the Web that may be indexed (See also Bharat and Broder
1998 on the same subject). Let W be the total set of Web
pages and W_a ⊆ W and W_b ⊆ W the pages downloaded by two different crawlers a and b. What is the size of W_a and W_b as compared with W? Let us assume that uniform samples of Web pages may be taken and their membership of both sets tested. Let P(W_a) and P(W_b) be the probability that a page is downloaded by a or b, respectively. We know that:

P(W_a ∩ W_b | W_b) = |W_a ∩ W_b| / |W_b|     (1)

Now, if these two crawling processes are assumed to be independent, the left side of equation 1 may be reduced to P(W_a), that is, data coverage by crawler a. This may be easily obtained by the intersection size of the two crawling processes. However, an exact calculation of this quantity is only possible if we do not really know the documents crawled. Lawrence and Giles used a set of controlled data of 575 requests to provide page samples and count the number of times that the two crawlers retrieved the same pages. By taking the hypothesis that the result P(W_a) is correct, we may estimate the size of the Web as |W_a|/P(W_a). This
approach has shown that the Web contained at least 320
million pages in 1997 and that only 60% was covered by the
six major search engines of that time. It is also interesting
to note that a single search engine would have covered only
1/3 of the Web. As this approach is based on observation, it
may reflect a visible Web estimation, excluding for instance
pages behind forms, databases etc. More recent experiments
assert that the Web contains several billion pages.
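The overlap estimate itself takes only a few lines; a toy Python sketch of this calculation, with two invented crawl sets standing in for W_a and W_b:

# estimate coverage and total size from two supposedly independent crawls
crawl_a = {"p1", "p2", "p3", "p4", "p5", "p6"}
crawl_b = {"p4", "p5", "p6", "p7", "p8"}

overlap = crawl_a & crawl_b
p_a = len(overlap) / len(crawl_b)          # P(W_a | W_b), i.e. coverage of crawler a
estimated_web_size = len(crawl_a) / p_a    # |W_a| / P(W_a)

print(p_a, estimated_web_size)             # 0.6 and 10.0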
2.2.1 Selective Crawling
As demonstrated above, a single crawler cannot archive
the whole Web. The fact is that the time required to carry
out the complete crawling process is very long, and impossible
given the technology currently available. Furthermore,
crawling and indexing very large amounts of data implies
great problems of scalability, and consequently entails not
inconsiderable costs of hardware and maintenance.
For
maximum optimization, a crawling system should be able
to recognize relevant sites and pages, and restrict itself to
downloading within a limited time.
A document or Web page's relevancy may be officially recognized in various ways. The idea of selective crawling may be introduced intuitively by associating each URL u with a score calculation function s^(ξ)(u) respecting a relevancy criterion ξ and parameters θ. In the most basic case, we may assume a Boolean relevancy function, i. e. s(u) = 1 if the document designated by u is relevant and s(u) = 0 if not. More generally, we may think of s(d) as a function with real values, such as a conditional probability that a document belongs to a certain category according to its content. In all cases, we should point out that the score calculation function depends only on the URL and ξ, and not on the time or state of the crawler.
A general approach for the construction of a selective crawler consists of changing the URL insertion and extraction policy in the queue Q of the crawler. Let us assume that the URLs are sorted in the order corresponding to the value retrieved by s(u). In this case, we obtain the best-first strategy (see [19]), which consists of downloading URLs with the best scores first. If s(u) provides a good relevancy model, we may hope that the search process will be guided towards the best areas of the Web.

Various studies have been carried out in this direction: for example, limiting the search depth in a site by specifying that pages are no longer relevant after a certain depth. This amounts to the following equation:

s^(depth)(u) = 1, if |root(u) → u| < δ; 0, otherwise     (2)

where root(u) is the root of the site containing u. The interest of this approach lies in the fact that maximizing the search breadth may make it easier for the end-user to retrieve the information. Nevertheless, pages that are too deep may be accessed by the user, even if the robot fails to take them into account.

A second possibility is the estimation of a page's popularity. One method of calculating a document's relevancy would relate to the number of backlinks:

s^(backlinks)(u) = 1, if indegree(u) > τ; 0, otherwise     (3)

where τ is a threshold. It is clear that s^(backlinks)(u) may only be calculated if we have a complete site graph (site already downloaded beforehand). In practice, we may take an approximate value and update it incrementally during the crawling process. A derivative of this technique is used in Google's famous PageRank calculation.
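As an illustration of the best-first policy discussed above, a minimal frontier sketch in Python (the score functions mirror equations 2 and 3; the thresholds and seed URLs are invented):

import heapq

DELTA, TAU = 3, 2                 # illustrative depth and backlink thresholds

def s_depth(depth):               # cf. equation (2)
    return 1.0 if depth < DELTA else 0.0

def s_backlinks(indegree):        # cf. equation (3)
    return 1.0 if indegree > TAU else 0.0

frontier = []                     # max-heap emulated with negated scores

def push(url, depth, indegree):
    score = s_depth(depth) + s_backlinks(indegree)
    heapq.heappush(frontier, (-score, url))

push("http://example.org/", 0, 5)
push("http://example.org/a/b/c/d", 4, 1)
push("http://example.org/a", 1, 3)

while frontier:                   # best scores are downloaded first
    neg_score, url = heapq.heappop(frontier)
    print(-neg_score, url)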
OUR APPROACH: THE DOMINOS SYSTEM
As mentioned above, we have divided the system into two
parts: workers and supervisors. All of these processes may
be run on various operating systems (Windows, MacOS X,
Linux, FreeBSD) and may be replicated if need be. The
workers are responsible for processing the URL flow coming
from their supervisors and for executing crawling process
tasks in the strict sense. They also handle the resolution of
domain names by means of their integrated DNS resolver,
and adjust download speed in accordance with node policy.
A worker is a light process in the Erlang sense, acting as
a fault tolerant and highly available HTTP client.
The
process-handling mode in Erlang makes it possible to create
several thousands of workers in parallel.
In our system, communication takes place mainly by sending
asynchronous messages as described in the specifications
for Erlang language. The type of message varies according to
need: character string for short messages and binary format
for long messages (large data structures or files). Disk access
is reduced to a minimum as far as possible and structures
are stored in the real-time Mnesia database (see http://www.erlang.org/doc/r9c/lib/mnesia-4.1.4/doc/html/) that forms a standard part of the Erlang development kit. Mnesia's
features give it a high level of homogeneity during the base's
access, replication and deployment. It is supported by two
table management modules ETS and DETS. ETS allows
tables of values to be managed by random access memory,
while DETS provides a persistent form of management on
the disk. Mnesia's distribution faculty provides an efficient
access solution for distributed data. When a worker moves
from one node to another (code migration), it no longer need
be concerned with the location of the base or data. It simply
has to read and write the information transparently.
1   loop(InternalState) ->                    % Supervisor main
2                                             % loop
3       receive {From,{migrate,Worker,Src,Dest}} ->
4           % Migrate the Worker process from
5           % Src node to Dest node
6           spawn(supervisor,migrate,
7                 [Worker,Src,Dest]),
8           % Infinite loop
9           loop(InternalState);
10
11      {From,{replace,OldPid,NewPid,State}} ->
12          % Add the new worker to
13          % the supervisor state storage
14          NewInternalState =
15              replace(OldPid,NewPid,InternalState),
16          % Infinite loop
17          loop(NewInternalState);
18      ...
19      end.
20
21  migrate(Pid,Src,Dest) ->                  % Migration
22                                            % process
23      % Stop the worker and collect its last state
24      Pid ! {self(), stop},
25      receive
26          {Pid,{stopped,LastState}} ->
27              NewPid = spawn(Dest,worker,proc,
28                             [LastState]),
29              self() ! {self(), {replace,Pid,
30                                 NewPid,LastState}};
31          {Pid,Error} -> ...
32      end.
Listing 1: Process Migration
Listing 1 describes the migration of a worker process from one node Src to another Dest (the character % indicates the beginning of a comment in Erlang). The supervisor receives the migration order for process Pid (line 3). The migration action is not blocking and is performed in a different Erlang process (line 7). The supervisor stops the worker with the identifier Pid (line 24) and awaits the operation result (line 26). It then creates a remote worker in the node Dest with the latest state of the stopped worker (line 28) and updates its internal state (lines 30 and 12).
3.1 Dominos Process
The Dominos system is different from all the other crawling
systems cited above. Like these, Dominos is based on a distributed architecture, but with the difference of being totally dynamic. The system's dynamic nature allows its
architecture to be changed as required. If, for instance, one
of the cluster's nodes requires particular maintenance, all of
the processes on it will migrate from this node to another.
When servicing is over, the processes revert automatically
to their original node. Crawl processes may change pool
so as to reinforce one another if necessary. The addition or
deletion of a node in the cluster is completely transparent in
its execution. Indeed, each new node is created containing a
completely blank system. The first action to be undertaken
is to search for the generic server in order to obtain the
parameters of the part of the system that it is to belong
to. These parameters correspond to a limited view of the
whole system. This enables Dominos to be deployed more
easily, the number of messages exchanged between processes
to be reduced and allows better management of exceptions.
Once the generic server has been identified, binaries are sent
to it and its identity is communicated to the other nodes
concerned.
Dominos Generic Server (GenServer): Erlang process
responsible for managing the process identifiers on the
whole cluster. To ensure easy deployment of Dominos,
it was essential to mask the denominations of the
process identifiers. Otherwise, a minor change in the
names of machines or their IP would have required
complete reorganization of the system.
GenServer
stores globally the identifiers of all processes existing
at a given time.
Dominos RPC Concurrent (cRPC): as its name suggests, this process is responsible for delegating the execution of certain remote functions to other processes. Unlike conventional RPCs, where it is necessary to know the node and the object providing these functions (services), our cRPC completely masks this information. One need only call the function, with no concern for where it is located in the cluster or for the name of the process offering this function. Moreover, each cRPC process is concurrent, and therefore manages all its service requests in parallel. The results of remote functions are governed by two modes: blocking or non-blocking. The calling process may therefore await the reply of the remote function or continue its execution. In the latter case, the reply is sent to its mailbox. For example, no worker knows the process identifier of its own supervisor. In order to identify it, a worker sends a message to the process called supervisor. The cRPC deals with the message and searches the whole cluster for a supervisor process identifier, starting with the local node. The address is therefore resolved without additional network overhead, except where the supervisor does not exist locally.
Dominos Distributed Database (DDB): Erlang process
responsible for Mnesia real-time database management
. It handles the updating of crawled information,
crawling progress and the assignment of URLs to be
downloaded to workers.
It is also responsible for
replicating the base onto the nodes concerned and for
the persistency of data on disk.
Dominos Nodes: a node is the physical representation
of a machine connected (or disconnected as the case
may be) to the cluster. This connection is considered
in the most basic sense of the term, namely a simple
plugging-in (or unplugging) of the network outlet.
Each node clearly reflects the dynamic character of
the Dominos system.
Dominos Group Manager: Erlang process responsible
for controlling the smooth running of its child processes
(supervisor and workers).
Dominos Master-Supervisor Processes: each group
manager has a single master process dealing with the
management of crawling states of progress. It therefore
controls all the slave processes (workers) contained
within it.
Dominos Slave-Worker Processes: workers are the
lowest-level elements in the crawling process.
This
is the very heart of the Web client wrapping the
libCURL.
With Dominos architecture being completely dynamic and
distributed, we may however note the hierarchical character
of processes within a Dominos node. This is the only way to
ensure very high fault tolerance. A group manager that fails
is regenerated by the node on which it depends. A master
process (supervisor) that fails is regenerated by its group
manager. Finally, a worker is regenerated by its supervisor.
As for the node itself, it is controlled by the Dominos kernel
(generally on another remote machine). The following code
describes the regeneration of a worker process in case of
failure.
1   % Activate error handling
2   process_flag(trap_exit, true),
3   ...
4   loop(InternalState) ->                    % Supervisor main loop
5       receive
6           {From,{job,Name,finish}, State} ->
7               % Inform the GenServer that the download is ok
8               ?ServerGen ! {job,Name,finish},
9
10              % Save the new worker state
11              NewInternalState = save_state(From,State,InternalState),
12
13              % Infinite loop
14              loop(NewInternalState);
15          ...
16          {From,Error} ->                   % Worker crash
17              % Get the last operational state before the crash
18              WorkerState = last_state(From,InternalState),
19
20              % Free all allocated resources
21              free_resources(From,InternalState),
22
23              % Create a new worker with the last operational
24              % state of the crashed worker
25              Pid = spawn(worker,proc,[WorkerState]),
26
27              % Add the new worker to the supervisor state
28              % storage
29              NewInternalState = replace(From,Pid,InternalState),
30
31              % Infinite loop
32              loop(NewInternalState)
33      end.
Listing 2: Regeneration of a Worker Process in Case
of Failure
This represents the part of the main loop of the supervisor process dealing with the management of the failure of a worker. As soon as a worker error is received (line 16), the supervisor retrieves the last operational state of the worker that has stopped (line 18), releases all of its allocated resources (line 21) and recreates a new worker process with the operational state of the stopped process (line 25). The supervisor then keeps looping while awaiting new messages (line 32). The loop function call (lines 14 and 32) is tail recursive, thereby guaranteeing that the supervision process runs in constant memory space.
3.2 DNS Resolution
Before contacting a Web server, the worker process needs to
resolve the server's domain name into a valid IP address using
the Domain Name System (DNS). Whereas other systems (Mercator,
Internet Archive) are forced to perform a DNS resolution each
time a new link is identified, this is not necessary with Dominos.
Indeed, in the framework of French Web legal deposit, the sites
to be archived have been identified beforehand, thus requiring
only one DNS resolution per domain name. This considerably
increases crawl speed.
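Since each domain only needs to be resolved once, a worker can simply memorise resolved addresses. The sketch below is our own illustration rather than Dominos code: the module name dns_cache and its functions are hypothetical, and it only combines the standard ETS and inet modules.

-module(dns_cache).
-export([start/0, resolve/1]).

%% Create a named, shared ETS table holding {DomainName, IpAddress} pairs.
start() ->
    ets:new(dns_cache, [named_table, set, public]).

%% Return the IP address of Domain, resolving it at most once.
resolve(Domain) ->
    case ets:lookup(dns_cache, Domain) of
        [{_, Ip}] ->
            {ok, Ip};                              % already resolved
        [] ->
            case inet:getaddr(Domain, inet) of
                {ok, Ip} ->
                    ets:insert(dns_cache, {Domain, Ip}),
                    {ok, Ip};
                {error, Reason} ->
                    {error, Reason}
            end
    end.

A worker would call resolve/1 before opening an HTTP connection; after the first call for a given domain, no further DNS traffic is generated.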
The sites concerned include all online newspapers, such as
LeMonde (http://www.lemonde.fr/), LeFigaro (http://www.lefigaro.fr/) . . . ,
and online television/radio such as TF1 (http://www.tf1.fr/),
M6 (http://www.m6.fr/) . . .
4. DETAILS OF IMPLEMENTATION
The workers are the components responsible for physically
crawling on-line contents. They provide a specialized wrapper
around the libCURL library (available at http://curl.haxx.se/libcurl/),
which represents the heart of the HTTP client. Each worker is
interfaced to libCURL by a C driver (shared library). As the system
seeks maximum network accessibility (communication protocol
support), libCURL appeared to be the most judicious choice when
compared with other available libraries (see
http://curl.haxx.se/libcurl/competitors.html).
The protocols and features supported include FTP, FTPS, HTTP,
HTTPS, LDAP, certificates, proxies, tunneling, etc.
Erlang's portability was a further factor favoring the
choice of libCURL. Indeed, libCURL is available for various
architectures:
Solaris, BSD, Linux, HPUX, IRIX, AIX,
Windows, Mac OS X, OpenVMS etc. Furthermore, it is
fast, thread-safe and IPv6 compatible.
This choice also opens up a wide variety of functions.
Redirections are accounted for and powerful filtering is
possible according to the type of content downloaded,
headers, and size (partial storage on RAM or disk depending
on the document's size).
4.2 Document Fingerprint
For each download, the worker extracts the hypertext links
included in the HTML document and computes a fingerprint
(signature operation). A fast 256-bit HAVAL fingerprint is
calculated over the document's content itself so as to identify
documents with identical contents (e.g., mirror sites). This
technique is not new and has already been used in Mercator [13].
It allows redundancies to be eliminated from the archive.
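As an illustration of this step, fingerprinting amounts to hashing the downloaded content and checking the digest against those already archived. The sketch below is ours, not Dominos code: Erlang's crypto module does not provide HAVAL, so SHA-256 (also a 256-bit digest) stands in for it, and the module and function names are hypothetical.

-module(fingerprint).
-export([digest/1, is_duplicate/2]).

%% 256-bit digest of a document body (SHA-256 here, standing in for
%% the 256-bit HAVAL digest used by Dominos).
digest(Content) when is_binary(Content) ->
    crypto:hash(sha256, Content).

%% A document is considered redundant if its digest has already been
%% archived; SeenDigests is a set of previously computed digests.
is_duplicate(Content, SeenDigests) ->
    sets:is_element(digest(Content), SeenDigests).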
4.3 URL Extraction and Normalization
Unlike other systems that use regular expression libraries such as
PCRE (available at http://www.pcre.org/) for URL extraction, we
have opted for the Flex tool, which generates a markedly faster
parser. Flex was compiled using a 256Kb buffer in which all table
compression options were activated during parsing
("-8 -f -Cf -Ca -Cr -i"). Our current parser analyzes around 3,000
pages/second for a single worker, for an average of 49Kb per page.
According to [20], a URL extraction speed of 300 pages/second
may generate a list of more than 2,000 URLs on average.
A naive representation of structures in the memory may
soon saturate the system.
Various solutions have been proposed to alleviate this
problem.
The Internet Archive [4] crawler uses Bloom
filters in random access memory. This makes it possible
to have a compact representation of links retrieved, but also
generates errors (false-positive), i.e. certain pages are never
downloaded as they create collisions with other pages in the
Bloom filter. Compression without loss may reduce the size
of URLs to below 10Kb [2, 21], but this remains insufficient
in the case of large-scale crawls. A more ingenious approach
is to use persistent structures on disk coupled with a cache
as in Mercator [13].
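For comparison only, the Bloom filter idea used by the Internet Archive crawler can be sketched in a few lines of Erlang; a false positive occurs when every probed bit has already been set by other URLs. The filter size and the two hash probes derived from erlang:phash2/2 are illustrative choices of ours, not a description of the Internet Archive implementation.

-module(bloom).
-export([new/1, add/2, member/2]).

%% A Bloom filter kept as a bitstring of Size bits, probed with two
%% hash positions derived from erlang:phash2/2.
new(Size) when Size > 0 ->
    {Size, <<0:Size>>}.

probes(Url, Size) ->
    [erlang:phash2({1, Url}, Size), erlang:phash2({2, Url}, Size)].

set_bit({Size, Bits}, I) ->
    <<Before:I/bits, _:1, After/bits>> = Bits,
    {Size, <<Before/bits, 1:1, After/bits>>}.

bit_set({_Size, Bits}, I) ->
    <<_:I/bits, Bit:1, _/bits>> = Bits,
    Bit =:= 1.

%% Record a URL: set both probed bit positions.
add(Url, {Size, _} = Filter) ->
    lists:foldl(fun(I, F) -> set_bit(F, I) end, Filter, probes(Url, Size)).

%% Membership test: may wrongly answer true (false positive) when all
%% probed bits were set by other URLs, but never wrongly answers false.
member(Url, {Size, _} = Filter) ->
    lists:all(fun(I) -> bit_set(Filter, I) end, probes(Url, Size)).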
4.4 URL Caching
In order to speed up processing, we have developed a scalable
cache structure for the lookup and storage of URLs already
archived. Figure 1 describes how such a cache works:
Figure 1: Scalable Cache (links found by a worker are filtered through its local cache: a JudyL-Array of 256 buckets is indexed by the CRC of the URL, each bucket pointing to a JudySL-Array that maps the URL key to its hit-count value; already-seen links are rejected).
The cache is available at the level of each worker. It acts as a
filter on the URLs found and blocks those already encountered.
The cache needs to be scalable in order to cope with increasing
loads. A quick implementation using a non-reversible hash function
such as HAVAL, TIGER, SHA1, GOST, MD5 or RIPEMD would be fatal to
the system's scalability: although these functions ensure some
degree of uniqueness in fingerprint construction, they are too slow
to be acceptable here. We cannot afford any latency on URL lookup
or insertion once the cache exceeds a certain size (over 10^7
key-value pairs on average). This is why we have focused on the
construction of a generic cache that allows key-value insertion and
lookup in a scalable manner.
The Judy-Array API (available at http://judy.sourceforge.net/)
enabled us to achieve this objective. Without going into detail
about Judy-Arrays (see their site for more information), our cache
is a coherent coupling between a JudyL-Array and N JudySL-Arrays.
The JudyL-Array represents a hash table of N = 2^8 or N = 2^16
buckets able to fit into the internal cache of the CPU. It is used
to store "key - numeric value" pairs where the key is a CRC of the
URL and the value is a pointer to a JudySL-Array. The second level,
the JudySL-Array, is a "key - compressed character string value"
type of hash, in which the key is the URL identifier and the value
is the number of times that the URL has been seen. This cache
construction is completely scalable and makes it possible to have
sub-linear response times, or linear in the worst case (see the
Judy-Array site for an in-depth analysis of their performance). In
the section on experimentation (section 5) we will see the results
obtained with this type of construction.
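Independently of the Judy-Array API, the coupling described above can be expressed with ordinary Erlang maps: an outer level indexed by a CRC of the URL (the role of the JudyL-Array) and, per bucket, a map from the URL itself to its hit count (the role of the JudySL-Array). This is a simplified sketch of ours; it keeps the structure but none of the compression and CPU-cache properties that motivated the Judy-Array choice.

-module(url_cache).
-export([new/0, seen/2]).

%% Outer level: CRC-32 of the URL -> bucket; bucket: URL -> hit count.
new() ->
    #{}.

%% Returns {AlreadySeen, UpdatedCache}; the URL is recorded in any case.
seen(Url, Cache) ->
    Key = erlang:crc32(Url),
    Bucket = maps:get(Key, Cache, #{}),
    Count = maps:get(Url, Bucket, 0),
    {Count > 0, Cache#{Key => Bucket#{Url => Count + 1}}}.

A worker would call seen/2 on every extracted link and only keep the URLs for which the first element of the result is false.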
4.5 Limiting Disk Access
Our aim here is to eliminate random disk access completely. One
simple idea, used in [20], is periodically to switch structures
requiring much memory over onto disk. For example, random access
memory can be used to keep only those URLs found most recently or
most frequently, in order to speed up comparisons. This requires no
additional development and is what we have decided to use. The
persistency of data on disk depends on their size DS in memory and
on their age DA. The data in memory are distributed transparently
via Mnesia, which is specially designed for this kind of situation.
Data may be replicated ({ram_copies, [Nodes]}, {disc_copies, [Nodes]})
or fragmented ({frag_properties, ...}) on the nodes in question.
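As a hedged illustration of such a configuration (the table name, record fields and node lists below are ours, not those of Dominos), a crawl frontier table could be declared with RAM copies on the crawling nodes and disc copies on the storage nodes; a {frag_properties, [{node_pool, Nodes}, {n_fragments, N}]} option would additionally fragment it.

-module(frontier).
-export([create_table/2]).

-record(url_entry, {url, status, last_seen}).

%% Illustrative Mnesia table: kept in RAM on the crawl nodes and
%% persisted on disc on the storage nodes.
create_table(CrawlNodes, StorageNodes) ->
    mnesia:create_table(url_entry,
        [{attributes, record_info(fields, url_entry)},
         {ram_copies, CrawlNodes},
         {disc_copies, StorageNodes}]).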
According to [20], there are on average 8 non-duplicated
hypertext links per page downloaded.
This means that
the number of pages retrieved and not yet archived is
considerably increased.
After archiving 20 million pages,
over 100 million URLs would still be waiting.
This has
various repercussions, as newly-discovered URLs will be
crawled only several days, or even weeks, later. At this rate, the
freshness of the archived data is directly affected.
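As a rough check of these orders of magnitude (our own arithmetic, not a figure taken from [20]):

20,000,000 pages x 8 non-duplicated links per page = 160,000,000 URLs discovered,

so even after subtracting the 20,000,000 pages already archived and allowing for links repeated across pages, well over 100,000,000 URLs remain in the queue, which matches the figure quoted above.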
4.6 High Availability
In order to grasp the very notion of High Availability, we first
need to consider the difference between a system's reliability and
its availability. Reliability is an attribute that makes it possible
to measure service continuity when no failure occurs.
Manufacturers generally provide a statistical estimation of this
value for their equipment in the form of the MTBF (Mean Time
Between Failures). A high MTBF provides a valuable indication of a
component's ability to avoid overly frequent failure.
In the case of a complex system (that can be broken
down into hardware or software parts), we talk about MTTF
(Mean Time To Failure).
This denotes the average time
elapsed until service stops as the result of failure in a
component or software.
The attribute of availability is more difficult to calculate
as it includes a system's ability to react correctly in case of
failure in order to restart service as quickly as possible.
It is therefore necessary to quantify the time interval during
which service is unavailable before being re-established:
the acronym MTTR (Mean Time To Repair) is used to
represent this value.
The formula used to calculate the rate of a system's
availability is as follows:
availability = MTTF / (MTTF + MTTR)          (4)
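As a worked example, with figures chosen purely for illustration: a service that fails on average once every 1,000 hours (MTTF = 1,000 h) and takes one hour to restore (MTTR = 1 h) has

availability = 1000 / (1000 + 1) ≈ 0.999,

i.e. roughly 8.8 hours of cumulated downtime per year, which sits exactly at the "three nines" borderline discussed below.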
A system aiming at a high level of availability should therefore
have either a high MTTF or a low MTTR.
Another more practical approach consists in measuring
the time period during which service is down in order to
evaluate the level of availability. This is the method most
frequently adopted, even if it fails to take account of the
frequency of failure, focusing rather on its duration.
Calculation is usually based on a calendar year.
The
higher the percentage of service availability, the nearer it
comes to High Availability.
It is fairly easy to qualify the level of High Availability of a
service from its cumulated downtime, by using the standard
principle of "nines" (below three nines, we are no longer talking
about High Availability, but merely availability). In order
to provide an estimation of Dominos' High Availability, we
carried out performance tests by fault injection. It is clear
that a more accurate way of measuring this criterion would
be to let the system run for a whole year as explained above.
However, time constraints led us to adopt this solution. Our
injector consists of placing pieces of faulty code in each part of
the system and then measuring the time required for the system to
make the service available again. Once again, Erlang has
proved to be an excellent choice for the setting up of these
regression tests. The table below shows the average time
required by Dominos to respond to these cases of service
unavailability.
Table 1 clearly shows Dominos' High Availability.

Service        Error                MTTR (microsec)
GenServer      10^3 bad match       320
cRPC           10^3 bad match       70
DDB            10^7 tuples          9 x 10^6
Node           10^3 bad match       250
Supervisor     10^3 bad match       60
Worker         10^3 bad match       115

Table 1: MTTR Dominos

We see that for 10^3 bad-match errors, the system resumes service
virtually instantaneously. The DDB was tested on 10^7 tuples in
random access memory and resumed service after approximately
9 seconds.
This corresponds to an excellent MTTR, given that the injections
were made on a PIII 966 MHz with 512 Mb of RAM. From these results,
we may label our system as High Availability, as opposed to other
architectures in which High Availability only means that the
failure of one component does not affect the others, while
restarting the failed component still requires manual intervention
every time.
5. EXPERIMENTATION
This section describes Dominos' experimental results, obtained on
5 DELL machines:
nico: Intel Pentium 4 - 1.6 GHz, 256 Mb RAM. Crawl node
(supervisor, workers). Activates a local cRPC.
zico: Intel Pentium 4 - 1.6 GHz, 256 Mb RAM. Crawl node
(supervisor, workers). Activates a local cRPC.
chopin: Intel Pentium 3 - 966 MHz, 512 Mb RAM. Main node hosting
the ServerGen and the DB. Also handles crawling (supervisor,
workers). Activates a local cRPC.
gao: Intel Pentium 3 - 500 MHz, 256 Mb RAM. Node for DB
fragmentation. Activates a local cRPC.
margo: Intel Pentium 2 - 333 MHz, 256 Mb RAM. Node for DB
fragmentation. Activates a local cRPC.
Machines chopin, gao and margo are not dedicated solely
to crawling and are used as everyday workstations. Disk
size is not taken into account as no data were actually
stored during these tests.
Everything was therefore carried
out using random access memory with a network of
100 Mb/second.
Dominos performed 25,116,487 HTTP
requests after 9 hours of crawling with an average of
816 documents/second for 49Kb per document.
Three
nodes (nico, zico and chopin) were used in crawling, each
having 400 workers.
We restricted ourselves to a total
of 1,200 workers, due to problems generated by Dominos
at intranet level.
The firewall set up to filter access
is considerably detrimental to performance because of its
inability to keep up with the load imposed by Dominos.
Third-party tests have shown that peaks of only 4,000
HTTP requests/second cause the immediate collapse of the
firewall. The firewall is not the only limiting factor, as the
same tests have shown the incapacity of Web servers such
as Apache2, Caudium or Jigsaw to withstand such loads
(see http://www.sics.se/~joe/apachevsyaws.html). Figure 2
(left part) shows the average URL extraction time per document
crawled using a single worker. The abscissa (x) axis represents the
number of documents processed, and the ordinate (y) axis gives the
corresponding extraction time in microseconds. In the right-hand
part, the abscissa axis represents the same quantity, though this
time in terms of data volume (Mb). We can see a high level of
parsing, reaching an average of 3,000 pages/second at a speed of
70 Mb/second.
Figure 2: Link Extraction. Left: average number of parsed documents (time in microseconds vs. number of documents, curve PD). Right: average size of parsed documents (time in microseconds vs. document size in Mb, curve PDS).
In Figure 3 we see that URL normalization is as efficient as
extraction in terms of speed. The abscissa axis at the top
(respectively at the bottom) represents the number of documents
processed per normalization phase (respectively the quantity of
documents in terms of volume). Each worker normalizes on average
1,000 documents/second, which is equivalent to 37,000 URLs/second
at a speed of 40 Mb/second. Finally, the URL cache structure
ensures a high degree of scalability (Figure 3). The abscissa axis
of the corresponding plot represents the number of key-values
inserted or retrieved. The curve is very close to a step function
due to key compression in the Judy-Array: after an initial
increase, insertion/retrieval time plateaus in bands of 100,000
key-values. We should however point out that URL extraction and
normalization also make use of this type of cache so as to avoid
processing a URL already encountered.
Figure 3: URL Normalization and Cache Performance. Panels: average number of normalized documents (AD, time in microseconds vs. documents), average number of normalized URLs (AU, time vs. URLs), average size of normalized documents (ADS, time vs. document size in Mb), and scalable cache insertion vs. retrieval (time in microseconds vs. number of key-values).
6. CONCLUSION
In the present paper, we have introduced a high-availability
crawling system called Dominos. This system has been created in
the framework of experimentation for French Web legal deposit
carried out at the Institut National de l'Audiovisuel (INA).
Dominos is a dynamic system, whereby the processes making up its
kernel are mobile. 90% of this system was developed using the
Erlang programming language, which accounts for its highly flexible
deployment, maintainability and enhanced fault tolerance. Despite
having different objectives, we have been able to compare it with
other documented Web crawling systems (Mercator, Internet
Archive . . . ) and have shown it to be superior in terms of crawl
speed, document parsing and process management without system
restart.
Dominos is more complex than this description suggests. We have
not touched upon archival storage and indexing, preferring instead
to concentrate on the implementation details of the Dominos kernel
itself, a strategic component that is often overlooked by other
systems (in particular proprietary ones, others being inefficient).
However, there is still room for improvement. At present, crawled
archives are managed by NFS, a file system that is only moderately
efficient for this type of problem. A switchover to Lustre
(http://www.lustre.org/), a distributed file system with a
radically higher level of performance, is underway.
REFERENCES
[1] Z. Bar-Yossef, A. Berg, S. Chien, J. Fakcharoenphol, and D. Weitz. Approximating aggregate queries about web pages via random walks. In Proc. of 26th Int. Conf. on Very Large Data Bases, 2000.
[2] K. Bharat, A. Broder, M. Henzinger, P. Kumar, and S. Venkatasubramanian. The connectivity server: Fast access to linkage information on the Web, 1998.
[3] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proc. of the Seventh World-Wide Web Conference, 1998.
[4] M. Burner. Crawling towards eternity: Building an archive of the world wide web. http://www.webtechniques.com/archives/1997/05/burner/, 1997.
[5] S. Chakrabarti, M. V. D. Berg, and B. Dom. Distributed hypertext resource discovery through example. In Proc. of 25th Int. Conf. on Very Large Data Bases, pages 375-386, 1997.
[6] S. Chakrabarti, M. V. D. Berg, and B. Dom. Focused crawling: A new approach to topic-specific web resource discovery. In Proc. of the 8th Int. World Wide Web Conference, 1999.
[7] J. Cho and H. Garcia-Molina. The evolution of the web and implications for an incremental crawler. In Proc. of 26th Int. Conf. on Very Large Data Bases, pages 117-128, 2000.
[8] J. Cho and H. Garcia-Molina. Synchronizing a database to improve freshness. In Proc. of the ACM SIGMOD Int. Conf. on Management of Data, 2000.
[9] J. Cho, H. Garcia-Molina, and L. Page. Efficient crawling through url ordering. In 7th Int. World Wide Web Conference, 1998.
[10] M. Diligenti, F. Coetzee, S. Lawrence, C. Giles, and M. Gori. Focused crawling using context graphs. In Proc. of 26th Int. Conf. on Very Large Data Bases, 2000.
[11] M. Henzinger, A. Heydon, M. Mitzenmacher, and M. Najork. Measuring index quality using random walks on the web. In Proc. of the 8th Int. World Wide Web Conference (WWW8), pages 213-225, 1999.
[12] M. Henzinger, A. Heydon, M. Mitzenmacher, and M. Najork. On near-uniform url sampling. In Proc. of the 9th Int. World Wide Web Conference, 2000.
[13] A. Heydon and M. Najork. Mercator: A scalable, extensible web crawler. World Wide Web Conference, pages 219-229, 1999.
[14] J. Hirai, S. Raghavan, H. Garcia-Molina, and A. Paepcke. WebBase: A repository of web pages. In Proc. of the 9th Int. World Wide Web Conference, 2000.
[15] B. Kahle. Archiving the internet. Scientific American, 1997.
[16] S. Lawrence and C. L. Giles. Searching the world wide web. Science 280, pages 98-100, 1998.
[17] M. Najork and J. Wiener. Breadth-first search crawling yields high-quality pages. In 10th Int. World Wide Web Conference, 2001.
[18] J. Rennie and A. McCallum. Using reinforcement learning to spider the web efficiently. In Proc. of the Int. Conf. on Machine Learning, 1999.
[19] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995.
[20] V. Shkapenyuk and T. Suel. Design and implementation of a high-performance distributed web crawler. Polytechnic University, Brooklyn, March 2001.
[21] T. Suel and J. Yuan. Compressing the graph structure of the web. In Proc. of the IEEE Data Compression Conference, 2001.
[22] J. Talim, Z. Liu, P. Nain, and E. Coffman. Controlling robots of web search engines. In SIGMETRICS Conference, 2001.
| Breadth first crawling;Hierarchical Cooperation;limiting disk access;fault tolerance;Dominos nodes;dominos process;Dominos distributed database;breadth-first crawling;repetitive crawling;URL caching;Dominos Generic server;Document fingerprint;Deep web crawling;Dominos RPC concurrent;Random walks and sampling;Web Crawler;maintaiability and configurability;deep web crawling;High Availability System;real-time distributed system;crawling system;high performance crawling system;high availability;Erlang development kit;targeted crawling |
Recently, mobile business professionals have been looking
for a more efficient way to access corporate information systems
and databases remotely through the Internet backbone.
However, the high bandwidth demand of the typical office applications
, such as large email attachment downloading, often
calls for very fast transmission capacity. Indeed certain hot
spots, like hotels, airports and railway stations are a natural
place to use such services. However, in these places the time
available for information download typically is fairly limited.
In light of this, there clearly is a need for a public wireless
access solution that could cover the demand for data intensive
applications and enable smooth on-line access to corporate
data services in hot spots and would allow a user to roam from
a private, micro cell network (e.g., a Hiperlan/2 Network) to a
wide area cellular network or more specifically a 3G network.
Together with high data rate cellular access, Hiperlan/2 has the
potential to fulfil end user demands in hot spot environments.
Hiperlan/2 gives cellular operators the possibility to offer
additional capacity and higher bandwidths for end users without
sacrificing the capacity of their cellular users, as Hiperlans
operate on unlicensed or licence-exempt frequency bands. Also,
Hiperlan/2 provides QoS mechanisms capable of matching those
available in 3G systems. Furthermore, interworking solutions enable
operators to utilise the existing cellular infrastructure
investments and well-established roaming agreements for Hiperlan/2
network subscriber management and billing.
2. Technology overview
This section briefly introduces the technologies that are addressed
within this paper.
2.1. Hiperlan/2 summary
Hiperlan/2 is intended to provide local wireless access to IP,
Ethernet, IEEE 1394, ATM and 3G infrastructure by both stationary
and moving terminals that interact with access points.
The intention is that access points are connected to an IP, Ethernet
, IEEE 1394, ATM or 3G backbone network. A number
of these access points are required to service all but the smallest
networks of this kind, and therefore the wireless network
as a whole supports handovers of connections between access
points.
2.2. Similar WLAN interworking schemes
It should be noted that the interworking model presented in
this paper is also applicable to the other WLAN systems, i.e.
IEEE 802.11a/b and MMAC HiSWANa (High Speed Wireless
Access Network), albeit with some minor modifications
to the authentications schemes. It has been the intention of
BRAN to produce a model which not only fits the requirements
of Hiperlan/2-3G interworking, but also to try and
meet those of the sister WLAN systems operating in the same
market. A working agreement has been underway between
ETSI BRAN and MMAC HSWA for over 1 year, and with
the recent creation of WIG (see section 5), IEEE 802.11 is
also working on a similar model.
2.3. 3G summary
Within the framework of International Mobile Telecommunications
2000 (IMT-2000), defined by the International
Telecommunications Union (ITU), the 3rd Generation Partnership
Project (3GPP) are developing the Universal Mobile
Telecommunications System (UMTS) which is one of the major
third generation mobile systems. Additionally the 3rd
Generation Partnership Project 2 (3GPP2) is also developing
another 3G system, Code Division Multiple Access 2000
(CDMA-2000). Most of the work within BRAN has concentrated
on UMTS, although most of the architectural aspects
are equally applicable to Hiperlan/2 interworking with
CDMA-2000 and indeed pre-3G systems such as General
Packet Radio Services (GPRS).
The current working UMTS standard, Release 4,
was finalised in December 2000 with ongoing development
work contributing to Release 5, due to be completed by the
end of 2002. A future release 6 is currently planned for the autumn
of 2003, with worldwide deployment expected by 2005.
3. System approach
This section describes the current interworking models being
worked upon within BRAN at the current time. The BRAN
Network Reference Architecture, shown in figure 1, identifies
the functions and interfaces that are required within a Hiperlan/2
network in order to support inter-operation with 3G systems
.
The focus of current work is the interface between the
Access Point (AP) and the Service provider network (SPN)
which is encapsulated by the Lx interface. The aim of the
Hiperlan/2-3G interworking work item is to standardise these
interfaces, initially focusing on AAA (Authentication, Authorisation
and Accounting) functionality.
A secondary aim is to create a model suitable for all the
5 GHz WLAN systems (e.g., Hiperlan/2, HiSWANa, IEEE
Figure 1. Reference architecture.
802.11a) and all 3G systems (e.g., CDMA-2000, UMTS),
thus creating a world wide standard for interworking as mentioned
in section 5.
Other interfaces between the AP and external networks and
interfaces within the AP are outside the scope of this current
work.
Figure 1 shows the reference architecture of the interworking
model. It presents logical entities within which the following
functions are supported:
Authentication: supports both SIM-based and SIM-less
authentication. The mobile terminal (MT) communicates
via the Attendant with an authentication server in the visited
network, for example a local AAA server, across the
Ls interface.
Authorisation and User Policy: the SPN retrieves authorisation
and user subscription information from the home
network when the user attaches to it. Authorisation information
is stored within a policy decision function in the
SPN. Interfaces used for this are Lp and Ls.
Accounting: the resources used by a MT and the QoS provided
to a user are both monitored by the Resource Monitor
. Accounting records are sent to accounting functions
in the visited network via the La interface.
Network Management: the Management Agent provides
basic network and performance monitoring, and allows the
configuration of the AP via the Lm interface.
Admission Control and QoS: a policy decision function in
the SPN decides whether a new session with a requested
QoS can be admitted based on network load and user subscription
information. The decision is passed to the Policy
Enforcement function via the Lp interface.
Inter-AP Context Transfer: the Handover Support function
allows the transfer of context information concerning
a user/mobile node, e.g., QoS state, across the Lh interface
from the old to the new AP between which the mobile is
handing over.
Mobility: mobility is a user plane function that performs
re-routing of data across the network. The re-routing may
simply be satisfied by layer 2 switching or may require
support for a mobility protocol such as Mobile IP depending
on the technology used within the SPN. Mobility is an
attribute of the Lr interface.
Location Services: the Location Server function provides
positioning information to support location services. Information
is passed to SPN location functions via the Ll
interface.
4. Primary functions
This section describes the primary functions of this model (refer
to figure 1) in further detail, specifically: authentication
and accounting, mobility and QoS.
4.1. Authentication and authorisation
A key element to the integration of disparate systems is the
ability of the SPN to extract both authentication and subscription
information from the mobile users' home networks when
an initial association is requested. Many users want to make
use of their existing data devices (e.g., Laptop, Palmtop) without
additional hardware/software requirements. Conversely
for both users and mobile operators it is beneficial to be able
to base the user authentication and accounting on existing cellular
accounts, as well as to be able to have Hiperlan/2-only
operators and users; in any case, for reasons of commonality
in MT and network (indeed SPN) development it is important
to be able to have a single set of AAA protocols which
supports all the cases.
4.1.1. Loose coupling
The rest of this paper concentrates on loose coupling solutions.
"Loose coupling" is generally defined as the utilisation
of Hiperlan/2 as a packet based access network complementary
to current 3G networks, utilising the 3G subscriber databases
but without any user plane Iu type interface, as shown
in figure 1. Within the UMTS context, this scheme avoids
any impact on the SGSN and GGSN nodes. Security, mobility
and QoS issues are addressed using Internet Engineering
Task Force (IETF) schemes.
Other schemes, which essentially replace the UMTS Terrestrial
Radio Access Network (UTRAN) with a HIRAN
(Hiperlan Radio Access Network) are referred to as "Tight
Coupling", but are not currently being considered within the
work of BRAN.
4.1.2. Authentication flavours
This section describes the principle functions of the loose
coupling interworking system and explains the different authentication
flavours that are under investigation. The focus
of current work is the interface between the AP and the SPN.
Other interfaces between the AP and external networks and
interfaces within the AP are initially considered to be implementation
or profile specific.
The primary difference between these flavours is in the
authentication server itself, and these are referred to as the
"IETF flavour" and the "UMTS-HSS flavour", where the
Home Subscriber Server (HSS) is a specific UMTS term for a
combined AAA home server (AAAH)/Home Location Register
(HLR) unit. The motivation for network operators to build
up Hiperlan/2 networks based on each flavour may be different
for each operator. However, both flavours offer a maximum
of flexibility through the use of separate Interworking
Units (IWU) and allow loose coupling to existing and future
cellular mobile networks. These alternatives are presented in
figure 2.
IETF flavour.
The IETF flavour outlined in figure 2 is driven
by the requirement to add only minimal software functionality
to the terminals (e.g., by downloading java applets), so
that the use of a Hiperlan/2 mobile access network does not
require a radical change in the functionality (hardware or software
) compared to that required by broadband wireless data
access in the corporate or home scenarios. Within a multiprovider
network, the WLAN operator (who also could be a
normal ISP) does not necessarily need to be the 3G operator
as well, but there could still be an interworking between the
networks.
Within this approach Hiperlan/2 users may be either existing
3G subscribers or just Hiperlan/2 network subscribers.
These users want to make use of their existing data devices
(e.g., Laptop, Palmtop) without additional hardware/software
requirements. For both users and mobile operators it is beneficial
to be able to base the user authentication and accounting
on existing cellular accounts, as well as to be able to have
Hiperlan/2-only operators and users; in any case, for reasons
of commonality in MT and AP development it is important to
be able to have a single set of AAA protocols which supports
all the cases.
UMTS-HSS flavour.
Alternatively, the UMTS flavour (also shown in figure 2) allows a
mobile subscriber using a Hiperlan/2 mobile access network for
broadband wireless data access to appear as a normal cellular user
employing standard procedures and interfaces for authentication
purposes. It is important to notice that for this scenario the
functionality normally provided through a user services identity
module (USIM) is required in the user equipment. The USIM provides
new and enhanced security features in addition to those provided by
the 2nd Generation (2G) SIM (e.g., mutual authentication) as
defined by 3GPP. The UMTS-HSS flavour definitely requires that the
user is a native cellular subscriber and, in addition and
distinctly from the IETF flavoured approach, standard cellular
procedures and parameters (e.g., USIM quintets) are used for
authentication.
Figure 2. Loose coupling authentication flavours.
For the IETF flavoured approach there is no need to integrate
the Hiperlan/2 security architecture with the UMTS
security architecture [2]. It might not even be necessary to
implement all of the Hiperlan/2 security features if security is
applied at a higher level, such as using IPsec at the IP level.
An additional situation that must be considered is the use of
pre-paid SIM cards. This scenario will introduce additional
requirements for hot billing and associated functions.
4.1.3. EAPOH
For either flavour authentication is carried out using a mechanism
based on EAP (Extensible Authentication Protocol) [3].
This mechanism is called EAPOH (EAP over Hiperlan/2) and
is analogous to the EAPOL (EAP over LANs) mechanism as
defined in IEEE 802.1X. On the network side, Diameter [4]
is used to relay EAP packets between the AP and AAAH.
Between the AP and MT, EAP packets and additional Hiperlan/2
specific control packets (termed pseudo-EAP packets)
are transferred over the radio interface. This scheme directly
supports IETF flavour authentication, and by use of the proposed
EAP AKA (Authentication and Key Agreement) mechanism
would also directly support the UMTS flavour authentication
.
Once an association has been established, authorisation information
(based on authentication and subscription) stored
within a Policy Decision Function within the SPN itself can
be transmitted to the AP. This unit is then able to regulate services
such as time-based billing and allocation of network and
radio resources to the required user service. Mobile users with
different levels of subscription (e.g., "bronze, silver, gold")
can be supported via this mechanism, with different services
being configured via the policy interface. A change in authentication
credentials can also be managed at this point.
4.1.4. Key exchange
Key agreement for confidentiality and integrity protection is
an integral part of the UMTS authentication procedure, and
hence the UTRAN confidentiality and integrity mechanisms
should be reused within the Hiperlan/2 when interworking
with a 3G SPN (i.e. core network). This will also increase
the applied level of security.
The Diffie-Hellman encryption key agreement procedure,
as used by the Hiperlan/2 air interface, could be used to improve
user identity confidentiality. By initiating encryption
before UMTS AKA is performed, the user identity will not
have to be transmitted in clear over the radio interface, as
is the case in UMTS when the user enters a network for the
first time. Thus, this constitutes an improvement compared to
UMTS security.
It is also important to have a secure connection between APs
within the same network if session keys or other sensitive
information are to be transferred between them. A connection can be
considered secure either because the APs already trust each other
and no one else can intercept the communication between them, or
because authentication is performed and integrity and
confidentiality protection are in place.
4.1.5. Subscriber data
There are three basic ways in which the subscriber management
for Hiperlan/2 and 3G users can be co-ordinated:
Have the interworking between the Hiperlan/2 subscriber
database and HLR/HSS. This is for the case where the interworking
is managed through a partnership or roaming
agreement.
The administrative domains' AAA servers share security
association or use an AAA broker.
The Hiperlan/2 authentication could be done on the basis
of a (U)SIM token. The 3G authentication and accounting
capabilities could be extended to support access authentication
based on IETF protocols. This means either integrating
HLR and AAA functions within one unit (e.g.,
a HSS unit), or by merging native HLR functions of the
3G network with AAA functions required to support IP
access.
Based on these different ways for subscriber management,
the user authentication identifier can take three different formats:
Network Address Identifier (NAI),
International Mobile Subscriber Identity (IMSI) (requires
a (U)SIM card), and
IMSI encapsulated within a NAI (requires a (U)SIM card).
4.1.6. Pre-paid SIM cards
As far as the HLR within the SPN is concerned, it cannot tell
whether or not a customer is pre-paid.
Hence, this prevents a non-subscriber to this specific 3G network
from using the system, if the operator wishes to impose
this restriction.
As an example, pre-paid calls within a 2G network are
handled via an Intelligent Network (IN) probably co-located
with the HLR. When a call is initiated, the switch can be pro-grammed
with a time limit, or if credit runs out the IN can
signal termination of the call. This then requires that the SPN
knows the remaining time available for any given customer.
Currently the only signals that originate from the IN are to
terminate the call from the network side.
This may be undesirable in a Hiperlan/2-3G network, so
that a more graceful solution is required. A suitable solution
is to add pre-paid SIM operation to our system together with
hot billing (i.e. bill upon demand) or triggered session termination
. This could be achieved either by the AAAL polling
the SPN utilising RADIUS [5] to determine whether the customer
is still in credit, or by using a more feature rich protocol
such as Diameter [4] which allows network signalling directly
to the MT.
The benefit of the AAA approach is to allow the operator
to present the mobile user with a web page (for example), as
the pre-paid time period is about to expire, allowing them to
purchase more airtime.
All these solutions would require an increased integration
effort with the SPN subscriber management system. Further
additional services such as Customized Applications for
Mobile Network Enhanced Logic (CAMEL) may also allow
roaming with pre-paid SIM cards.
4.2. Accounting
In the reference architecture of figure 1, the accounting function
monitors the resource usage of a user in order to allow
cost allocation, auditing and billing. The charging/accounting
is carried out according to a series of accounting and resource
monitoring metrics, which are derived from the policy function
and network management information.
The types of information needed in order to monitor each
user's resource consumption could include parameters such
as, for example, volume of traffic, bandwidth consumption,
etc. Each of these metrics could have AP specific aspects
concerning the resources consumed over the air interface and
those consumed across the SPN, respectively. As well as providing
data for billing and auditing purposes, this information
is exchanged with the Policy Enforcement/Decision functions
in order to provide better information on which to base policy
decisions.
The accounting function processes the usage related information
including summarisation of results and the generation
of session records. This information may then be forwarded
to other accounting functions within and outside the network,
for example a billing function. This information may also be
passed to the Policy Decision function in order to improve
the quality of policy decisions; vice versa the Policy Decision
function can give information about the QoS, which may affect
the session record. There are also a number of extensions
and enhancements that can be made to the basic interworking
functionality such as those for the provision of support for
QoS and mobility.
In a multiprovider network, different sorts of inter-relationships
between the providers can be established. The inter-relationship
will depend upon commercial conditions, which
may change over time. Network Operators have exclusive
agreements with their customers, including charging and
billing, and also for services provided by other Network Operators/Service
Providers. Consequently, it must be possible
to form different charging and accounting models and this requires
correspondent capabilities from the networks.
Charging of user service access is a different issue from the
issue of accounting between Network Operators and Service
Providers. Although the issues are related, charging and accounting
should be considered separately. For the accounting
issue it is important for the individual Network Operator or
Service Provider to monitor and register access use provided
to his customers.
Network operators and service providers that regularly
provide services to the same customers could either charge
and bill them individually or arrange a common activity. For
joint provider charging/billing, the providers need revenue accounting
in accordance with the service from each provider.
For joint provider charging of users, it becomes necessary
to transfer access/session related data from the providers to
the charging entity. Mechanisms for revenue accounting are needed,
together with the corresponding technical configuration. This leads
to the transfer of related data from the Network
Operator and/or Service Providers to the revenue accounting
entity.
The following parameters may be used for charging and
revenue accounting:
basic access/session (pay by subscription),
toll free (like a 0800 call),
premium rate access/session,
access/session duration,
credit card access/session,
pre-paid,
calendar and time related charging,
priority,
Quality of Service,
duration dependent charging,
flat rate,
volume of transferred packet traffic,
rate of transferred packet traffic (Volume/sec),
multiple rate charge.
4.3. Mobility
Mobility can be handled by a number of different approaches.
Indeed many mobility schemes have been developed in the
IETF that could well be considered along with the work of the
MIND (Mobile IP based Network Developments) project that
has considered mobility in evolved IP networks with WLAN
technologies. Mobility support is desirable as this functionality
would be able to provide support for roaming with an
active connection between the interworked networks, for example
, to support roaming from UMTS to WLAN in a hotspot
for the downloading of large data.
In the loose coupling approach, the mobility within the
Hiperlan/2 network is provided by native Hiperlan/2 (i.e.
RLC layer) facilities, possibly extended by the Convergence
Layer (CL) in use (e.g., the current Ethernet CL [6], or a future
IP CL). This functionality should be taken unchanged
in the loose coupling approach, i.e. handover between access
points of the same Hiperlan/2 network does not need
to be considered especially here as network handover capabilities
of Hiperlan/2 RLC are supported by both MTs and
APs.
Given that Hiperlan/2 network handover is supported, further
details for completing the mobility between access points
are provided by CL dependent functionality.
Completion of this functionality to cover interactions between
the APs and other parts of the network (excluding the
terminal and therefore independent of the air interface) are
currently under development outside BRAN. In the special
case where the infrastructure of a single Hiperlan/2 network
spans more than one IP sub-network, some of the above approaches
assume an additional level of mobility support that
may involve the terminal.
4.3.1. Roaming between Hiperlan/2 and 3G
For the case of mobility between Hiperlan/2 and 3G access
networks, recall that we have the following basic scenario:
A MT attaches to a Hiperlan/2 network, authenticates and acquires
an IP address. At that stage, it can access IP services
using that address while it remains within that Hiperlan/2 network
. If the MT moves to a network of a different technology
(i.e. UMTS), it can re-authenticate and acquire an IP address
in the packet domain of that network, and continue to use IP
services there.
We have referred to this basic case as AAA roaming. Note
that while it provides mobility for the user between networks,
any active sessions (e.g., multimedia calls or TCP connections
) will be dropped on the handover between the networks
because of the IP address change (e.g., when addresses are assigned
via the Dynamic Host Configuration Protocol, DHCP).
It is possible to provide enhanced mobility support, including
handover between Hiperlan/2 access networks and 3G access
networks in this scenario by using servers located outside
the access network. Two such examples are:
The MT can register the locally acquired IP address with
a Mobile IP (MIP) home agent as a co-located care-of address
, in which case handover between networks is handled
by mobile IP. This applies to MIPv4 and MIPv6 (and
is the only mode of operation allowed for MIPv6).
The MT can register the locally acquired IP address with
an application layer server such as a Session Initiation Protocol
(SIP) proxy. Handover between two networks can
then be handled using SIP (re-invite message).
Note that in both these cases, the fact that upper layer mobility
is in use is visible only to the terminal and SPN server,
and in particular is invisible to the access network. Therefore,
it is automatically possible, and can be implemented according
to existing standards, without impact on the Hiperlan/2
network itself. We therefore consider this as the basic case
for the loose coupling approach.
Another alternative is the use of a Foreign Agent care-of
address (MIPv4 only). This requires the integration of Foreign
Agent functionality with the Hiperlan/2 network, but has
the advantage of decreasing the number of IPv4 addresses that
have to be allocated. On the other hand, for MTs that do not
wish to invoke global mobility support in this case, a locally
assigned IP address is still required, and the access network
therefore has to be able to operate in two modes.
Two options for further study are:
The option to integrate access authentication (the purpose
of this loose coupling standard) with Mobile IP
home agent registration (If Diameter is used, it is already
present). This would allow faster attach to the network in
the case of a MT using MIP, since it only requires one set
of authentication exchanges; however, it also requires integration
on the control plane between the AAAH and the
Mobile IP home agent itself. It is our current assumption
that this integration should be carried out in a way that is
independent of the particular access network being used,
and is therefore out of scope of this activity.
The implications of using services (e.g., SIP call control) from
the UMTS IMS (IP Multimedia Subsystem), which would provide some
global mobility capability. This requires analysis of how the IMS
would interface to the Hiperlan/2 access network (if at all).
4.3.2. Handover
For handovers within the Hiperlan/2 network, the terminal
must have enough information to be able to make a handover
decision for itself, or be able to react to a network decision to
handover. Indeed these decision driven events are referred to
as triggers, resulting in Network centric triggers or Terminal
centric triggers.
Simple triggers include the following:
Network Centric: Poor network resources or low bandwidth
, resulting in poor or changing QoS. Change of policy
based on charging (i.e. end of pre-paid time).
Terminal Centric: Poor signal strength. Change of QoS.
4.4. QoS
QoS support is available within the Hiperlan/2 specification
but requires additional functionality in the interworking specifications
for the provision of QoS through the CN rather than
simply over the air. QoS is a key concept within UMTS, and
together with the additional QoS functionality in Hiperlan/2,
a consistent QoS approach can therefore be provided. A number
of approaches to QoS currently exist which still need to
be considered at this stage.
QoS within the Hiperlan/2 network must be supported between
the MT and external networks, such as the Internet. In
the loose coupling scenario, the data path is not constrained
to travelling across the 3G SPN, e.g., via the SGSN/GGSNs.
Therefore no interworking is required between QoS mechanisms
used within the 3G and Hiperlan/2 network. There is a
possible interaction regarding the interpretation and mapping
of UMTS QoS parameters onto the QoS mechanisms used
in the Hiperlan/2 network. The actual provisioning of QoS
across the Hiperlan/2 network is dependent on the type of the
infrastructure technology used, and therefore the capabilities
of the CL.
4.4.1. HiperLAN2/Ethernet QoS mapping
Within the Hiperlan/2 specification, radio bearers are referred
to as DLC connections. A DLC connection is characterised
by offering a specific support for QoS, for instance in terms of
bandwidth, delay, jitter and bit error rate. The characteristics
of supported QoS classes are implementation specific. A user
might request for multiple DLC connections, each transferring
a specific traffic type, which indicates that the traffic division
is traffic type based and not application based. The DLC
connection set-up does not necessarily result in immediate assignment
of resources though. If the MT has not negotiated a
fixed capacity agreement with the AP, it must request capacity
by sending a resource request (RR) to the AP whenever it has
data to transmit. The allocation of resources may thereby be
very dynamic. The scheduling of the resources is vendor specific
and is therefore not included in the Hiperlan/2 standard,
which also means that QoS parameters from higher layers are not
included either.
Hiperlan/2 specific QoS support for the DLC connection
comprises centralised resource scheduling through the
TDMA-based MAC structure, appropriate error control (acknowledged
, unacknowledged or repetition) with associated
protocol settings (ARQ window size, number of retransmissions
and discarding), and the physical layer QoS support.
Another QoS feature included in the Hiperlan/2 specification
is a polling mechanism that enables the AP to regularly poll
the MT for its traffic status, thus providing rapid access for
real-time services. The CL acts as an integrator of Hiperlan/2
into different existing service provider networks, i.e. it
connects the SPNs to the Hiperlan/2 data link control (DLC)
layer.
IEEE 802.1D specifies an architecture and protocol for
MAC bridges interconnecting IEEE 802 LANs by relaying
and filtering frames between the separate MACs of the
Bridged LAN. The priority mechanism within IEEE 802.1D
is handled by IEEE 802.1p, which is incorporated into IEEE
802.1D. All traffic types and their mappings presented in the
tables of this section only correspond to default values specified
in the IEEE 802.1p standard, since these parameters are
vendor specific.
IEEE 802.1p defines eight different priority levels and describes
the traffic expected to be carried within each priority
level. Each IEEE 802 LAN frame is marked with a user priority
(0-7) corresponding to the traffic type [8].
In order to support appropriate QoS in Hiperlan/2 the
queues are mapped to the different QoS specific DLC connections
(maximum of eight). The use of only one DLC connection
between the AP and the MT results in best effort traffic
only, while two to eight DLC connections indicates that the
MT wants to apply IEEE 802.1p. A DLC connection ID is
only MT unique, not cell unique.
The AP may take the QoS parameters into account in the
allocation of radio resources (which is out of the Hiperlan/2
scope). This means that each DLC connection, possibly operating
in both directions, can be assigned a specific QoS,
for instance in terms of bandwidth, delay, jitter and bit error
rate, as well as being assigned a priority level relative to
other DLC connections. In other words, parameters provided
by the application, including UMTS QoS parameters if desired
, are used to determine the most appropriate QoS level
to be provided by the network, and the traffic flow is treated
accordingly.
The support for IEEE 802.1p is optional for both the MT
and AP.
4.4.2. End-to-end based QoS
Adding QoS, especially end-to-end QoS, to IP based connections
requires significant alterations and raises concerns, since it
represents a digression from the "best-effort" model that
constitutes the foundation of the Internet's great success.
However, the need for IP QoS is increasing and essential work is
currently in progress. End-to-end IP QoS requires substantial
consideration and further development.
Since the Hiperlan/2 network supports the IEEE 802.1p
priority mechanism and since Differentiated Services (DiffServ
) is priority based, the natural solution to the end-to-end
QoS problem would be the end-to-end implementation
of DiffServ. The QoS model would then appear as follows.
QoS from the MT to the AP is supported by the Hiperlan/2
specific QoS mechanisms, where the required QoS for each
connection is identified by a unique Data Link Control (DLC)
connection ID. In the AP the DLC connection IDs may be
mapped onto the IEEE 802.1p priority queues. Using the
IEEE 802.1p priority mechanisms in the Ethernet, the transition
to a DiffServ network is easily realised by mapping the
IEEE 802.1p user priorities into DiffServ based priorities.
Neither the DiffServ nor the IEEE 802.1p specification
elaborates how a particular packet stream will be treated
based on the Differentiated Services (DS) field and the layer
2 priority level. The mappings between the IEEE 802.1p priority
classes and the DiffServ service classes are also unspecified.
There is however an Integrated Services over Specific Link Layers
(ISSLL) draft mapping for Guaranteed and Controlled Load services
to IEEE 802.1p user priority, and a mapping for Guaranteed and
Controlled Load services to DiffServ, which together would imply a
DiffServ to IEEE 802.1p
user priority mapping.
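Since neither specification fixes the correspondence, any concrete 802.1p-to-DiffServ mapping is deployment specific. The sketch below shows one possible table, written here in Erlang purely for concreteness; the chosen codepoints (default forwarding, class selectors, expedited forwarding) are illustrative assumptions of ours and are not mandated by IEEE 802.1p, DiffServ or the ISSLL drafts.

-module(prio_map).
-export([priority_to_dscp/1]).

%% One possible, deployment-specific mapping from IEEE 802.1p user
%% priority (0-7) to a DiffServ codepoint. The values are illustrative.
priority_to_dscp(0) -> 0;    % best effort      -> default forwarding
priority_to_dscp(1) -> 8;    % background       -> CS1
priority_to_dscp(2) -> 16;   % spare            -> CS2
priority_to_dscp(3) -> 24;   % excellent effort -> CS3
priority_to_dscp(4) -> 32;   % controlled load  -> CS4
priority_to_dscp(5) -> 40;   % video            -> CS5
priority_to_dscp(6) -> 46;   % voice            -> EF
priority_to_dscp(7) -> 56.   % network control  -> CS7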
The DiffServ model provides less stringent support of QoS than the
IntServ/RSVP model, but it has the advantage of requiring far less
protocol signalling, which may be a crucial factor since the
mobility of a Hiperlan/2 MT calls for keeping QoS signalling low.
Furthermore, the implementation of an end-to-end IntServ/RSVP based
QoS architecture is much more complex than the implementation of a
DiffServ based one.
Discussions around end-to-end QoS support raise some critical
questions that need to be considered and answered before a proper
solution can be developed: what performance can we expect from the
different end-to-end QoS models, what level of QoS support do we
actually need, how much bandwidth and other resources are we
willing to sacrifice for QoS, and how much effort do we want to
spend on the process of developing well-supported QoS?
5. Relationships with other standardisation bodies
BRAN is continuing to have a close working relationship with
the following bodies:
WLAN Interworking Group (WIG)
This group met for the first time in September 2002. Its broad
aim is to provide a single point of contact for the three main
WLAN standardisation bodies (ETSI BRAN, IEEE 802.11
and MMAC HSWA) and to produce a generic approach to
both Cellular and external network interworking of WLAN
technology. It has been also decided to work upon, complete
and then share a common standard for WLAN Public Access
and Cellular networks.
3rd Generation Partnership Project (3GPP)
The System Architecture working group 1 (SA1) is currently
developing a technical report detailing the requirements for
a UMTS-WLAN interworking system. They have defined
6 scenarios detailing aspects of differently coupled models,
ranging from no coupling, through loose coupling to tight
coupling. Group 2 (SA2) is currently investigating reference
architecture models, concentrating on the network interfaces
towards the WLAN. Group 3 (SA3) has now started work on
security and authentication issues with regard to WLAN interworking.
ETSI BRAN is currently liaising with the SA2
and SA3 groups.
Internet Engineering Task Force (IETF)
Within the recently created `eap' working group, extensions
are being considered to EAP (mentioned in section 4), which
will assist in system interworking.
Institute of Electrical and Electronics Engineers (IEEE)
USA
The 802.11 WLAN technical groups are continuing to progress
their family of standards. Many similarities exist between
the current 802.11a standard and Hiperlan2/HiSWANa
with regard to 3G interworking. ETSI BRAN is currently liaising
with the Wireless Next Generation (WNG) group of the
IEEE 802.11 project.
Multimedia Mobile Access Communication (MMAC) Japan
The High Speed Wireless Access (HSWA) group's HiSWANa
(High Speed Wireless Access Network system A) is essentially
identical to Hiperlan/2, except that it mandates the use
of an Ethernet convergence layer within the access point. An
agreement between ETSI BRAN and MMAC HSWA has now
been in place for some time to share the output of the ETSI
BRAN 3G interworking group.
Conclusions
This paper has addressed some of the current thinking within
ETSI BRAN (and indeed WIG) regarding the interworking of
the Hiperlan2 and HiSWANa wireless LAN systems into a 3G
Cellular System. Much of this information is now appearing
in the technical specification being jointly produced by ETSI
and MMAC, expected to be published in the first half of 2003.
Of the two initial solutions investigated (tight and loose
coupling), current work has concentrated on the loose variant,
producing viable solutions for security, mobility and QoS.
The authentication schemes chosen will assume that EAP is
carried over the air interface, thus being compatible, at the
interworking level, with IEEE 802.11 and 3GPP.
This standardisation activity thus hopes to ensure that all
WLAN technologies can provide a value added service within
hotspot environments for both customers and operators of 3G
systems.
Acknowledgements
The authors wish to thank Maximilian Riegel (Siemens AG,
Germany), Dr. Robert Hancock and Eleanor Hepworth (Roke
Manor, UK) together with Åse Jevinger (Telia Research AB,
Sweden) for their invaluable help and assistance with this
work.
Stephen McCann holds a B.Sc. (Hons) degree from
the University of Birmingham, England. He is currently
editor of the ETSI BRAN "WLAN3G" interworking
specification, having been involved in
ETSI Hiperlan/2 standardisation for 3 years. He is
also involved with both 802.11 work and that of the
Japanese HiSWANa wireless LAN system. In the
autumn of 2002, Stephen co-organised and attended
the first WLAN Interworking Group (WIG) between
ETSI BRAN, MMAC HSWA and IEEE 802.11. He
is currently researching multimode WLAN/3G future terminals and WLAN
systems for trains and ships, together with various satellite communications
projects. In parallel to his Wireless LAN activities, Stephen has also been
actively involved in the `rohc' working group of the IETF, looking at various
Robust Header Compression schemes. Previously Stephen has been involved
with avionics and was chief software integrator for the new Maastricht air
traffic control system from 1995 to 1998. He is a chartered engineer and a
member of the Institute of Electrical Engineers.
E-mail: [email protected]
Helena Flygare holds a M.Sc. degree in electrical
engineering from Lund Institute of Technology,
Sweden, where she also served as a teacher in Automatic
Control for the Master Degree program. Before
her present job she worked in various roles with
system design for hardware and software development
. In 1999 she joined Radio System Malmö at
Telia Research AB. She works with specification, design
and integration between systems with different
access technologies, e.g. WLANs, 2.5/3G, etc. from
a technical, as well as from a business perspective. Since the year 2000, she
has been active with WLAN interworking with 3G and other public access
networks in HiperLAN/2 Global Forum, ETSI/BRAN, and 3GPP.
E-mail: [email protected] | Hiperlan/2;interworking;3G;ETSI;BRAN;WIG;public access |
102 | 2D Information Displays | Many exploration and manipulation tasks benefit from a coherent integration of multiple views onto complex information spaces. This paper proposes the concept of Illustrative Shadows for a tight integration of interactive 3D graphics and schematic depictions using the shadow metaphor. The shadow metaphor provides an intuitive visual link between 3D and 2D visualizations integrating the different displays into one combined information display. Users interactively explore spatial relations in realistic shaded virtual models while functional correlations and additional textual information are presented on additional projection layers using a semantic network approach. Manipulations of one visualization immediately influence the others, resulting in an in-formationally and perceptibly coherent presentation. | INTRODUCTION
In many areas knowledge about structures and their meaning
as well as their spatial and functional relations are required
to comprehend possible effects of an intervention.
For example, engineers must understand the construction of
machines as a prerequisite for maintenance whereas the
spatial composition of molecules and hence possible reactions
are of importance for the discovering of new drugs in
chemistry. Medical students need to imagine the wealth of
spatial and functional correlations within the human body
to master anatomy.
To date, novices as well as domain experts are required to
consult several, often voluminous documents in parallel to
extract information for a certain intervention. Spatial relations
and characteristics of inherently three-dimensional structures,
such as their shape and location, however,
are difficult to convey on paper. Besides requiring a
significant number of images to illustrate spatial relations
between only a few structures, the mental integration of
multiple views to form a three-dimensional picture in the mind
is demanding. Spatial relations can be conveyed more effectively
by means of 3D models [18]. Using interactive 3D
graphics, available to more and more people due to recent
advances in consumer graphics hardware, the user may actively
explore the spatial correlations of structures within a
photorealistic virtual model (see upper left of Figure 1).
Here, the visual realism of the model facilitates recognition
on real encounters.
Information about functional correlations, such as the interplay
of muscles causing an upward motion of the human
foot, has been traditionally provided by means of text and
illustrations as found in textbooks. Simple, non-photorealistic
drawings enriched with annotations and metagraphical
symbols can be extremely powerful in conveying complex
relationships and procedures (see upper right of Figure 1).
Abstraction techniques reduce the complexity of the depicted
structures to illustrate the important aspects thereby
guiding the attention of the viewer to relevant details. In
contrast to the visualization of spatial relations, 3D graphics
add no significant value to the illustration of functional
correlations.
Figure 1: Illustrative Shadows provide an intuitive, visual link
between spatial (3d) and non-spatial (2d) information displays
integrating them into one combined information display.
(Figure labels: interactive 3d-graphic, 2d-information display, shadow.)
The integration of both aspects in one visualization is difficult
since each serves to fulfill a different goal with, partly
mutually exclusive, visualization techniques. It becomes
even more complicated if the 3D model is frequently manipulated
, such as in construction or recent interactive
learning environments. Here, occlusions by annotations
and metagraphical symbols are annoying and may even interrupt
the current manipulation for the user. Additional
views, whether as insets [20], separate objects like mirrors
[8], or in the form of lenses and volumetric cursors changing
the rendition of embedded structures [7, 21, 23], are either
not close enough to the manipulated structures to be fully
recognized without dividing the user's attention between
different views [13] or require sometimes tedious manipulations
to be placed or moved within the scene. Nonetheless
, additional information as to restrictions or functional
correlations pertaining the current manipulated structures is
highly desired, even necessary.
In this paper we present an approach called Illustrative
Shadows that provides an intuitive, visual link between an
actively manipulated 3D visualization and a supplemental
2D information display integrating them into one combined
information display (see Figure 1). One of the main ideas
behind Illustrative Shadows is the integration of secondary
information, or in other words, background information,
into an interactive 3D scene. By analyzing the users' manipulation
of 3D structures and finding correlations, graphical
and textual information about the current interaction
context, such as graphical object-details and textual labels,
are displayed in the ``background''--the shadow--to give
guidance as well as to further enhance the users' understanding
of the context.
The paper is structured as follows: After reviewing related
approaches to combine multiple visual and textual information
displays, we present the design of Illustrative Shadows.
Furthermore, an architecture realizing these concepts is discussed
in this section. Thereafter, the major components of
this architecture are described. Realization issues are subject
of the subsequent section, whereas application examples
and the summary conclude the paper.
RELATED WORK
Recently proposed tools for the exploration of virtual
scenes extend possibilities to display covered structures or
hidden details. A complementary view called Magic Mirror
that mimics a hand mirror has been introduced in [8]. In addition
to providing the optical effects of a real mirror, it also
allows to explore the insight of objects by clipping against
the mirror front-frustum. Magic lens filters as presented in
[7, 21] go further by combining an arbitrarily-shaped region
with an operator that changes the view of objects viewed
through that region thereby displaying different aspects of
the visualized information space. In [5, 23] the 2D lens approach
is extended to 3D using volumetric lenses. All
aforementioned techniques require the user to actively manipulate
a tool within the scene. These techniques assume
the user already knows which parts of the presented visualization
offer additional information or is willing to explore
the model. While this might be feasible in explorative environments
where navigation is the main interaction task, it
certainly hinders manipulation.
Several approaches to combine 3D and 2D visualizations
have been made using a corner cube environment. The
three orthogonal sides show image slices that provide a visual
context for a 3D model or structures displayed in the
center. In [11] the images have been integrated as back-planes
to ground the 3D representation of anatomic structures
visually. By outlining the 3D structures in the images,
the spatial correspondence between the 3D renditions of activated
foci in the context of human brain slices is emphasized
in [16]. The images, however, are precomputed and do
not change according to the users' interaction nor is there
any visualization of functional correlations. An interesting
interactive approach has been proposed by [10]. The projection
of the 3D model onto the sides of the corner cube
can be manipulated by the user in order to change the position
and orientation of the model. Fully rendered shadows
of certain objects resemble real-world mirrors and may be
used to stress importance. There is, however, no discussion
on how to use this feature to provide, for instance, additional
context information for the user.
To establish hypotheses on the interaction context in order
to be able to display additional context information and to
provide meaningful descriptions of relationships, knowledge
modeling is required. Promising approaches to connect
such knowledge with 3D graphics have been developed
in the area of medical applications. The Digital
Anatomist [4] incorporates a logic-based description comprising
class and subclass relationships (is-a) as well as partitive
and qualitative spatial relationships (has-parts, is-superior
-to). The information is presented in tree-like textual
form that can be explored by folding and unfolding. Corresponding
structures are displayed in a 3D visualization
aside. There is no visual integration of both information
displays. The semantic network described in [14] is used to
create various `views' in which correlating structures are
displayed to communicate specific aspects with a voxel
model of the human anatomy. The highly detailed visualization
, however, cannot be interactively explored, nor is
there any kind of abstraction to focus the users' attention.
Interaction is only possible by tree-like menus.
SYSTEM DESIGN USING ILLUSTRATIVE SHADOWS
Figure 2: Architecture of a system incorporating the Illustrative
Shadows approach. (Components shown: 3D model, 3D visualization,
2D visualization, annotations, event control, client interface,
knowledge-based server.)

With the term Illustrative Shadows we refer to a coherent
integration of photorealistic depictions of a virtual model
with abstract illustrations and textual explanations. Both
kinds of depictions serve to fulfill different and somehow
contradicting goals: on the one hand to enable navigation
and manipulation of complex spatial models and on the
other hand to provide adjusted visualizations that guide the
user's attention to additional information about the most
relevant objects in the current interaction context. Both visualizations
are achieved by applying photorealistic and
specific non-photorealistic rendering techniques <A href="102.html#8">[22] to
geometric models.
Furthermore, textual information describing the most relevant
structures and functional correlations between them
must be integrated. The estimation of the relevance with respect
to the current interaction context as well as the selection
or generation of textual explanations heavily rely on
non-geometric formal and informal representations and are
therefore determined by external inference mechanisms.
Moreover, co-referential relations between the entities
within the geometric model and the formal and informal
representations have to be established in order to link the
different representations.
Based on these requirements we designed a system architecture
which comprises three basic components (see
Figure 2):
The visual component renders a photorealistic 3D
model with a standard camera model as well as a non-photorealistic
illustration that is projected onto a
ground plane. A client interface enables external control
of the non-photorealistic rendering techniques. Finally
, the visual component also renders text and
metagraphical annotations, such as labels, hypertext
and arrows.
The event control allows the user to modify the parameters
of a virtual camera and to select and manipulate
geometric objects within the scene. Interactions are
tracked and ranked within an interaction history to
communicate the current interaction context via a client
interface to an external knowledge-based component.
The knowledge-based server receives notification of
user manipulations and establishes hypotheses on the
degree of interest values (DOI) for geometric objects.
These DOI values guide the selection of appropriate
text fragments presented in text annotations as well as
the modification of parameters of the non-photorealistic
rendering techniques for emphasizing objects in the illustration.
The following sections discuss important aspects of these
system components.
VISUALIZATION
Besides displaying a photorealistic rendition of a 3D model
that the user can manipulate, the illustration of functional
correlations between structures of the model in the "background"
has to be accomplished. To focus the user's attention
on relevant structures and to facilitate perception, important
objects must be emphasized and surroundings
abstracted. Furthermore, both visualizations must be integrated
in a coherent manner, so that a visual connection between
the 3D and 2D renditions of the relevant objects is
established by the user. Several crucial aspects have to be
considered:
How can objects be emphasized such that they attract
the user's attention while still being in the background?
Are additional graphical elements required to establish
a visual correlation between the two model representations
?
What illustration techniques can be applied to differentiate
between important and less important objects?
Is a continuous synchronization between the photorealistic
and the schematic representation of the model necessary
during user interactions?
Integrating Different Model Representations
The question coming up at this point is how a secondary,
schematic model representation can be integrated such that
the following requirements are fulfilled:
The second representation must be placed near the original
3D representation in order to perceive structures in
both representations [13].
The secondary representation may never occlude the
central, realistic 3D visualization, which a user manipulates
directly. The relevant information, however, must
be visible in order to be recognized but should not distract
from the interaction with the 3D model.
An exact copy of the 3D model representation is not appropriate
for the task of depicting functional correlations, because
their illustration in the "background" requires abstraction
. Without abstraction, a lot of the users' attention
would be required to extract the relevant information. An
exact copy, however, can be used to provide a mirrored
view below the 3D model to visualize structures otherwise
not visible for the user (see Figure 9 at the end). The integration
of a secondary view as inset [20] has some disadvantages
too. Firstly, the inset must not occlude the 3D
model, thus it must be placed in distance which in turn
means visual separation. Secondly, the inset framing complicates
the visualization of object correlations, e.g. by
lines.
An ideal solution in many respects is to project the 3D
model onto a plane below the model, just like casting a
shadow. This 2D representation may then be modified in
various ways to illustrate associated concepts and relations
and therefore is called Illustrative Shadows. Besides, this
approach satisfies the requirements specified above.
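One common way to realise such a projection, sketched below under the
assumption of a single point light and a planar ground plane, is the
classic planar shadow matrix; the concrete plane coefficients and light
position in the example are illustrative and not taken from the system
described here.

    # Minimal sketch (assumed setup): flatten geometry onto the plane
    # a*x + b*y + c*z + d = 0 as seen from a light, using the classic
    # planar shadow matrix  M = (P . L) * I - L * P^T.
    import numpy as np

    def shadow_matrix(plane, light):
        """4x4 matrix that projects homogeneous points onto 'plane'.

        plane: (a, b, c, d) coefficients of the ground plane.
        light: (x, y, z, w) homogeneous light position (w=1: point light,
               w=0: directional light).
        """
        P = np.asarray(plane, dtype=float)
        L = np.asarray(light, dtype=float)
        return np.dot(P, L) * np.eye(4) - np.outer(L, P)

    # Example: ground plane y = 0, point light above and to the side.
    M = shadow_matrix((0.0, 1.0, 0.0, 0.0), (2.0, 10.0, 3.0, 1.0))

    vertex = np.array([1.0, 4.0, -2.0, 1.0])   # homogeneous model vertex
    shadow = M @ vertex
    shadow /= shadow[3]                        # perspective divide
    print(shadow[:3])                          # lies on the plane y = 0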
Illustrative Shadows
Cast shadows [3] have proven to be beneficial for perceiving
spatial relationships and to facilitate object positioning [24].
Thus, if shadows are already present, their use occupies no additional
space for the display of additional information; if they were
absent, they also add valuable depth cues to
the 3D visualization. Furthermore, the shadows can be used
to interact with the underlying information context or, as
proposed in [10], with the shadowing 3D objects. To sum up:
The shadow projection results in an abstraction which
is very important for illustrations.
Additional depth cues facilitate the perception of spatial
relations and accelerate the relative positioning of objects
while manipulating the 3D model.
The projection establishes a link between a 3D object
and its 2D shadow providing additional information.

Figure 3: Different types of model projections onto the illustration
plane. Beside simple, monochrome projections (shadows),
the color of individual structures can be preserved. The mirrored
projection shows details otherwise hidden in the current view.
(Panels: Monochrome projection, Colored projection, Mirrored projection.)
Displaying and Focusing in the Illustration Plane
Besides simple, monochrome shadow projection further
possibilities to project the 3D model representation onto the
illustration plane come to mind (see Figure 3). Preserving
the colors of the different structures of the model, for instance
, enables distinct renditions and perception of the objects
in the shadow. As an extension, the objects can be mirrored
before projecting them. Hence, objects or hidden
details of objects otherwise not visible become visible in
the illustration plane.
To illustrate correlations or to annotate structures, the relevant
objects must be emphasized to be easily distinguished
from the remaining 2D visualization. Also, the viewer must
be able to differentiate between relevant and less relevant
objects. For monochrome shadow projections, the object
color must contrast with the shadow color. The selection of
emphasizing colors should be based on a perception-oriented
color model, such as HSV. Moreover, an outline
can be used to attract the viewer's interest. A colored projection
makes it somewhat more difficult since the variation
of the color won't always result in a noticeably distinct representation
. By using a conspicuous texture or shaded representations
of the relevant object whereas the remaining
objects are flat-shaded, the viewer's attention can be directed
to those relevant objects.
Significance of Accentuations
In addition to objects being significant to the current interaction
context supplementary objects have to be included in
the illustration. These objects are not of primary importance
within the concepts to be illustrated but guide the
viewer and maintain context. It is important that such objects
are recognized by the viewer as objects of minor significance
. The relevance of objects that have been emphasized
by outlining, for instance, can be judged by line width
or line style (e.g. contrast, waviness). Also preserving the
objects color as well as the use of texture indicates a higher
importance than an interpolation of the object's color and
the background color.
Recognition of Correlations between both Representations
An important aspect in using two different but coherent representations
of the same model is the identification of correlations
between those visualizations by the viewer. If an object
is being emphasized in the illustration plane, it must be
possible for the viewer to find its counterpart in the detailed
photorealistic representation too. Often shape and color
give enough hints. However, if the projection of the relevant
objects results in uniform shapes, the viewer may have difficulty
recognizing individual objects in the 3D visualization
(see Figure 4). Besides accentuating those objects in
the 3D representation, the integration of additional elements
can be beneficial. Semi-transparent shadow volumes
originally developed to facilitate object positioning in 3D
[19] indicate direct correspondence (see Figure 5).
Integration of Annotations
The conveyance of important related facts by means of
graphical abstraction, accentuation, or modification of relevant
structures alone is difficult. Therefore conventional
book illustrations often contain annotations with short descriptions
. Those annotations must be placed close to the
described objects and should not occlude relevant parts of
the presentation. The latter, however, cannot always be
guaranteed. A simple but effective solution places the annotation
on a semi-transparent background face that increases
the contrast between text annotation and illustration and
still does not block vision (see Figure 5). To further facilitate
absorption of shown concepts, single words or groups
of words can be emphasized and graphically linked to relating
structures in the illustration. Hypertext functionality reduces
the amount to which textual annotations must be displayed
at once. The user can request more detailed
information by activating links.
Figure 4: Recognition of individual objects in different scenes.
To the left, unequivocal correspondence of shape and color
facilitates identification. To the right, no clear correspondence.
Figure 5: A direct connection to an object's shadow is established
by displaying its semi-transparent shadow volume. The
integration of annotations gives meaning to unknown objects and
relationships.
INTERACTION
A human illustrator is required to identify important aspects
and characteristic features of the subject or concept that is
to be conveyed in order to draw a focussed visualization.
One way to identify those features for the computer is to
watch the user interacting with the information space. Since
our goal has been to enhance the users' understanding by
providing background information in the current interaction
context, an Illustrative Shadow depicting correlations in
that context must be generated.
By navigating within the 3D model on the one hand, the
user is free to explore spatial relations by changing the
view. Here, single structures may be tracked by the computer
regarding their visibility hence obtaining information
about the users' current focus. On the other hand, the user
may interact with the structures, thus expressing specific interest
. As a result of the integrated 3D/2D visualization, interaction
is possible within the 3D visualization as well as
in the projection layers, a technique inspired by [10]. Using
the shadow for interaction facilitates certain tasks, such as
selection, since structures hidden in the 3D visualization
may be visible in the projection. Furthermore, 2D input device
coordinates can be mapped directly onto the plane
thereby enabling the use of 2D interaction techniques.
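This mapping of 2D input onto the plane can be sketched as a simple
ray-plane intersection; the construction of the pick ray from device
coordinates is assumed to be supplied by the rendering toolkit, and the
names below are illustrative.

    # Sketch: intersect a pick ray (from the camera through the cursor)
    # with the ground plane y = plane_y to obtain 2D coordinates on the
    # illustration plane.
    import numpy as np

    def plane_coords(ray_origin, ray_direction, plane_y=0.0):
        o = np.asarray(ray_origin, dtype=float)
        d = np.asarray(ray_direction, dtype=float)
        if abs(d[1]) < 1e-9:
            return None                  # ray parallel to the plane
        t = (plane_y - o[1]) / d[1]
        if t < 0:
            return None                  # plane is behind the camera
        hit = o + t * d
        return hit[0], hit[2]            # 2D coordinates on the plane

    print(plane_coords((0.0, 5.0, 10.0), (0.1, -1.0, -2.0)))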
The provided manipulation tasks highly depend on the application
that employs the concept of Illustrative Shadows.
In our application, the user is able to compose and to decompose
a given 3D model like a 3D jigsaw. Thus, translation
and rotation of individual structures are main interaction
tasks.
User-interactions are tracked within an interaction history.
By assigning relevance values to each interaction task, accumulations
of these values show a distribution of interest
within the model over time. Thus each single structure's degree
of interest (DOI, a normalized value) is a measure for
its importance to the user at a certain time. As shown in
Table 1, touching a structure with the mouse pointer has a
much lower relevance than actually selecting it. The degree
of interest is communicated to the knowledge server which
in turn may modify the 2D visualization of the shadow. To
give an example, the user is interested in a certain structure
of an anatomic 3D visualization, such as a ligament, that is
part of a functional relation between a bone and a muscle.
Only one of those objects, that is the bone or the muscle,
should be highlighted and annotated, because of space restrictions
in the shadow layer. At this point, the DOI is used
to decide. If the interaction history shows more user-interest
for muscles, information about the functional relation
between the ligament and the muscle is displayed.
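As an illustration of how such an interaction history might be turned
into DOI values, the sketch below accumulates the relevance weights of
Table 1 per structure; the event format and the decay factor that
favours recent interactions are assumptions of this sketch, not details
prescribed by the system described above.

    # Sketch only: accumulate a normalized degree of interest (DOI) per
    # structure from an interaction history, using the relevance weights
    # of Table 1. The decay factor is an assumption of this sketch.
    RELEVANCE = {
        ("mouseOver", "short"): 1,
        ("mouseOver", "long"): 2,
        ("mouseButtonPressed", "3D object"): 4,
        ("mouseButtonPressed", "2D object"): 4,
        ("mouseButtonPressed", "annotation"): 6,
    }

    def degree_of_interest(history, decay=0.9):
        """history: list of (structure, action, value) tuples, oldest first."""
        raw = {}
        for age, (structure, action, value) in enumerate(reversed(history)):
            weight = RELEVANCE.get((action, value), 0) * (decay ** age)
            raw[structure] = raw.get(structure, 0.0) + weight
        total = sum(raw.values()) or 1.0
        return {s: v / total for s, v in raw.items()}   # normalized DOI

    history = [
        ("tibia", "mouseOver", "short"),
        ("m_tibialis_anterior", "mouseButtonPressed", "3D object"),
        ("m_tibialis_anterior", "mouseButtonPressed", "annotation"),
    ]
    print(degree_of_interest(history))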
KNOWLEDGE MODELING
While segmentation of the 3D model into individual structures
(objects) provides a spatial description, the presentation
of correlations also requires a linked symbolic, textual
description. Moreover, in order to establish hypotheses on
the current interaction context, formal knowledge is required
. Thus, the system presented in this paper comprises
a knowledge base, i.e. a media-independent formal representation
, media-specific realization statements of entities
within the formal representation as well as a large multi-lingual
text corpus. Realization statements establish co-reference
relations between independent formal representations
describing different aspects of the underlying information
space. They also guide the generation or selection of texts
used to annotate structures in the 2D visualization.
The medical education application presented later in this
paper is based on a knowledge base describing the objects
and functional correlations of the musculo-skeletal system.
It covers the area of the lower limb and the pelvic girdle.
The knowledge base was created by manually analyzing
several anatomy textbooks, anatomy atlases, medical dictionaries
, and lexica. This analysis reveals important concepts
, their hierarchical classification, and the instance attribute
values forming a complex semantic network. Our
system contains a hierarchical representation of basic anatomic
concepts such as bones, muscles, articulations, tendons
, as well as their parts and regions. The corpus contains
fragments of several anatomic textbooks describing global
concepts of the osteology, syndesmology, and myology as
well as descriptions of all the entities of these anatomic systems
within the lower leg and the pelvic girdle.
Table 1. Relevance of certain user interactions
  action               parameter   value        relevance
  mouseOver            time        short        1
                                   long         2
  mouseButtonPressed   location    3D object    4
                                   2D object    4
                                   annotation   6

Figure 6: Visualization of intermediate steps within a retrieval
which discovers the association between the distal phalanx of the
big toe and the extensor hallucis longus. (Chain shown: Phalanx
distalis pedis, via has-Basis, to Basis phalangis distalis pedis, via
has-Area, to the area of insertion of M. extensor hallucis longus,
and finally to M. extensor hallucis longus.)

In order to present appropriate system reactions the event
control informs the knowledge server of user interactions.
First, exploiting the visual annotations, the knowledge
server extracts co-referring formal entities and assigns relevance
values according to Table 1. Subsequently, the
knowledge server searches for associations between the
most relevant entities. Our system pursues two alternative
strategies: retrievals and suggestions.
Retrievals discover relations between entities by tracking
predefined paths within the knowledge base. Figure 6 illustrates
the intermediate steps in order to extract relations between
bones and muscles. From a functional point of view
(i.e. the muscle mechanics), bones are insertions or origins
of muscles. Its contraction produces force, which in turn
changes the orientation of these bones. These retrievals also
need to consider substructures (e.g. bone-volumes and
bone-area). The following retrieval extracts those muscles
which originate in a given bone (first-order logic):

  { muscle | ∃ bone: Bone(bone) ∧ Musculus(muscle) ∧
      ( is-Origin-of(bone, muscle) ∨
        ∃ part: has-Part*(bone, part) ∧ is-Origin-of(part, muscle) ) }

The has-Part* relation represents the transitive hull over
several spatial part-of-relations [1] (e.g. the has-Basis relation
between Bones and Bone-Volumes and the has-Area relation
between Areas or Volumes and Areas).
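Such a retrieval can be sketched as a small graph traversal. The toy
network below is invented for illustration and only mimics the relation
names used above; it computes the has-Part* hull of a bone and collects
the muscles related to the bone or to one of its parts.

    # Toy sketch; the network content is invented for illustration and
    # only mimics the relation names used in the text.
    HAS_PART = {          # spatial part-of relations (has-Basis, has-Area, ...)
        "Phalanx distalis pedis": ["Basis phalangis distalis pedis"],
        "Basis phalangis distalis pedis": ["Area insertionis"],
    }
    IS_ORIGIN_OF = {      # structure -> muscles recorded for that structure
        "Area insertionis": ["M. extensor hallucis longus"],
    }

    def has_part_star(structure):
        """Transitive hull over the part-of relations (has-Part*)."""
        result, stack = set(), [structure]
        while stack:
            current = stack.pop()
            for part in HAS_PART.get(current, []):
                if part not in result:
                    result.add(part)
                    stack.append(part)
        return result

    def muscles_for_bone(bone):
        candidates = {bone} | has_part_star(bone)
        return {m for s in candidates for m in IS_ORIGIN_OF.get(s, [])}

    print(muscles_for_bone("Phalanx distalis pedis"))
    # -> {'M. extensor hallucis longus'}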
These retrievals rely on knowledge about the structure of
the knowledge base. Moreover, they refer to a small number
of relevant objects. In many situations, however, the event
control comes up with a huge number of potentially relevant
objects, which cannot easily be mapped to a predefined
query. Hence, we adopt a bottom-up search approach
within a complex semantic network developed within cognitive
psychology.
In his model of human comprehension [15] Quillian assumed
that spatially and temporally independent aspects of
human long-term memory are organized in a semantic network
. Furthermore, he assumed that cognitive processes
that access a node of the semantic network activate all connected
nodes in parallel. The term spreading activation refers
to a recursive propagation of initial stimuli. Nowadays,
this term subsumes breadth-first search algorithms for paths
connecting the nodes of a start and a destination set in directed
graphs satisfying an evaluation criterion. Collins and
Loftus [6] modify the propagation algorithm to consider activation
strength.
In our system, the knowledge server uses the objects' DOI
from the event control as an initial activation, which
spreads through the semantic network. These initial activations
also take the content presented on textual labels and
inspected by the user into account. The spreading activation
approach generates a focus structure which contains information
on how dominant graphical objects must be presented
in the schematic illustration [9]. Figure 7 illustrates how visual
dominance values control the render parameters.
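A minimal version of such a spreading activation pass is sketched
below; the toy network, the propagation weights, the decay factor and
the iteration count are assumptions of this sketch rather than the
exact algorithm used by the knowledge server.

    # Sketch of spreading activation over a weighted semantic network.
    # Graph layout, weights, decay and iteration count are illustrative.
    GRAPH = {
        "lig_collaterale": [("m_tibialis_anterior", 0.8), ("tibia", 0.6)],
        "m_tibialis_anterior": [("dorsal_extension", 0.9)],
        "tibia": [("dorsal_extension", 0.3)],
        "dorsal_extension": [],
    }

    def spread_activation(initial_doi, decay=0.5, iterations=2):
        """Propagate initial DOI values through the network.

        Returns a dominance value per node that could drive the choice
        of emphasis technique in the illustration plane.
        """
        activation = dict(initial_doi)
        for _ in range(iterations):
            new = dict(activation)
            for node, value in activation.items():
                for target, weight in GRAPH.get(node, []):
                    new[target] = new.get(target, 0.0) + decay * weight * value
            activation = new
        return activation

    print(spread_activation({"lig_collaterale": 1.0}))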
REALIZATION DETAILS
The visual component of the prototypical implementation
extends the Open Inventor graphical library with powerful
scenegraph nodes to display hypertext on overlay regions
and to render semi-transparent shadow volumes.
Other nodes encapsulate the mirror and shading projection
onto the ground plane as well as the user interaction, and
emphasize techniques (e.g. computation of silhouette
lines). The layer management (see Figure 8) employs the
OpenGL polygon offset feature to allow graphics to overlap
specifically whereas visibility-tests of individual structures
are accomplished by offscreen rendering and analyzing
OpenGL p-buffers. Additional OSF Motif widgets
enable the user to add personalized annotations, which are
inserted into the knowledge base.
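The layering itself can be approximated with a few lines of OpenGL
state handling; the sketch below uses PyOpenGL rather than the Open
Inventor nodes of the actual system, and the layer indices and offset
values are assumptions.

    # Sketch (PyOpenGL assumed, not the Open Inventor nodes used here):
    # co-planar layers on the ground plane are separated with
    # glPolygonOffset so that shadow, highlights, details and
    # annotations overlap predictably without z-fighting.
    from OpenGL.GL import (glEnable, glDisable, glPolygonOffset,
                           GL_POLYGON_OFFSET_FILL)

    def draw_layer(draw_fn, layer_index):
        """Draw one illustration-plane layer; higher indices win the depth test."""
        glEnable(GL_POLYGON_OFFSET_FILL)
        # Negative offsets pull the layer slightly towards the viewer.
        glPolygonOffset(-1.0 * layer_index, -1.0 * layer_index)
        draw_fn()
        glDisable(GL_POLYGON_OFFSET_FILL)

    # Usage (draw_shadow, draw_highlights, draw_annotations are callbacks
    # assumed to be provided by the application):
    # for i, fn in enumerate([draw_shadow, draw_highlights, draw_annotations], 1):
    #     draw_layer(fn, i)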
The knowledge base encodes both the media-independent
formal knowledge representation as well as media specific
realization statements using XML topic maps [2]. To process
this information, XML statements are transformed into
LISP-code. The authoring system contains export filters for
the NeoClassic and the LOOM [12] description logic inference
machines. In the current version it covers about 50 basic
anatomic concepts, 70 relations, and over 1500 instances,
with linguistic realization statements in
Latin, German and English. Furthermore, visual annotations
refer to a small number of geometric models and 2D
illustrations.
The interface between the knowledge server and the visual
component is described using CORBA's interface definition
language (IDL). The CORBA-based interface implementation
enables us to experiment with several knowledge servers
implemented in Common-LISP (LOOM) and C++
(NeoClassic).

Figure 7: Application of different emphasize techniques to the
2D information representation according to decreasing dominance
values.

Figure 8: Multiple layers are used to place the different visual
information in order. Thereby, occluded details may be visible
in the shadow (3). (Layers shown: outline, highlight, ordinary,
detail, annotation.)
APPLICATIONS
We applied the concept of Illustrative Shadows to an application
of medical education. This system had been previously
designed to foster the understanding of spatial relationships
by means of 3D models based on a virtual 3D
jigsaw approach [19]. While composing anatomical structures
has proven to help medical students to build an understanding
of the spatial composition [17], most of the users
expressed their desire for detailed information about functional
relations between structures. Consequently, students
would be able to playfully study human organs including
their spatial and functional correlations. Figures 9 and 10
depict the screen of the prototype in typical learning sessions.
Individual objects can be detached and moved within
the scene to expose occluded structures. In Figure 9, one of
the alternative visualizations, the mirror, is shown to demonstrate
the various employments of the plane. It enables the
user to simultaneously look at two different views of the 3D
model. Graphical accentuations are used to attract the
viewer's attention (left atrium). Additional textual annotations
are assigned to correlating structures in a hypertext
representation which can be explored by following separately
marked links.

Figure 9: Alternative visualization. The Mirror facilitates manipulation
in 3D, by providing a better view of the structures. The
central representation is never occluded by annotations.

Figure 10: German annotation of objects previously selected.
Realization statements of the semantic network provide alternative
German, English, and Latin phrases referring to formal entities.
Another application that has not yet been realized is to support
users of CAD systems by Illustrative Shadows. CAD
systems are not only used to design individual components
in 3D but also to assemble complex systems. Information
about parameters of single components and relationships
are of major importance. Being able to retrieve this information
directly from the scene while interacting with the
components is of great benefit for the design engineer.
CONCLUSIONS
For educational, engineering, or maintenance purposes a
wealth of information about spatial and functional correlation
as well as textual information is required. In this paper
we developed a new metaphor-based approach to coherently
integrate different views onto such a complex information
space within an interactive system. Illustrative
Shadows provide an intuitive visual link and a level of integration
between interactive 3D graphics and supplemental
2D information displays that is hard to achieve with other
concepts.
Shadow projections have proven to be beneficial for perceiving
spatial relationships and to facilitate object positioning.
Thus, if shadows are already present, their use occupies no additional
space for the display of additional information; if they were
absent, they also add valuable depth cues to
the 3D visualization. The shadow projection onto a flat
plane enables schematic illustrations which are focused on
specific information extraction tasks and facilitates the integration
of generated textual information that leads to further
meaning. Thus, Illustrative Shadows promote the comprehension
of complex spatial relations and functional
correlations. Furthermore, the secondary information display
does not hinder manipulations of the 3D model. Our
approach is well suited for compact 3D models, and has
been successfully applied in a medical education application.
REFERENCES
1. Bernauer, J. Analysis of Part-Whole Relation and Subsumption
in the Medical Domain. Data & Knowledge
Engineering, 20(3):405–415, October 1996.
2. Biezunski, M., Bryan, M., and Newcomb, S., editors.
ISO/IEC 13250:2000 Topic Maps: Information Technology
Document Description and Markup Language. International
Organization for Standardization (ISO) and International
Electrotechnical Commission (IEC), December 1999. 1st Draft.
3. Blinn, J.F. Me and my (fake) shadow. IEEE Computer
Graphics & Applications, 8(1):82–86, January/February
1988.
4. Brinkley, J.F., Wong, B.A., Hinshaw, K.P., and Rosse,
C. Design of an Anatomy Information System. IEEE
Computer
Graphics &
Applications,
19(3):38–48,
May/June 1999.
5. Cignoni, P., Montani, C., and Scopigno, R. MagicSphere:
An insight tool for 3d data visualization. IEEE
Computer Graphics Forum, 13(3):317–328, 1994.
6. Collins, A. and Loftus, E. A Spreading-Activation Theory
of Semantic Processing. Psychological Review,
82(6):407–428, 1975.
7. Fishkin, K. and Stone, M.C. Enhanced dynamic queries
via moveable filters. In Katz, I.R., Mack, R., Marks, L.,
Rosson, M.B., and Nielsen, J., editors, Proc. of ACM
CHI Conference on Human Factors in Computing Systems
(Denver, May 1995), pages 415–420. ACM Press,
New York, 1995.
8. Grosjean, J. and Coquillart, S. The magic mirror: A
metaphor for assisting the exploration of virtual worlds.
In Zara, J., editor, Proc. of Spring Conference on Computer
Graphics (Budmerice, Slovakia, April 1999),
pages 125–129, 1999.
9. Hartmann, K., Schlechtweg, S., Helbing, R., and
Strothotte, T. Knowledge-Supported Graphical Illustration
of Texts. In De Marsico, M., Levialdi, S., and
Panizzi, E., editors, Proc. of the Working Conference on
Advanced Visual Interfaces (AVI 2002), pages 300–307,
Trento, Italy, May 2002. ACM Press, New York.
10. Herndon, K.P., Zeleznik, R.C., Robbins, D.C., Conner,
D.B., Snibbe, S.S., and van Dam, A. Interactive shadows.
In Proc. of ACM Symposium on User Interface
and Software Technology (Monterey, November 1992),
pages 1–6. ACM Press, New York, 1992.
11. Höhne, K.H., Pflesser, B., Pommert, A., Riemer, M.,
Schiemann, T., Schubert, R., and Tiede, U. A virtual
body model for surgical education and rehearsal. IEEE
Computer, 29(1):25–31, January 1996.
12. MacGregor, R. A Description Classifier for the Predicate
Calculus. In Hayes-Roth, B. and Korf, R., editors,
Proc. of the Twelfth Annual National Conference on
Artificial Intelligence (AAAI-94), pages 213–220, Seattle,
Washington, August 1994. AAAI Press, Menlo
Park.
13. Moreno, R. and Mayer, R.E. Cognitive principles of
multimedia learning. Journal of Educational Psychology,
91:358–368, 1999.
14. Pommert, A., Höhne, K.H., Pflesser, B., Richter, E.,
Riemer, M., Schiemann, T., Schubert, R., Schumacher,
U., and Tiede, U. Creating a high-resolution spatial/
symbolic model of the inner organs based on the visible
human. Medical Image Analysis, 5(3):221–228, 2001.
15. Quillian, M. Semantic Memory. In Minsky, M., editor,
Semantic Information Processing, chapter 4, pages
227–270. MIT Press, Cambridge, 1968.
16. Rehm, K., Lakshminaryan, K., Frutiger, S., Schaper,
K.A., Sumners, D.W., Strother, S.C., Anderson, J.R.,
and Rottenberg, D.A. A symbolic environment for
visualizing activated foci in functional neuroimaging
datasets. Medical Image Analysis, 2(3):215–226,
1998.
17. Ritter, F., Berendt, B., Fischer, B., Richter, R., and
Preim, B. Virtual 3d jigsaw puzzles: Studying the effect
of exploring spatial relations with implicit guidance. In
Herczeg, M., Prinz, W., and Oberquelle, H., editors,
Proc. of Mensch & Computer (Hamburg, September
2002), pages 363–372, Stuttgart, Leipzig, Wiesbaden,
2002. B.G.Teubner.
18. Ritter, F., Deussen, O., Preim, B., and Strothotte, T.
Virtual 3d puzzles: A new method for exploring geometric
models in vr. IEEE Computer Graphics &
Applications, 21(5):11–13, September/October 2001.
19. Ritter, F., Preim, B., Deussen, O., and Strothotte, T.
Using a 3d puzzle as a metaphor for learning spatial
relations. In Fels, S.S. and Poulin, P., editors, Proc. of
Graphics Interface (Montréal, May 2000), pages 171–178.
Morgan Kaufmann Publishers, San Francisco,
2000.
20. Seligmann, D.D. and Feiner, S. Automated generation
of intent-based 3d illustrations. In Proc. of ACM SIGGRAPH
Conference on Computer Graphics (Las
Vegas, July 1991), pages 123–132. ACM Press, New
York, 1991.
21. Stone, M.C., Fishkin, K., and Bier, E.A. The moveable
filter as a user interface tool. In Plaisant, C., editor,
Proc. of ACM CHI Conference on Human Factors in
Computing Systems (Boston, April 1994), pages 306–312.
ACM Press, New York, 1994.
22. Strothotte, T. and Schlechtweg, S. Non-Photorealistic
Computer Graphics: Modeling, Rendering, and Animation.
Morgan Kaufmann Publishers, San Francisco,
2002.
23. Viega, J., Conway, M.J., Williams, G., and Pausch, R.
3d magic lenses. In Proc. of ACM Symposium on User
Interface and Software Technology (Seattle, November
1996), pages 51–58. ACM Press, New York, 1996.
24. Wanger, L.R. The effect of shadow quality on the perception
of spatial relationships in computer generated
imagery. In Proc. of Symposium on Interactive 3D
Graphics (Cambridge, March 1992), pages 39–42.
ACM Press, New York, 1992.
See also: http://isgwww.cs.uni-magdeburg.de/research/is/
| Information visualization;Spreading activation |
103 | Impedance Coupling in Content-targeted Advertising | The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important . In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising , from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser's business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for "ads and keywords") can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results . They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms. | INTRODUCTION
The emergence of the Internet has opened up new marketing
opportunities. In fact, a company has now the possibility
of showing its advertisements (ads) to millions of people at a
low cost. During the 90's, many companies invested heavily
on advertising in the Internet with apparently no concerns
about their investment return [16]. This situation radically
changed in the following decade when the failure of many
Web companies led to a dropping in supply of cheap venture
capital and a considerable reduction in on-line advertising
investments [15, 16].
It was clear then that more effective strategies for on-line
advertising were required. For that, it was necessary to take
into account short-term and long-term interests of the users
related to their information needs [9, 14]. As a consequence,
many companies intensified the adoption of intrusive techniques
for gathering information about users, mostly without
their consent [8]. This raised privacy issues which stimulated
the research for less invasive measures [16].
More recently, Internet information gatekeepers as, for example
, search engines, recommender systems, and comparison
shopping services, have employed what is called paid
placement strategies [3].
In such methods, an advertiser
company is given prominent positioning in advertisement
lists in return for a placement fee. Amongst these methods,
the most popular one is a non-intrusive technique called keyword
targeted marketing [16]. In this technique, keywords
extracted from the user's search query are matched against
keywords associated with ads provided by advertisers. A
ranking of the ads, which also takes into consideration the
amount that each advertiser is willing to pay, is computed.
The top ranked ads are displayed in the search result page
together with the answers for the user query.
The success of keyword targeted marketing has motivated
information gatekeepers to offer their advertisement services
in different contexts. For example, as shown in Figure 1,
relevant ads could be shown to users directly in the pages of
information portals. The motivation is to take advantage of
the users' immediate information interests at browsing time.
The problem of matching ads to a Web page that is browsed,
which we also refer to as content-targeted advertising [1],
is different from that of keyword marketing. In this case,
instead of dealing with users' keywords, we have to use the
contents of a Web page to decide which ads to display.
Figure 1: Example of content-based advertising in
the page of a newspaper. The middle slice of the
page shows the beginning of an article about the
launch of a DVD movie. At the bottom slice, we can
see advertisements picked for this page by Google's
content-based advertising system, AdSense.
It is important to notice that paid placement advertising
strategies imply some risks to information gatekeepers.
For instance, there is the possibility of a negative impact
on their credibility which, in the long term, can diminish their
market share [3]. This makes investments in the quality of
ad recommendation systems even more important to minimize
the possibility of exhibiting ads unrelated to the user's
interests. By investing in their ad systems, information gatekeepers
are investing in the maintenance of their credibility
and in the reinforcement of a positive user attitude towards
the advertisers and their ads [14]. Further, that can translate
into higher clickthrough rates that lead to an increase in
revenues for information gatekeepers and advertisers, with
gains to all parts [3].
In this work, we focus on the problem of content-targeted
advertising. We propose new strategies for associating ads
with a Web page. Five of these strategies are referred to as
matching strategies. They are based on the idea of matching
the text of the Web page directly to the text of the ads and
its associated keywords. Five other strategies, which we here
introduce, are referred to as impedance coupling strategies.
They are based on the idea of expanding the Web page with
new terms to facilitate the task of matching ads and Web
pages. This is motivated by the observation that there is frequently
a mismatch between the vocabulary of a Web page
and the vocabulary of an advertisement. We say that there
is a vocabulary impedance problem and that our technique
provides a positive effect of impedance coupling by reducing
the vocabulary impedance. Further, all our strategies rely
on information that is already available to information gatekeepers
that operate keyword targeted advertising systems.
Thus, no other data from the advertiser is required.
Using a sample of a real case database with over 93,000
ads and 100 Web pages selected for testing, we evaluate our
ad recommendation strategies. First, we evaluate the five
matching strategies. They match ads to a Web page using
a standard vector model and provide what we may call
trivial solutions. Our results indicate that a strategy that
matches the ad plus its keywords to a Web page, requiring
the keywords to appear in the Web page, provides improvements
in average precision figures of roughly 60% relative
to a strategy that simply matches the ads to the Web page.
Such strategy, which we call AAK (for "ads and keywords"),
is then taken as our baseline.
Following we evaluate the five impedance coupling strategies
. They are based on the idea of expanding the ad and
the Web page with new terms to reduce the vocabulary
impedance between their texts. Our results indicate that it
is possible to generate extra improvements in average precision
figures of roughly 50% relative to the AAK strategy.
The paper is organized as follows. In section 2, we introduce
five matching strategies to solve content-targeted
advertising. In section 3, we present our impedance coupling
strategies. In section 4, we describe our experimental
methodology and datasets and discuss our results. In section
5 we discuss related work. In section 6 we present our
conclusions.
MATCHING STRATEGIES
Keyword advertising relies on matching search queries to
ads and its associated keywords. Context-based advertising
, which we address here, relies on matching ads and its
associated keywords to the text of a Web page.
Given a certain Web page p, which we call triggering page,
our task is to select advertisements related to the contents
of p. Without loss of generality, we consider that an advertisement
a_i is composed of a title, a textual description,
and a hyperlink. To illustrate, for the first ad by Google
shown in Figure 1, the title is "Star Wars Trilogy Full",
the description is "Get this popular DVD free. Free w/ free
shopping. Sign up now", and the hyperlink points to the site
"www.freegiftworld.com". Advertisements can be grouped
by advertisers in groups called campaigns, such that a campaign
can have one or more advertisements.
Given our triggering page p and a set A of ads, a simple
way of ranking a_i ∈ A with regard to p is by matching the
contents of p to the contents of a_i. For this, we use the vector
space model [2], as discussed in what follows.
In the vector space model, queries and documents are represented
as weighted vectors in an n-dimensional space. Let
w_iq be the weight associated with term t_i in the query q
and w_ij be the weight associated with term t_i in the document d_j.
Then, q = (w_1q, w_2q, ..., w_iq, ..., w_nq) and
d_j = (w_1j, w_2j, ..., w_ij, ..., w_nj) are the weighted vectors used to
represent the query q and the document d_j. These weights
can be computed using classic tf-idf schemes. In such schemes,
weights are taken as the product between factors that quantify
the importance of a term in a document (given by the
term frequency, or tf, factor) and its rarity in the whole collection
(given by the inverse document frequency, or idf, factor),
see [2] for details. The ranking of the query q with regard
to the document d_j is computed by the cosine similarity
formula, that is, the cosine of the angle between the two
corresponding vectors:

  sim(q, d_j) = (q · d_j) / (|q| |d_j|)
              = ( Σ_{i=1..n} w_iq · w_ij ) / ( √(Σ_{i=1..n} w_iq²) · √(Σ_{i=1..n} w_ij²) )    (1)

By considering p as the query and a_i as the document, we
can rank the ads with regard to the Web page p. This is our
first matching strategy. It is represented by the function AD
given by:

  AD(p, a_i) = sim(p, a_i)

where AD stands for "direct match of the ad, composed by
title and description" and sim(p, a_i) is computed according
to Eq. (1).
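The sketch below shows a straightforward implementation of this AD
ranking with tf-idf weighted cosine similarity as in Eq. (1); the
tokeniser, the toy idf values and the example data are assumptions made
only to keep the sketch self-contained.

    # Sketch of the AD strategy: rank ads against a triggering page with
    # tf-idf weighted cosine similarity (Eq. (1)). Toy data, naive tokenizer.
    import math
    from collections import Counter

    def tokenize(text):
        return text.lower().split()

    def tfidf_vector(tokens, idf):
        counts = Counter(tokens)
        return {t: counts[t] * idf.get(t, 0.0) for t in counts}

    def cosine(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        norm = math.sqrt(sum(x * x for x in u.values())) * \
               math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    def rank_ads_AD(page, ads, idf):
        p_vec = tfidf_vector(tokenize(page), idf)
        scored = [(cosine(p_vec, tfidf_vector(tokenize(a), idf)), a) for a in ads]
        return sorted(scored, reverse=True)

    # Toy idf values and data, for illustration only.
    idf = {"dvd": 2.0, "movie": 1.5, "star": 1.2, "wars": 2.2, "free": 0.5}
    ads = ["Star Wars Trilogy DVD free", "Cheap flights to Rome"]
    print(rank_ads_AD("new star wars movie released on dvd", ads, idf))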
In our second method, we use another source of evidence
provided by the advertisers: the keywords. With each advertisement a_i
an advertiser associates a keyword k_i, which
may be composed of one or more terms. We denote the
association between an advertisement a_i and a keyword k_i
as the pair (a_i, k_i) ∈ K, where K is the set of associations
made by the advertisers. In the case of keyword targeted
advertising, such keywords are used to match the ads to the
user queries. Here, we use them to match ads to the Web
page p. This provides our second method for ad matching,
given by:

KW(p, a_i) = sim(p, k_i)

where (a_i, k_i) ∈ K and KW stands for "match the ad keywords".
We notice that most of the keywords selected by advertisers
are also present in the ads associated with those keywords.
For instance, in our advertisement test collection,
this is true for 90% of the ads. Thus, instead of using the
keywords as matching devices, we can use them to emphasize
the main concepts in an ad, in an attempt to improve our
AD strategy. This leads to our third method of ad matching,
given by:

AD KW(p, a_i) = sim(p, a_i ∪ k_i)

where (a_i, k_i) ∈ K and AD KW stands for "match the ad and
its keywords".
Finally, it is important to notice that the keyword k_i associated
with a_i might not appear at all in the triggering page
p, even when a_i is highly ranked. However, if we assume that
k_i summarizes the main topic of a_i from the advertiser's
viewpoint, it may be desirable to ensure its presence
in p. This reasoning suggests that requiring the occurrence
of the keyword k_i in the triggering page p as a condition
to associate a_i with p might lead to improved results. This
leads to two extra matching strategies, as follows:

ANDKW(p, a_i) = sim(p, a_i) if k_i occurs in p, and 0 otherwise

AD ANDKW(p, a_i) = AAK(p, a_i) = sim(p, a_i ∪ k_i) if k_i occurs in p, and 0 otherwise

where (a_i, k_i) ∈ K, ANDKW stands for "match the ad keywords
and force their appearance", and AD ANDKW (or AAK, for "ads
and keywords") stands for "match the ad, its keywords, and
force their appearance".
As we will see in our results, the best among these simple
methods is AAK. Thus, it will be used as the baseline for our
impedance coupling strategies, which we now discuss.
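For illustration only, the five matching strategies could be expressed as below, reusing the sim and tokenize helpers sketched earlier. The Ad record, the string concatenation used to approximate the union of term sets, and the token-containment test for "k_i occurs in p" are simplifying assumptions on our part, not the authors' implementation.

```python
# Illustrative sketch of the five matching strategies (not the authors' code).
# Assumes sim(text_a, text_b) and tokenize(text) from the earlier sketch.
from collections import namedtuple

Ad = namedtuple("Ad", ["text", "keyword"])  # text = title + description

def keyword_in_page(keyword, page):
    # approximate "k_i occurs in p" by requiring every keyword token in the page
    return set(tokenize(keyword)) <= set(tokenize(page))

def AD(p, ad):       # direct match of the ad text
    return sim(p, ad.text)

def KW(p, ad):       # match only the advertiser-provided keyword
    return sim(p, ad.keyword)

def AD_KW(p, ad):    # ad text augmented with its keyword
    return sim(p, ad.text + " " + ad.keyword)

def ANDKW(p, ad):    # like AD, but the keyword must occur in the page
    return AD(p, ad) if keyword_in_page(ad.keyword, p) else 0.0

def AAK(p, ad):      # AD ANDKW: ad text + keyword, keyword required in the page
    return AD_KW(p, ad) if keyword_in_page(ad.keyword, p) else 0.0
```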
IMPEDANCE COUPLING STRATEGIES
Two key issues become clear as one plays with the content-targeted
advertising problem. First, the triggering page normally
belongs to a broader contextual scope than that of the
advertisements. Second, the association between a good advertisement
and the triggering page might depend on a topic
that is not mentioned explicitly in the triggering page.
The first issue is due to the fact that Web pages can be
about any subject and that advertisements are concise in
nature. That is, ads tend to be more topic restricted than
Web pages. The second issue is related to the fact that, as
we later discuss, most advertisers place a small number of
advertisements. As a result, we have few terms describing
their interest areas. Consequently, these terms tend to be
of a more general nature. For instance, a car shop probably
would prefer to use "car" instead of "super sport" to describe
its core business topic.
As a consequence, many specific
terms that appear in the triggering page find no match in
the advertisements. To make matters worse, a page might
refer to an entity or subject of the world through a label
that is distinct from the label selected by an advertiser to
refer to the same entity.
A consequence of these two issues is that vocabularies of
pages and ads have low intersection, even when an ad is
related to a page. We cite this problem from now on as
the vocabulary impedance problem. In our experiments, we
realized that this problem limits the final quality of direct
matching strategies. Therefore, we studied alternatives to
reduce the referred vocabulary impedance.
For this, we propose to expand the triggering pages with
new terms. Figure 2 illustrates our intuition. We already
know that the addition of keywords (selected by the advertiser
) to the ads leads to improved results. We say that a
keyword reduces the vocabulary impedance by providing an
alternative matching path. Our idea is to add new terms
(words) to the Web page p to also reduce the vocabulary
impedance by providing a second alternative matching path.
We refer to our expansion technique as impedance coupling.
For this, we proceed as follows.
Figure 2: Addition of new terms to a Web page to reduce the vocabulary impedance.
An advertiser trying to describe a certain topic in a concise
way probably will choose general terms to characterize that
topic. To facilitate the matching between this ad and our
triggering page p, we need to associate new general terms
with p. For this, we assume that Web documents similar
to the triggering page p share common topics. Therefore,
498
by inspecting the vocabulary of these similar documents we
might find good terms for better characterizing the main
topics in the page p. We now describe this idea using a
Bayesian network model [10, 11, 13] depicted in Figure 3.
Figure 3: Bayesian network model for our impedance coupling technique.
In our model, which is based on the belief network in [11],
the nodes represent pieces of information in the domain.
With each node is associated a binary random variable,
which takes the value 1 to mean that the corresponding entity
(a page or terms) is observed and, thus, relevant in our
computations. In this case, we say that the information was
observed. Node R represents the page r, a new representation
for the triggering page p. Let N be the set of the k
most similar documents to the triggering page, including the
triggering page p itself, in a large enough Web collection C.
Root nodes D_0 through D_k represent the documents in N,
that is, the triggering page D_0 and its k nearest neighbors,
D_1 through D_k, among all pages in C. There is an edge
from node D_j to node R if document d_j is in N. Nodes
T_1 through T_m represent the terms in the vocabulary of C.
There is an edge from node D_j to a node T_i if term t_i occurs
in document d_j. In our model, the observation of the pages
in N leads to the observation of a new representation of the
triggering page p and to a set of terms describing the main
topics associated with p and its neighbors.
Given these definitions, we can now use the network to
determine the probability that a term t_i is a good term for
representing a topic of the triggering page p. In other words,
we are interested in the probability of observing the final
evidence regarding a term t_i, given that the new representation
of the page p has been observed, P(T_i = 1 | R = 1). (To simplify
our notation, we represent the probabilities P(X = 1) as P(X)
and P(X = 0) as P(X̄).) This translates into the following equation:

P(T_i | R) = (1 / P(R)) Σ_d P(T_i | d) P(R | d) P(d)    (2)
where d represents the set of states of the document nodes.
Since we are interested just in the states in which only a
single document d_j is observed and P(d) can be regarded as
a constant, we can rewrite Eq. (2) as:

P(T_i | R) = (η / P(R)) Σ_{j=0}^{k} P(T_i | d_j) P(R | d_j)    (3)

where d_j here denotes the state of the document nodes in
which only document d_j is observed and η is a constant
associated with P(d_j). Eq. (3) is the general equation to
compute the probability that a term t_i is related to the triggering
page. We now define the probabilities P(T_i | d_j) and
P(R | d_j) as follows:

P(T_i | d_j) = w_ij    (4)
P(R | d_j) = η (1 - α)          if j = 0
             η α sim(r, d_j)    if 1 ≤ j ≤ k    (5)

where η is a normalizing constant, w_ij is the weight associated
with term t_i in the document d_j, and sim(p, d_j) is
given by Eq. (1), i.e., is the cosine similarity between p and
d_j. The weight w_ij is computed using a classic tf-idf scheme
and is zero if term t_i does not occur in document d_j. Notice
that P(T̄_i | d_j) = 1 - P(T_i | d_j) and P(R̄ | d_j) = 1 - P(R | d_j).
By defining the constant α, it is possible to determine how
important the influence of the triggering page p should be
to its new representation r. By substituting Eq. (4) and
Eq. (5) into Eq. (3), we obtain:

P(T_i | R) = ρ ( (1 - α) w_i0 + α Σ_{j=1}^{k} w_ij sim(r, d_j) )    (6)

where ρ is a normalizing constant.
We use Eq. (6) to determine the set of terms that will
compose r, as illustrated in Figure 2. Let t_top be the top
ranked term according to Eq. (6). The set r is composed
of the terms t_i such that P(T_i | R) / P(T_top | R) ≥ β, where β is a given
threshold. In our experiments, we have used β = 0.05. Notice
that the set r might contain terms that already occur
in p. That is, while we will refer to the set r as expansion
terms, it should be clear that p ∩ r ≠ ∅ in general.
By using α = 0, we simply consider the terms originally
in page p. By increasing α, we relax the context of the page
p, adding terms from neighbor pages, turning page p into its
new representation r. This is important because, sometimes,
a topic apparently not important in the triggering page offers
a good opportunity for advertising. For example, consider
a triggering page that describes a congress in London about
digital photography. Although London is probably not an
important topic in this page, advertisements about hotels
in London would be appropriate. Thus, adding "hotels" to
page p is important. This suggests using α > 0, that is,
preserving the contents of p and using the terms in r to
expand p.
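As an illustration of how the expansion terms could be computed in practice, the sketch below scores each candidate term with the bracketed part of Eq. (6) (the normalizing constant cancels in the ratio against the top-ranked term) and keeps the terms above the threshold. The input structures, function names, and the default value of α are assumptions made for the example; only β = 0.05 comes from the text.

```python
# Illustrative sketch of selecting the expansion set r via Eq. (6).
# w[j] maps terms to tf-idf weights in d_j (d_0 is the triggering page,
# d_1..d_k its nearest neighbors); sims[j] = sim(p, d_j) for j >= 1
# (sims[0] is unused). alpha is a free parameter here; beta = 0.05 as in the text.

def expansion_terms(w, sims, alpha=0.5, beta=0.05):
    vocabulary = set()
    for weights in w:
        vocabulary.update(weights)
    scores = {}
    for t in vocabulary:
        neighbors = sum(w[j].get(t, 0.0) * sims[j] for j in range(1, len(w)))
        scores[t] = (1 - alpha) * w[0].get(t, 0.0) + alpha * neighbors
    top = max(scores.values(), default=0.0) or 1.0
    return {t for t, s in scores.items() if s / top >= beta}
```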
In this paper, we examine both approaches. Thus, in our
sixth method we match r, the set of new expansion terms,
directly to the ads, as follows:
AAK T(p, a_i) = AAK(r, a_i)

where AAK T stands for "match the ad and keywords to the
set r of expansion terms".
In our seventh method, we match an expanded page p to
the ads as follows:

AAK EXP(p, a_i) = AAK(p ∪ r, a_i)
where AAK EXP stands for "match the ad and keywords to
the expanded triggering page".
To improve our ad placement methods, another external
source that we can use is the content of the page h pointed to
by the advertisement's hyperlink, that is, its landing page.
After all, this page comprises the real target of the ad and
perhaps could present a more detailed description of the
product or service being advertised. Given that the advertisement a_i
points to the landing page h_i, we denote this
association as the pair (a_i, h_i) ∈ H, where H is the set of
associations between the ads and the pages they point to.
Our eighth method consists of matching the triggering page
p to the landing pages pointed to by the advertisements, as
follows:

H(p, a_i) = sim(p, h_i)

where (a_i, h_i) ∈ H and H stands for "match the hyperlink
pointed to by the ad".
We can also combine this information with the more promising
methods previously described, AAK and AAK EXP, as follows.
Given that (a_i, h_i) ∈ H and (a_i, k_i) ∈ K, we have our
last two methods:

AAK H(p, a_i) = sim(p, a_i ∪ h_i ∪ k_i) if k_i occurs in p, and 0 otherwise

AAK EXP H(p, a_i) = sim(p ∪ r, a_i ∪ h_i ∪ k_i) if k_i occurs in p ∪ r, and 0 otherwise

where AAK H stands for "match ads and keywords also considering
the page pointed to by the ad" and AAK EXP H stands
for "match ads and keywords with expanded triggering page,
also considering the page pointed to by the ad".
Notice that other combinations were not considered in this
study due to space restrictions. These other combinations
led to poor results in our experimentation and for this reason
were discarded.
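A sketch of the landing-page combinations, in the same style as the earlier ones, is given below. We assume the ad record also carries the landing-page text as ad.landing, that r is the set of expansion terms, and that string concatenation again stands in for the union of term sets; these are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the landing-page strategies (assumes sim, tokenize,
# and keyword_in_page from the earlier sketches; ad.landing is assumed).

def H(p, ad):
    # match the triggering page to the landing page h_i only
    return sim(p, ad.landing)

def AAK_H(p, ad):
    # ad text + landing page + keyword, keyword required in the page
    combined = " ".join([ad.text, ad.landing, ad.keyword])
    return sim(p, combined) if keyword_in_page(ad.keyword, p) else 0.0

def AAK_EXP_H(p, r, ad):
    # same, but against the page expanded with its expansion terms r
    expanded = p + " " + " ".join(sorted(r))
    combined = " ".join([ad.text, ad.landing, ad.keyword])
    return sim(expanded, combined) if keyword_in_page(ad.keyword, expanded) else 0.0
```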
EXPERIMENTS
To evaluate our ad placement strategies, we performed
a series of experiments using a sample of a real case ad
collection with 93,972 advertisements, 1,744 advertisers, and
68,238 keywords (data in Portuguese, provided by an on-line
advertisement company that operates in Brazil). The advertisements are grouped in 2,029
campaigns with an average of 1.16 campaigns per advertiser.
For the strategies AAK T and AAK EXP, we had to generate
a set of expansion terms. For that, we used a database
of Web pages crawled by the TodoBR search engine [12]
(http://www.todobr.com.br/). This database is composed
of 5,939,061 pages of the Brazilian Web, under the domain
".br". For the strategies H, AAK H, and AAK EXP H, we also
crawled the pages pointed to by the advertisers. No other
filtering method was applied to these pages besides the removal
of HTML tags.
Since we are initially interested in the placement of advertisements
in the pages of information portals, our test collection
was composed of 100 pages extracted from a Brazilian
newspaper. These are our triggering pages. They were
crawled in such a way that only the contents of their articles
was preserved. As we have no preferences for particular
topics, the crawled pages cover topics as diverse as politics,
economy, sports, and culture.
For each of our 100 triggering pages, we selected the top
three ranked ads provided by each of our 10 ad placement
strategies. Thus, for each triggering page we select no more
than 30 ads. These top ads were then inserted in a pool
for that triggering page. Each pool contained an average of
15.81 advertisements. All advertisements in each pool were
submitted to a manual evaluation by a group of 15 users.
The average number of relevant advertisements per page
pool was 5.15. Notice that we adopted the same pooling
method used to evaluate the TREC Web-based collection [6].
To quantify the precision of our results, we used 11-point
average figures [2]. Since we are not able to evaluate the
entire ad collection, recall values are relative to the set of
evaluated advertisements.
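For reference, 11-point average precision can be computed as in the sketch below, which uses the standard interpolation of classic IR evaluation; whether the authors used exactly this interpolation is an assumption on our part.

```python
# Illustrative sketch of 11-point interpolated average precision.
# ranked_relevance: booleans over the ranked list (True = judged relevant);
# total_relevant: number of relevant ads known for the triggering page.

def eleven_point_average(ranked_relevance, total_relevant):
    precisions, recalls, hits = [], [], 0
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
        precisions.append(hits / i)
        recalls.append(hits / total_relevant)
    levels = [i / 10 for i in range(11)]  # recall levels 0.0, 0.1, ..., 1.0
    interpolated = []
    for level in levels:
        at_or_above = [p for p, r in zip(precisions, recalls) if r >= level]
        interpolated.append(max(at_or_above) if at_or_above else 0.0)
    return sum(interpolated) / len(levels)
```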
4.2 Tuning Idf Factors
We start by analyzing the impact of different idf factors
in our advertisement collection. Idf factors are important
because they quantify how discriminative a term is in the
collection. In our ad collection, idf factors can be computed
by taking ads, advertisers, or campaigns as documents. To
exemplify, consider the computation of "ad idf" for a term
t_i that occurs in 9 ads out of a collection of 100 ads. Then, the
inverse document frequency of t_i is given by:

idf_i = log(100/9)
Hence, we can compute ad, advertiser or campaign idf factors
. As we observe in Figure 4, for the AD strategy, the best
ranking is obtained by the use of campaign idf, that is, by
calculating our idf factor so that it discriminates campaigns.
Similar results were obtained for all the other methods.
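The three idf variants can be obtained by simply changing the unit treated as a "document", as in the sketch below; the field names used for grouping are illustrative assumptions, and tokenize comes from the earlier sketch.

```python
# Illustrative sketch of ad, advertiser, and campaign idf: group the ads by the
# chosen key, treat each group as one document, and apply idf_i = log(N / n_i).
import math
from collections import Counter, defaultdict

def grouped_idf(ads, key):
    # ads: records with .text plus an attribute named by `key` (assumption)
    groups = defaultdict(set)
    for ad in ads:
        groups[getattr(ad, key)].update(tokenize(ad.text))
    n_groups = len(groups)
    df = Counter()
    for terms in groups.values():
        df.update(terms)
    return {t: math.log(n_groups / n) for t, n in df.items()}

# campaign-level idf, used in all remaining experiments:
# IDF.update(grouped_idf(ads, "campaign"))
```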
Figure 4: Precision-recall curves obtained for the AD strategy using ad, advertiser, and campaign idf factors.
This reflects the fact that terms might be better discriminators
for a business topic than for a specific ad. This
effect can be accomplished by calculating idf factors relative
to advertisers or campaigns instead of ads. In fact, campaign
idf factors yielded the best results. Thus, they will be
used in all the experiments reported from now on.
4.3 Results
Matching Strategies
Figure 5 displays the results for the matching strategies presented
in Section 2. As shown, directly matching the contents
of the ad to the triggering page (AD strategy) is not so
effective. The reason is that the ad contents are very noisy.
They may contain messages that do not properly describe the
ad topics, such as requests for user actions (e.g., "visit our
site") and general sentences that could be applied to any
product or service (e.g., "we deliver to the whole country").
On the other hand, an advertiser-provided keyword
summarizes well the topic of the ad. As a consequence, the
KW strategy is superior to the AD and AD KW strategies. This
situation changes when we require the keywords to appear
in the target Web page. By filtering out ads whose keywords
do not occur in the triggering page, much noise is discarded.
This makes ANDKW a better alternative than KW. Further, in
this new situation, the contents of the ad becomes useful
to rank the most relevant ads making AD ANDKW (or AAK for
"ads and keywords") the best among all described methods.
For this reason, we adopt AAK as our baseline in the next set
of experiments.
Figure 5: Comparison among our five matching strategies (AD, AD_KW, KW, ANDKW, AAK). AAK ("ads and keywords") is superior.
Table 1 illustrates average precision figures for Figure 5.
We also present actual hits per advertisement slot. We call
"hit" an assignment of an ad (to the triggering page) that
was considered relevant by the evaluators. We notice that
our AAK strategy provides a gain in average precision of 60%
relative to the trivial AD strategy. This shows that careful
consideration of the evidence related to the problem does
pay off.
Impedance Coupling Strategies
Table 2 shows top ranked terms that occur in a page covering
Argentinean wines produced using grapes derived from
the Bordeaux region of France. The p column includes the
top terms for this page ranked according to our tf-idf weighting
scheme. The r column includes the top ranked expansion
terms generated according to Eq. (6). Notice that the
expansion terms not only emphasize important terms of the
target page (by increasing their weights) such as "wines" and
Methods           #1   #2   #3   total   11-pt average score   gain(%)
AD                41   32   13    86     0.104                 -
AD KW             51   28   17    96     0.106                 +1.9
KW                46   34   28   108     0.125                 +20.2
ANDKW             49   37   35   121     0.153                 +47.1
AD ANDKW (AAK)    51   48   39   138     0.168                 +61.5

Table 1: Average precision figures, corresponding to Figure 5, for our five matching strategies. Columns labelled #1, #2, and #3 indicate total of hits in first, second, and third advertisement slots, respectively. The AAK strategy provides improvements of 60% relative to the AD strategy.
Rank   p: term      p: score   r: term       r: score
1      argentina    0.090      wines         0.251
2      obtained*    0.047      wine*         0.140
3      class*       0.036      whites        0.091
4      whites       0.035      red*          0.057
5      french*      0.031      grape         0.051
6      origin*      0.029      bordeaux      0.045
7      france*      0.029      acideness*    0.038
8      grape        0.017      argentina     0.037
9      sweet*       0.016      aroma*        0.037
10     country*     0.013      blanc*        0.036
...
35     wines        0.010

Table 2: Top ranked terms for the triggering page p according to our tf-idf weighting scheme and top ranked terms for r, the expansion terms for p, generated according to Eq. (6). Ranking scores were normalized in order to sum up to 1. Terms marked with `*' are not shared by the sets p and r.
"whites", but also reveal new terms related to the main topic
of the page such as "aroma" and "red". Further, they avoid
some uninteresting terms such as "obtained" and "country".
Figure 6 illustrates our results when the set r of expansion
terms is used. They show that matching the ads to
the terms in the set r instead of to the triggering page p
(AAK T strategy) leads to a considerable improvement over
our baseline, AAK. The gain is even larger when we use the
terms in r to expand the triggering page (AAK EXP method).
This confirms our hypothesis that the triggering page could
have some interesting terms that should not be completely
discarded.
Finally, we analyze the impact on the ranking of using the
contents of the pages pointed to by the ads. Figure 7 displays our
results. It is clear that using only the contents of the pages
pointed to by the ads (H strategy) yields very poor results.
However, combining evidence from the pages pointed to by the
ads with our baseline yields improved results.
Most importantly, combining our best strategy so far (AAK EXP) with
the pages pointed to by the ads (AAK EXP H strategy) leads to superior
results. This happens because the two additional sources
of evidence, expansion terms and pages pointed by the ads,
are distinct and complementary, providing extra and valuable
information for matching ads to a Web page.
Figure 6: Impact of using a new representation for the triggering page, one that includes expansion terms (curves for AAK_EXP, AAK_T, and AAK).
Figure 7: Impact of using the contents of the page pointed to by the ad (the hyperlink); curves for AAK_EXP_H, AAK_H, AAK, and H.
Figure 8 and Table 3 summarize all results described in
this section.
In Figure 8 we show precision-recall curves
and in Table 3 we show 11-point average figures. We also
present actual hits per advertisement slot and gains in average
precision relative to our baseline, AAK. We notice that
the highest number of hits in the first slot was generated by
the method AAK EXP. However, the method with best overall
retrieval performance was AAK EXP H, yielding a gain in
average precision figures of roughly 50% over the baseline
(AAK).
4.4 Performance Issues
In a keyword targeted advertising system, ads are assigned
at query time, thus the performance of the system is a very
important issue. In content-targeted advertising systems,
we can associate ads with a page at publishing (or updating
) time. Also, if a new ad comes in we might consider
assigning this ad to already published pages in offline mode.
That is, we might design the system such that its performance
depends fundamentally on the rate that new pages
Figure 8: Comparison among our ad placement strategies (AAK_EXP_H, AAK_EXP, AAK_T, AAK_H, AAK, and H).
Methods        #1   #2   #3   total   11-pt average score   gain(%)
H              28    5    6    39     0.026                 -84.3
AAK            51   48   39   138     0.168                 -
AAK H          52   50   46   148     0.191                 +13.5
AAK T          65   49   43   157     0.226                 +34.6
AAK EXP        70   52   53   175     0.242                 +43.8
AAK EXP H      64   61   51   176     0.253                 +50.3

Table 3: Results for our impedance coupling strategies.
are published and the rate that ads are added or modified.
Further, the data needed by our strategies (page crawling,
page expansion, and ad link crawling) can be gathered and
processed offline, not affecting the user experience. Thus,
from this point of view, the performance is not critical and
will not be addressed in this work.
RELATED WORK
Several works have stressed the importance of relevance
in advertising. For example, in [14] it was shown that advertisements
that are presented to users when they are not
interested in them are viewed just as an annoyance.
Thus,
in order to be effective, the authors conclude that advertisements
should be relevant to consumer concerns at the
time of exposure. The results in [9] reinforce this conclusion
by pointing out that the more targeted the advertising, the
more effective it is.
Therefore it is not surprising that other works have addressed
the relevance issue. For instance, [8] proposes
a system called ADWIZ that is able to adapt online advertisements
to a user's short-term interests in a non-intrusive
way. Contrary to our work, ADWIZ does not directly use
the content of the page viewed by the user. It relies on search
keywords supplied by the user to search engines and on the
URL of the page requested by the user. On the other hand,
in [7] the authors presented an intrusive approach in which
an agent sits between advertisers and the user's browser allowing
a banner to be placed into the currently viewed page.
In spite of having the opportunity to use the page's content,
the agent infers relevance based on category information and
the user's private information collected over time.
In [5] the authors provide a comparison between the ranking
strategies used by Google and Overture for their keyword
advertising systems. Both systems select advertisements by
matching them to the keywords provided by the user in a
search query and rank the resulting advertisement list according
to the advertisers' willingness to pay. In particular
, Google approach also considers the clickthrough rate
of each advertisement as an additional evidence for its relevance
. The authors conclude that Google's strategy is better
than that used by Overture. As mentioned before, the ranking
problem in keyword advertising is different from that of
content-targeted advertising. Instead of dealing with keywords
provided by users in search queries, we have to deal
with the contents of a page which can be very diffuse.
Finally, the work in [4] focuses on improving search engine
results in a TREC collection by means of an automatic
query expansion method based on kNN [17]. Such a method
resembles our expansion approach presented in section 3.
Our method, however, is different from that presented in [4]. They
expand user queries applied to a document collection with
terms extracted from the top k documents returned as answer
to the query in the same collection. In our case, we
use two collections: an advertisement and a Web collection.
We expand triggering pages with terms extracted from the
Web collection and then we match these expanded pages to
the ads from the advertisement collection. By doing this, we
emphasize the main topics of the triggering pages, increasing
the possibility of associating relevant ads with them.
CONCLUSIONS
In this work we investigated ten distinct strategies for associating
ads with a Web page that is browsed (content-targeted
advertising).
Five of our strategies attempt to
match the ads directly to the Web page. Because of that,
they are called matching strategies. The other five strategies
recognize that there is a vocabulary impedance problem
between ads and Web pages and attempt to solve the problem
by expanding the Web pages and the ads with new terms.
Because of that they are called impedance coupling strategies
.
Using a sample of a real case database with over 93 thousand
ads, we evaluated our strategies. For the five matching
strategies, our results indicated that planned consideration
of additional evidence (such as the keywords provided by the
advertisers) yielded gains in average precision figures (for
our test collection) of 60%. This was obtained by a strategy
called AAK (for "ads and keywords"), which is taken as
the baseline for evaluating our more advanced impedance
coupling strategies.
For our five impedance coupling strategies, the results indicate
that additional gains in average precision of 50% (now
relative to the AAK strategy) are possible. These were generated
by expanding the Web page with new terms (obtained
using a sample Web collection containing over five million
pages) and the ads with the contents of the page they point
to (a hyperlink provided by the advertisers).
These are first time results that indicate that high quality
content-targeted advertising is feasible and practical.
ACKNOWLEDGEMENTS
This work was supported in part by the GERINDO project
, grant MCT/CNPq/CT-INFO 552.087/02-5, by CNPq
grant 300.188/95-1 (Berthier Ribeiro-Neto), and by CNPq
grant 303.576/04-9 (Edleno Silva de Moura). Marco Cristo
is supported by Fucapi, Manaus, AM, Brazil.
REFERENCES
[1] The Google adwords. Google content-targeted advertising. http://adwords.google.com/select/ct_faq.html, November 2004.
[2] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley-Longman, 1st edition, 1999.
[3] H. K. Bhargava and J. Feng. Paid placement strategies for internet search engines. In Proceedings of the eleventh international conference on World Wide Web, pages 117-123. ACM Press, 2002.
[4] E. P. Chan, S. Garcia, and S. Roukos. Trec-5 ad hoc retrieval using k nearest-neighbors re-scoring. In The Fifth Text REtrieval Conference (TREC-5). National Institute of Standards and Technology (NIST), November 1996.
[5] J. Feng, H. K. Bhargava, and D. Pennock. Comparison of allocation rules for paid placement advertising in search engines. In Proceedings of the 5th international conference on Electronic commerce, pages 294-299. ACM Press, 2003.
[6] D. Hawking, N. Craswell, and P. B. Thistlewaite. Overview of TREC-7 very large collection track. In The Seventh Text REtrieval Conference (TREC-7), pages 91-104, Gaithersburg, Maryland, USA, November 1998.
[7] Y. Kohda and S. Endo. Ubiquitous advertising on the www: merging advertisement on the browser. Comput. Netw. ISDN Syst., 28(7-11):1493-1499, 1996.
[8] M. Langheinrich, A. Nakamura, N. Abe, T. Kamba, and Y. Koseki. Unintrusive customization techniques for web advertising. Comput. Networks, 31(11-16):1259-1272, 1999.
[9] T. P. Novak and D. L. Hoffman. New metrics for new media: toward the development of web measurement standards. World Wide Web J., 2(1):213-246, 1997.
[10] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of plausible inference. Morgan Kaufmann Publishers, 2nd edition, 1988.
[11] B. Ribeiro-Neto and R. Muntz. A belief network model for IR. In Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 253-260, Zurich, Switzerland, August 1996.
[12] A. Silva, E. Veloso, P. Golgher, B. Ribeiro-Neto, A. Laender, and N. Ziviani. CobWeb - a crawler for the brazilian web. In Proceedings of the String Processing and Information Retrieval Symposium (SPIRE'99), pages 184-191, Cancun, Mexico, September 1999.
[13] H. Turtle and W. B. Croft. Evaluation of an inference network-based retrieval model. ACM Transactions on Information Systems, 9(3):187-222, July 1991.
[14] C. Wang, P. Zhang, R. Choi, and M. Daeredita. Understanding consumers attitude toward advertising. In Eighth Americas Conference on Information Systems, pages 1143-1148, August 2002.
[15] M. Weideman. Ethical issues on content distribution to digital consumers via paid placement as opposed to website visibility in search engine results. In The Seventh ETHICOMP International Conference on the Social and Ethical Impacts of Information and Communication Technologies, pages 904-915. Troubador Publishing Ltd, April 2004.
[16] M. Weideman and T. Haig-Smith. An investigation into search engines as a form of targeted advert delivery. In Proceedings of the 2002 annual research conference of the South African institute of computer scientists and information technologists on Enablement through technology, pages 258-258. South African Institute for Computer Scientists and Information Technologists, 2002.
[17] Y. Yang. Expert network: Effective and efficient learning from human decisions in text categorization and retrieval. In W. B. Croft and C. J. van Rijsbergen, editors, Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pages 13-22. Springer-Verlag, 1994.
| ;advertisements;triggering page;Bayesian networks;Advertising;matching;kNN;Web;content-targeted advertising;impedance coupling |
104 | Implementing the IT Fundamentals Knowledge Area | The recently promulgated IT model curriculum contains IT fundamentals as one of its knowledge areas. It is intended to give students a broad understanding of (1) the IT profession and the skills that students must develop to become successful IT professionals and (2) the academic discipline of IT and its relationship to other disciplines. As currently defined, the IT fundamentals knowledge area requires 33 lecture hours to complete. The model curriculum recommends that the material relevant to the IT fundamentals knowledge area be offered early in the curriculum, for example in an introduction to IT course; however, many institutions will have to include additional material in an introductory IT course. For example, the Introduction of IT course at Georgia Southern University is used to introduce students to the available second disciplines (an important part of the Georgia Southern IT curriculum aimed at providing students with in-depth knowledge of an IT application domain), some productivity tools, and SQL. For many programs there may be too much material in an introductory IT course. This paper describes how Georgia Southern University resolved this dilemma. | INTRODUCTION
The recently promulgated IT Model Curriculum, available at
http://sigite.acm.org/activities/curriculum/, consists of 12 knowledge
areas including IT fundamentals (ITF). ITF is intended to
provide students with a set of foundation skills and provide an
overview of the discipline of IT and its relationship to other
computing disciplines. It is also intended to help students
understand the diverse contexts in which IT is used and the
challenges inherent in the diffusion of innovative technology.
Given its foundational nature, it will not come as a surprise that
the model curriculum recommends that ITF is covered early in a
student's program of study, and it seems most logical that this
knowledge area be covered in an introductory course in a
baccalaureate program in IT.
The IT Model curriculum recommends a minimum coverage of 33
lecture hours for the ITF knowledge area; however, a typical 3-credit
semester course gives an instructor, at most, 45 lecture
hours, and many programs will have to include additional material
in an introductory course. For example, an important element of
the IT program at Georgia Southern University is the inclusion of
second disciplines, coherent sets of 7 courses in an IT application
area, such as electronic broadcasting, law enforcement, music
technology, and supply chain management ([5], [6]). Since
students must begin introductory courses in their second
discipline relatively early in their academic program, it is
important that they be exposed to the range of second disciplines
available to them early, and the most appropriate place to do this
is in the introductory IT course. Also, students enrolling in the
introductory IT course at Georgia Southern are not expected to
have taken a computer literacy course beforehand, and it has
become clear that many are weak in the use of spreadsheets. Since
the program strongly believes that IT graduates can be expected to
be conversant with basic productivity tools, including
spreadsheets, the course must cover the basics of spreadsheet
application. Finally, the introductory IT course must also provide
a basic coverage of SQL, because the web design course, which
covers n-tier architectures and requires a basic knowledge of SQL,
is taught before the data management course in which SQL is
normally presented.
While the additional material that has to be covered in an
introductory IT course is likely to differ between institutions, it is
likely that many, if not all, IT programs will have to cover some
additional material. Given that ITF already requires 33 lecture
hours, considerable pressure is placed upon instructors in
introductory IT courses to cover both the ITF material and
whatever additional material needs to be included.
The intent of this paper is to describe how this particular dilemma
was resolved at Georgia Southern University. Section 2 provides
more details about the IT fundamentals knowledge area, while
section 3 discusses the introduction to IT course offered at
Georgia Southern University. Section 4 concludes.
THE IT FUNDAMENTALS KNOWLEDGE AREA
The IT Model Curriculum follows the example set by the
Computer Science model curriculum (http://www.acm.org/
education/curricula.html) and distinguishes between a number of
knowledge areas, each consisting of a number of knowledge units.
Knowledge units are themselves composed of topics and learning
outcomes. For reasons explained in ([4]), the IT model curriculum
differs from the computer science model curriculum in that it
distinguishes between core learning outcomes, which every
graduate from an IT program is expected to achieve, and elective
learning outcomes, which only graduates specializing in this area
are expected to achieve. Given the foundational nature of ITF, it
should come as no surprise that ITF only has core learning
outcomes associated with it.
Below are listed the knowledge units and the core learning
outcomes associated with each. The number in parentheses after each
knowledge unit is the minimum recommended coverage, expressed
in lecture hours.
ITF1. Pervasive themes in IT (17)
1. Describe the components of IT systems and their
interrelationships.
2. Describe how complexity occurs in IT.
3. Recognize that an IT professional must know how to
manage complexity.
4. List examples of tools and methods used in IT for
managing complexity.
5. Describe the role of the IT professional as the user
advocate.
6. Explain why life-long learning and continued
professional development is critical for an IT
professional.
7. Explain why adaptability and interpersonal skills are
important to an IT professional.
8. Distinguish between data and information, and describe
the interrelationship.
9. Describe the importance of data and information in IT.
10. Explain why the mastery of information and
communication technologies is important to an IT
professional.
11. Explain why the IAS perspective needs to pervade all
aspects of IT.
ITF2. Organizational Issues (6)
1. Describe the elements of a feasible IT application.
2. Identify the extent and activities involved in an IT
application.
3. Understand the requirements of the business processes.
4. Outline the project management processes.
5. List the integration processes.
ITF3. History of IT (3)
1. Outline the history of computing technology.
2. Describe significant impacts of computing on society.
3. Describe significant changes in human-computer
interaction.
4. Outline the history of the Internet.
ITF4. IT and its related and informing disciplines (3)
1. Define "Information Technology."
2. Describe the relationship between IT and other
computing disciplines.
3. Describe the relationship between IT and non-computing
disciplines.
4. Explain why mathematics and statistics are important in
IT.
ITF5. Application domains (2)
1. Describe the application of IT in non-computing
disciplines.
2. Describe how IT has impacted almost all aspects of
modern living.
3. Describe ways and extents in which IT has changed the
interaction and communication in our society.
4. Describe how IT has impacted the globalization of world economy, culture, political systems, health, security, warfare, etc.
ITF6. Application of math and statistics to IT (2)
1. Recognize the foundation of IT is built upon the various
aspects of mathematics.
2. Understand the number systems used in computation.
3. Explain data representation and encoding systems.
4. Describe the current encryption methods and their
limitations.
5. Describe the pervasive usage of mathematical concepts,
such as functions, relations, sets as well as basic logic
used in programming.
6. Recognize the value of probability and statistics.
7. Describe the basic data analysis concepts and methods
used in IT applications.
The total minimum recommended coverage thus is 33 lecture
hours.
THE INTRODUCTION TO IT COURSE AT GEORGIA SOUTHERN UNIVERSITY
The introduction to IT course (IT 1130) offered in the Department
of IT at Georgia Southern University is designed to introduce
students to IT as a discipline and cover some productivity tools,
namely Excel and Access. In line with all other IT courses at
Georgia Southern University, IT 1130 was formulated through a
set of explicit learning outcomes. The learning outcomes for IT
1130 are
1. Demonstrate a basic understanding of the field of IT,
including the ability to
i. Define the term "Information Technology";
ii. Recognize the disciplines that have contributed to the
emergence of IT, namely computer science, information
systems, and computer engineering;
iii. Identify areas in which IT has significantly impacted
individuals, organizations and/or societies.
2. Demonstrate an understanding of basic information
technology software applications, including the ability to
i. Using a given specification, create a simple database;
ii. Use SQL for simple queries;
iii. Use an office productivity suite.
The overlap between Objective 1 and the ITF Knowledge Area is
significant; however, due to Objective 2, the introductory IT
course at Georgia Southern must cover significant additional
material not specified in the IT fundamentals knowledge area.
3.2 Course Outline and its Mapping to the IT Fundamentals Knowledge Area
The Introduction to IT course at Georgia Southern consists of 45
lecture hours. Teaching productivity tools, Learning Outcome 2
listed in Section 3.1, accounts for roughly 9 hours of instruction.
Exams conducted during the semester account for 3 hours of
instruction. This leaves 33 lecture hours to cover the remaining
topics for IT 1130 relating to Learning Outcome 1 listed in
Section 3.1. Table 1 provides a breakdown of the topics covered
in the remaining 33 hours of instruction, the number of lecture
hours spent on that topic, as well as the learning outcome in the
IT fundamentals knowledge area of the model curriculum to
which the topic corresponds.
TABLE 1: IT 1130 Topics and ITF Learning Outcomes

IT 1130 Topic                                         Objective                                          # Hours
1 Define IT                                           ITF4.1                                             1
2 Data and Information                                ITF1.8, ITF1.9                                     1
3 Components of IT Systems (Hardware, Software,
  Networks, User)                                     ITF1.1                                             8.5
4 Core Technologies (Data Management, Networking,
  Web Systems, SAD, Programming, HCI,
  Specializations in BSIT)                            ITF1.10, ITF2.1, ITF2.2, ITF2.3, ITF2.4, ITF2.5    8.5
5 Related Disciplines                                 ITF4.2, ITF4.3, ITF4.4                             2
6 Application Domains (Second Disciplines in BSIT)    ITF5.1, ITF5.2, ITF5.3, ITF5.4, ITF3.2             7
7 History of IT                                       ITF3.1, ITF3.4                                     1
8 Viruses, Crime, Law, Ethics, Privacy & Security     ITF1.11, ITF3.2                                    3
9 IT as a Profession                                  ITF1.5, ITF1.6, ITF1.7, ITF1.10                    1
TOTAL                                                                                                    33
Table 2 compares the number of hours of instruction in the IT
1130 course for each of the knowledge units in the IT
fundamentals area to the minimum recommended number of
lecture hours listed in the model curriculum. The next section,
Section 3.3, discusses the discrepancies between the
recommended number of hours and the actual number of hours
taught.
TABLE 2: Comparison of IT 1130 to ITF Knowledge Area

ITF Knowledge Units   ITF Recommended   IT 1130       Knowledge Units Not Covered
ITF1                  17                14            1.2, 1.3, 1.4
ITF2                  6                 7.5
ITF3                  3                 2             3.3
ITF4                  3                 3
ITF5                  2                 6.5
ITF6                  2                 Not covered   6.1 - 6.7
TOTAL                 33                33
3.3 Some Observations
Table 2 illustrates several noteworthy differences between the IT
1130 course at Georgia Southern University and the knowledge
units in the ITF knowledge area.
1. A discrepancy exists between the minimum number of hours
recommended for ITF1 (pervasive themes in IT) and the
number of hours taught in IT 1130. The 3 hour discrepancy
can be attributed to the lack of coverage in IT 1130 of
outcomes ITF1.2 - ITF1.4. Thus, IT 1130 provides no explicit
coverage of the reasons for the emergence of complexity in
IT, the need for IT professionals to handle complexity, and
the tools and techniques available to an IT professional for
managing it. Instead, the IT program at Georgia Southern covers
complexity-related issues in a number of courses throughout
the curriculum. For example, some complexity-related issues
are discussed in a two-course sequence of Java programming
courses. Standards are discussed in a number of courses
throughout the curriculum, including a data communication
course and a web design course in which students learn how
to implement n-tier architectures. Finally, complexity-related
issues are also covered in a capstone course on IT issues and
management. Since the need to manage complexity is
identified in the IT model curriculum as a pervasive theme,
this is a reasonable alternative to cover this issue.
2. The IT 1130 course devotes more lectures hours than the
minimum recommendation to ITF2 (organizational issues)
and ITF5 (application domains). As the recommendation is a
minimum, this is not problematic; however, it is worth noting
that the explanation for these discrepancies relates directly to
the structure of the IT major at Georgia Southern University.
IT majors are expected to take a number of core courses,
including courses in programming; web design; software
acquisition, implementation and integration; networking;
systems analysis and design; data management; and project
management. In addition, IT majors specialize in either
knowledge management and IT integration, systems development
and support, telecommunications and network
administration, or web and multimedia foundations. It is
useful to students starting out on their academic program in
IT to receive information on the structure of the core of the
program, the courses that it consists of and how they relate to
each other, and on the different specializations available to
them. Since, for most IT majors, IT 1130 is the first course in
the program, it is the logical place to meet this aim. Clearly, a
full discussion of the structure of the program covers more
than just data management (ITF1.10), a broad overview of IT
applications (ITF2.1) and their development (ITF2.2),
systems analysis (ITF2.3), project management (ITF2.4), and
IT integration (ITF2.5). This explains why IT 1130 devotes
1.5 hours more than the recommended minimum of 6.
Another important element of the IT program at Georgia
Southern is the inclusion of second disciplines. One of the
explicit program outcomes of the BS in IT program at
Georgia Southern is that, on graduation, graduates will be
able "to demonstrate sufficient understanding of an
application domain to be able to develop IT applications
suitable for that application domain." This outcome was
included at the recommendation of industry representatives
who were consulted when the IT program was designed ([5]).
For students to develop this ability, they must be exposed to
an IT application domain, and the BS IT program at Georgia
Southern therefore contains so-called second disciplines.
Second disciplines are coherent sets of 7 3-credit courses in
potential IT application domains, such as electronic
broadcasting, law enforcement, music technology, or supply
chain management. Students typically start taking courses in
their second discipline early in their program of study (the
standard program of study suggests that students take their
first second discipline course in the first semester of their
sophomore year). It is therefore important that students be
exposed to the different second disciplines available to them
early, and IT 1130 is the logical place to do so. One fortunate
side effect of the need to introduce a second discipline is that
it gives the program an excellent opportunity to make
students aware of the broad range of areas in which IT can be
applied and, hence, cover ITF5 (application domains);
however, since the number of second disciplines is large
(currently, 26), adequate coverage requires 4.5 hours more
than the minimum recommended coverage for ITF5
(application domains).
3. One lecture hour is missing in ITF3 (history of IT) due to
lack of coverage in the IT 1130 course of significant changes
in HCI (ITF3.3). Some material relevant to this topic is
introduced in other courses that students tend to take early in
their program of study, such as the Introductory Java course
and the introductory web design course. For example, the
introductory web design course includes among its course
objectives that students develop the ability to design Web
pages in accordance with good design principles using
appropriate styles and formats and the ability to design Web
pages that are ADA compliant. Material relevant to both
objectives allows us to expand on HCI design principles and
place these in a historical context. Moreover, students are
advised to take the introductory web design course in the
semester following the one in which they take IT 1130, and
they are therefore likely to be exposed to material relevant to
ITF3.3 early in their program of study.
4. The final discrepancy lies in the coverage of the learning
outcomes corresponding to ITF6 (application of math and
statistics to IT) in the IT 1130 course; however, the material
related to this knowledge unit is covered in two courses that
students are again advised to take early in their program of
study. One course is a course in discrete mathematics,
designed specifically for IT majors. It includes among its
course objectives the ability to explain the importance of
discrete mathematics in computer science and information
technology and provides in-depth coverage of functions, sets,
basic propositional logic, and algorithm design. Finally, all
students enrolled in the IT major take a statistics course,
which covers probability.
3.4 Support Material
Since the ITF knowledge area is relatively new, no single
textbook covers all relevant material. We therefore use a variety of
sources to support the course.
First, we use Excel 2003 ([8]) and Access 2003 ([7]) to support
the teaching of spreadsheets and SQL (IT 1130 course outcomes
2i-2iii identified in section 3.1).
Second, to support the teaching of Topics 3 (components of IT
systems), 4 (core technologies) and 7 (history of IT), we use
Discovering Computers 2005 ([9]). While the textbook provides a
reasonable coverage of some of the subtopics discussed, it does
not sufficiently stress the importance of the users and the
importance of HCI in systems development, and we, therefore,
emphasize this issue throughout the course. We discussed the way
in which we cover these topics in Points 2 and 3 in section 3.3.
Third, for topics 6 (Application Domains), 8 (Viruses, Crime,
Law, Ethics, Privacy and Security) and 9 (IT as a profession), we
use Computers in Our World ([3]); however, we do not rely solely
on the textbook for our coverage of topic 6. Again, we discussed
this in Point 2 in section 3.3.
Finally, to support Topics 1 (define IT), 2 (data and information),
and 5 (IT and its related disciplines), students are given material
written specifically for the course. Also, we invite representatives
from computer science and information systems to lecture on their
specific disciplines and follow this up with a lecture on computer
engineering and a discussion on the relationship between all four
disciplines.
Table 3 lists the core learning outcomes for each of the ITF
knowledge units and maps them to the material in the IT 1130
course used to achieve that outcome. The material comes either
from Discovering Computers 2005 ([9]) (DC), Computers in Our
World ([3]) (CIOW), or material written specifically for the
course (supplemental material) and/or lectures/discussions led by
faculty members from other related departments.
TABLE 3: Course Materials Used in IT 1130 to Achieve ITF Learning Outcomes

ITF Knowledge Units   Learning Outcomes   Material
ITF1                  1                   DC Chapters 3-9
                      2-4                 Not covered
                      5-7                 DC Chapters 12 & 15, CIOW Chapters 8 & 9, Supplemental Materials
                      8, 9                DC Chapter 10, Supplemental Materials
                      10                  DC Chapters 2, 9, 10, 12, 13, Supplemental Materials
                      11                  CIOW Chapters 7-9
ITF2                  1-5                 DC Chapters 2, 9, 10, 12, 13, Supplemental Materials
ITF3                  1                   DC Timeline between Chapters 1 and 2, Chapter 2
                      2                   CIOW Chapters 1-9
                      3                   Not covered
                      4                   DC Timeline between Chapters 1 and 2, Chapter 2
ITF4                  1-4                 Supplemental Materials, Lecture and Class Discussion led by CS, IS and IT representatives
ITF5                  1-4                 CIOW Chapters 1-9
ITF6                  1-7                 Not covered

*Discovering Computers = DC, Computers in Our World = CIOW
CONCLUSIONS
The IT Fundamentals knowledge area in the IT model curriculum
is of central importance to the design of an introductory IT course;
however, since institutions will have to include additional
materials in their introductory IT courses, depending on the nature
of their program, the minimum requirement of 33 lecture hours to
cover this material is likely to lead to problems. This paper
presents the experience with an introductory IT course at Georgia
Southern University, IT1130. In general, we believe that, despite
the need to include additional material in IT1130, we are able to
cover most of the knowledge units in the IT fundamentals
knowledge area. We are confident that those knowledge units not
covered in IT1130 are covered in other courses that students are
advised to take early in their programs of study. Finally, despite
the fact that the IT fundamentals knowledge area is new and that
no textbooks cover all the knowledge units within the area, we
have been able to identify a set of textbooks that, jointly, cover
most of the material; however, we provide a relatively small
amount of additional material, and the textbooks we identified do
not always cover the material at the appropriate level. Therefore,
support materials specifically for the IT fundamentals knowledge
area need to be developed. Whether this is best provided in the
form of a textbook, or, more dynamically, as a set of online
learning objects ([1], [2]) is a question open to debate.
REFERENCES
[1] Abernethy, K., Treu, K., Piegari, G., Reichgelt, H. "An implementation model for a learning object repository", October 2005, E-learn 2005 World Conference on E-learning in Corporate, Government, Healthcare and Higher Education, Vancouver, Canada.
[2] Abernethy, K., Treu, K., Piegari, G., Reichgelt, H. "A learning object repository in support of introduction to information technology", August 2005, 6th Annual Conference for the Higher Education Academy Subject Network for Information and Computer Science, York, England.
[3] Jedlicka, L. Computers in Our World. Thompson Course Technologies, 2003.
[4] Lawson, E., Reichgelt, H., Lunt, B., Ekstrom, J., Kamali, R., Miller, J. and Gorka, S. The Information Technology Model Curriculum. Paper submitted to ISECON 2005.
[5] Reichgelt, H., Price, B. and Zhang, A. "Designing an Information Technology curriculum: The Georgia Southern experience", Journal of Information Technology Education, 2002, Vol. 1, No. 4, 213-221.
[6] Reichgelt, H., Price, B. and Zhang, A. The Inclusion of Application Areas in IT Curricula, SIGITE-3, Rochester, NY, ACM-SIGITE (formerly SITE), September 2002.
[7] Shelley, G., Cashman, T., Pratt, P. and Last, M. Microsoft Office Access 2003. Thompson Course Technologies, 2004.
[8] Shelley, G., Cashman, T., Quasney, J. Microsoft Office Excel 2003. Thompson Course Technologies, 2004.
[9] Shelley, G., Vermaat, M. and Cashman, T. Discovering Computers 2005: A Gateway to Information. Thompson Course Technologies, 2005.
| IT Fundamentals Knowledge Area;IT Model Curriculum
105 | Implicit User Modeling for Personalized Search | Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word "java" to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search . We present a decision theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine. | INTRODUCTION
Although many information retrieval systems (e.g., web search
engines and digital library systems) have been successfully deployed,
the current retrieval systems are far from optimal. A major deficiency
of existing retrieval systems is that they generally lack user
modeling and are not adaptive to individual users [17]. This inherent
non-optimality is seen clearly in the following two cases:
(1) Different users may use exactly the same query (e.g., "Java") to
search for different information (e.g., the Java island in Indonesia or
the Java programming language), but existing IR systems return the
same results for these users. Without considering the actual user, it
is impossible to know which sense "Java" refers to in a query. (2)
A user's information needs may change over time. The same user
may use "Java" sometimes to mean the Java island in Indonesia
and some other times to mean the programming language. Without
recognizing the search context, it would be again impossible to
recognize the correct sense.
In order to optimize retrieval accuracy, we clearly need to model
the user appropriately and personalize search according to each individual
user. The major goal of user modeling for information
retrieval is to accurately model a user's information need, which is,
unfortunately, a very difficult task. Indeed, it is even hard for a user
to precisely describe what his/her information need is.
What information is available for a system to infer a user's information
need? Obviously, the user's query provides the most direct
evidence. Indeed, most existing retrieval systems rely solely on
the query to model a user's information need. However, since a
query is often extremely short, the user model constructed based
on a keyword query is inevitably impoverished. An effective way
to improve user modeling in information retrieval is to ask the user
to explicitly specify which documents are relevant (i.e., useful for
satisfying his/her information need), and then to improve user modeling
based on such examples of relevant documents. This is called
relevance feedback, which has been proved to be quite effective for
improving retrieval accuracy [19, 20]. Unfortunately, in real world
applications, users are usually reluctant to make the extra effort to
provide relevant examples for feedback [11].
It is thus very interesting to study how to infer a user's information
need based on any implicit feedback information, which
naturally exists through user interactions and thus does not require
any extra user effort. Indeed, several previous studies have shown
that implicit user modeling can improve retrieval accuracy. In [3],
a web browser (Curious Browser) is developed to record a user's
explicit relevance ratings of web pages (relevance feedback) and
browsing behavior when viewing a page, such as dwelling time,
mouse click, mouse movement and scrolling (implicit feedback).
It is shown that the dwelling time on a page, amount of scrolling
on a page and the combination of time and scrolling have a strong
correlation with explicit relevance ratings, which suggests that implicit
feedback may be helpful for inferring user information need.
In [10], user clickthrough data is collected as training data to learn
a retrieval function, which is used to produce a customized ranking
of search results that suits a group of users' preferences. In [25],
the clickthrough data collected over a long time period is exploited
through query expansion to improve retrieval accuracy.
While a user may have general long term interests and preferences
for information, often he/she is searching for documents to
satisfy an "ad hoc" information need, which only lasts for a short
period of time; once the information need is satisfied, the user
would generally no longer be interested in such information. For
example, a user may be looking for information about used cars
in order to buy one, but once the user has bought a car, he/she is
generally no longer interested in such information. In such cases,
implicit feedback information collected over a long period of time
is unlikely to be very useful, but the immediate search context and
feedback information, such as which of the search results for the
current information need are viewed, can be expected to be much
more useful. Consider the query "Java" again. Any of the following
immediate feedback information about the user could potentially
help determine the intended meaning of "Java" in the query:
(1) The previous query submitted by the user is "hashtable" (as opposed
to, e.g., "travel Indonesia"). (2) In the search results, the user
viewed a page where words such as "programming", "software",
and "applet" occur many times.
To the best of our knowledge, how to exploit such immediate
and short-term search context to improve search has so far not been
well addressed in the previous work. In this paper, we study how to
construct and update a user model based on the immediate search
context and implicit feedback information and use the model to improve
the accuracy of ad hoc retrieval. In order to maximally benefit
the user of a retrieval system through implicit user modeling,
we propose to perform "eager implicit feedback". That is, as soon
as we observe any new piece of evidence from the user, we would
update the system's belief about the user's information need and
respond with improved retrieval results based on the updated user
model. We present a decision-theoretic framework for optimizing
interactive information retrieval based on eager user model updating
, in which the system responds to every action of the user by
choosing a system action to optimize a utility function. In a traditional
retrieval paradigm, the retrieval problem is to match a query
with documents and rank documents according to their relevance
values. As a result, the retrieval process is a simple independent
cycle of "query" and "result display". In the proposed new retrieval
paradigm, the user's search context plays an important role and the
inferred implicit user model is exploited immediately to benefit the
user. The new retrieval paradigm is thus fundamentally different
from the traditional paradigm, and is inherently more general.
We further propose specific techniques to capture and exploit two
types of implicit feedback information: (1) identifying related immediately
preceding query and using the query and the corresponding
search results to select appropriate terms to expand the current
query, and (2) exploiting the viewed document summaries to immediately
rerank any documents that have not yet been seen by the
user. Using these techniques, we develop a client-side web search
agent UCAIR (User-Centered Adaptive Information Retrieval) on
top of a popular search engine (Google). Experiments on web
search show that our search agent can improve search accuracy over
Google. Since the implicit information we exploit already naturally
exists through user interactions, the user does not need to make any
extra effort. Thus the developed search agent can improve existing
web search performance without additional effort from the user.
The remaining sections are organized as follows. In Section 2,
we discuss the related work. In Section 3, we present a decision-theoretic
interactive retrieval framework for implicit user modeling.
In Section 4, we present the design and implementation of an intelligent
client-side web search agent (UCAIR) that performs eager
implicit feedback. In Section 5, we report our experiment results
using the search agent. Section 6 concludes our work.
RELATED WORK
Implicit user modeling for personalized search has been studied
in previous work, but our work differs from all previous work
in several aspects: (1) We emphasize the exploitation of immediate
search context such as the related immediately preceding query
and the viewed documents in the same session, while most previous
work relies on long-term collection of implicit feedback information
[25]. (2) We perform eager feedback and bring the benefit of
implicit user modeling as soon as any new implicit feedback information
is available, while the previous work mostly exploits long-term
implicit feedback [10]. (3) We propose a retrieval framework
to integrate implicit user modeling with the interactive retrieval process
, while the previous work either studies implicit user modeling
separately from retrieval [3] or only studies specific retrieval models
for exploiting implicit feedback to better match a query with
documents [23, 27, 22]. (4) We develop and evaluate a personalized
Web search agent with online user studies, while most existing
work evaluates algorithms offline without real user interactions.
Currently some search engines provide rudimentary personalization
, such as Google Personalized web search [6], which allows
users to explicitly describe their interests by selecting from predefined
topics, so that those results that match their interests are
brought to the top, and My Yahoo! search [16], which gives users
the option to save web sites they like and block those they dislike
. In contrast, UCAIR personalizes web search through implicit
user modeling without any additional user efforts. Furthermore, the
personalization of UCAIR is provided on the client side. There are
two remarkable advantages on this. First, the user does not need to
worry about the privacy infringement, which is a big concern for
personalized search [26]. Second, both the computation of personalization
and the storage of the user profile are done at the client
side so that the server load is reduced dramatically [9].
There have been many works studying user query logs [1] or
query dynamics [13]. UCAIR makes direct use of a user's query
history to benefit the same user immediately in the same search
session. UCAIR first judges whether two neighboring queries belong
to the same information session and if so, it selects terms from
the previous query to perform query expansion.
Our query expansion approach is similar to automatic query expansion
[28, 15, 5], but instead of using pseudo feedback to expand
the query, we use user's implicit feedback information to expand
the current query. These two techniques may be combined.
OPTIMIZATION IN INTERACTIVE IR
In interactive IR, a user interacts with the retrieval system through
an "action dialogue", in which the system responds to each user action
with some system action. For example, the user's action may
be submitting a query and the system's response may be returning
a list of 10 document summaries. In general, the space of user actions
and system responses and their granularities would depend on
the interface of a particular retrieval system.
In principle, every action of the user can potentially provide new
evidence to help the system better infer the user's information need.
Thus in order to respond optimally, the system should use all the
evidence collected so far about the user when choosing a response.
When viewed in this way, most existing search engines are clearly
non-optimal. For example, if a user has viewed some documents on
the first page of search results, when the user clicks on the "Next"
link to fetch more results, an existing retrieval system would still
return the next page of results retrieved based on the original query
without considering the new evidence that a particular result has
been viewed by the user.
We propose to optimize retrieval performance by adapting system
responses based on every action that a user has taken, and cast
the optimization problem as a decision task. Specifically, at any
time, the system would attempt to do two tasks: (1) User model
updating: Monitor any useful evidence from the user regarding
his/her information need and update the user model as soon as such
evidence is available; (2) Improving search results: Rerank immediately
all the documents that the user has not yet seen, as soon
as the user model is updated. We emphasize eager updating and
reranking, which makes our work quite different from any existing
work. Below we present a formal decision theoretic framework for
optimizing retrieval performance through implicit user modeling in
interactive information retrieval.
3.1 A decision-theoretic framework
Let A be the set of all user actions and R(a) be the set of all possible system responses to a user action a ∈ A. At any time, let A_t = (a_1, ..., a_t) be the observed sequence of user actions so far (up to time point t) and R_{t-1} = (r_1, ..., r_{t-1}) be the responses that the system has made in response to those user actions. The system's goal is to choose an optimal response r_t ∈ R(a_t) for the current user action a_t.
Let M be the space of all possible user models. We further define a real-valued loss function L(a, r, m), where a ∈ A is a user action, r ∈ R(a) is a system response, and m ∈ M is a user model. L(a, r, m) encodes our decision preferences and assesses the optimality of responding with r when the current user model is m and the current user action is a. According to Bayesian decision theory, the optimal decision at time t is to choose a response that minimizes the Bayes risk, i.e.,

r_t = argmin_{r ∈ R(a_t)} ∫_M L(a_t, r, m_t) P(m_t | U, D, A_t, R_{t-1}) dm_t    (1)

where P(m_t | U, D, A_t, R_{t-1}) is the posterior probability of the user model m_t given all the observations about the user U that we have made up to time t.
To simplify the computation of Equation 1, let us assume that the posterior probability mass P(m_t | U, D, A_t, R_{t-1}) is mostly concentrated on the mode m_t^* = argmax_{m_t} P(m_t | U, D, A_t, R_{t-1}). We can then approximate the integral with the value of the loss function at m_t^*. That is,

r_t ≈ argmin_{r ∈ R(a_t)} L(a_t, r, m_t^*)    (2)

where m_t^* = argmax_{m_t} P(m_t | U, D, A_t, R_{t-1}).
Leaving aside how to define and estimate these probabilistic models and the loss function, we can see that such a decision-theoretic formulation suggests that, in order to choose the optimal response to a_t, the system should perform two tasks: (1) compute the current user model and obtain m_t^* based on all the useful information; (2) choose a response r_t to minimize the loss function value L(a_t, r_t, m_t^*). When a_t does not affect our belief about m_t^*, the first step can be omitted and we may reuse m_{t-1}^* for m_t^*.
Note that our framework is quite general since we can potentially
model any kind of user actions and system responses. In most
cases, as we may expect, the system's response is some ranking of
documents, i.e., for most actions a, R(a) consists of all the possible
rankings of the unseen documents, and the decision problem
boils down to choosing the best ranking of unseen documents based
on the most current user model. When a is the action of submitting
a keyword query, such a response is exactly what a current retrieval
system would do. However, we can easily imagine that a more intelligent
web search engine would respond to a user's clicking of
the "Next" link (to fetch more unseen results) with a more opti-mized
ranking of documents based on any viewed documents in
the current page of results. In fact, according to our eager updating
strategy, we may even allow a system to respond to a user's clicking
of browser's "Back" button after viewing a document in the same
way, so that the user can maximally benefit from implicit feedback.
These are precisely what our UCAIR system does.
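As an illustration of how the decision rule above could be operationalized, the following Python sketch simply picks the response minimizing the loss at the estimated user model; the helper names (estimate_map_user_model, loss) are our own placeholders and are not specified by the framework itself.

# A minimal sketch of the approximate Bayes-risk decision rule of Eq. (2).
# estimate_map_user_model and loss are illustrative placeholders, not part of UCAIR itself.
def choose_response(action, candidate_responses, history, loss, estimate_map_user_model):
    m_star = estimate_map_user_model(history, action)   # step 1: mode of the posterior over user models
    # step 2: response minimizing the loss at the estimated model m_t*
    return min(candidate_responses, key=lambda r: loss(action, r, m_star))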
3.2 User models
A user model m M represents what we know about the user
U , so in principle, it can contain any information about the user
that we wish to model. We now discuss two important components
in a user model.
The first component is a component model of the user's information
need. Presumably, the most important factor affecting the optimality
of the system's response is how well the response addresses
the user's information need. Indeed, at any time, we may assume
that the system has some "belief" about what the user is interested
in, which we model through a term vector x = (x_1, ..., x_{|V|}), where V = {w_1, ..., w_{|V|}} is the set of all terms (i.e., the vocabulary) and x_i is the weight of term w_i. Such a term vector is commonly used in information retrieval to represent both queries and documents. For example, the vector-space model assumes that both
the query and the documents are represented as term vectors and
the score of a document with respect to a query is computed based
on the similarity between the query vector and the document vector
[21]. In a language modeling approach, we may also regard
the query unigram language model [12, 29] or the relevance model
[14] as a term vector representation of the user's information need.
Intuitively, x would assign high weights to terms that characterize
the topics which the user is interested in.
The second component we may include in our user model is the
documents that the user has already viewed. Obviously, even if a
document is relevant, if the user has already seen the document, it
would not be useful to present the same document again. We thus
introduce another variable S ⊆ D (D is the whole set of documents
in the collection) to denote the subset of documents in the search
results that the user has already seen/viewed.
In general, at time t, we may represent a user model as m_t = (S, x, A_t, R_{t-1}), where S is the seen documents, x is the system's "understanding" of the user's information need, and (A_t, R_{t-1})
represents the user's interaction history. Note that an even more
general user model may also include other factors such as the user's
reading level and occupation.
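Purely as an illustration, the user model m_t = (S, x, A_t, R_{t-1}) could be held in a structure such as the following Python sketch; the field names are our own and are not taken from the UCAIR implementation.

from dataclasses import dataclass, field

@dataclass
class UserModel:
    # Illustrative container for m_t = (S, x, A_t, R_{t-1})
    seen_docs: set = field(default_factory=set)       # S: documents already viewed
    need_vector: dict = field(default_factory=dict)   # x: term -> weight
    actions: list = field(default_factory=list)       # A_t: observed user actions
    responses: list = field(default_factory=list)     # R_{t-1}: past system responses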
If we assume that the uncertainty of a user model m_t is solely due to the uncertainty of x, the computation of our current estimate of user model m_t will mainly involve computing our best estimate of x. That is, the system would choose a response according to

r_t = argmin_{r ∈ R(a_t)} L(a_t, r, S, x^*, A_t, R_{t-1})    (3)

where x^* = argmax_x P(x | U, D, A_t, R_{t-1}). This is the decision mechanism implemented in the UCAIR system to be described later. In this system, we avoided specifying the probabilistic model P(x | U, D, A_t, R_{t-1}) by computing x^* directly with some existing feedback method.
3.3 Loss functions
The exact definition of loss function L depends on the responses,
thus it is inevitably application-specific. We now briefly discuss
some possibilities when the response is to rank all the unseen documents
and present the top k of them. Let r = (d_1, ..., d_k) be the top k documents, S be the set of seen documents by the user, and x^* be the system's best guess of the user's information need. We may simply define the loss associated with r as the negative sum of the probability that each of the d_i is relevant, i.e.,

L(a, r, m) = - Σ_{i=1}^{k} P(relevant | d_i, m).

Clearly, in order to minimize this loss function, the optimal response r would contain the k documents with the highest probability of relevance, which is intuitively reasonable.
One deficiency of this "top-k loss function" is that it is not sensitive
to the internal order of the selected top k documents, so switching
the ranking order of a non-relevant document and a relevant one
would not affect the loss, which is unreasonable. To model ranking, we can introduce a factor of the user model, namely the probability of each of the k documents being viewed by the user, P(view | d_i), and define the following "ranking loss function":

L(a, r, m) = - Σ_{i=1}^{k} P(view | d_i) P(relevant | d_i, m)

Since in general, if d_i is ranked above d_j (i.e., i < j), P(view | d_i) > P(view | d_j), this loss function would favor a decision to rank relevant documents above non-relevant ones, as otherwise, we could always switch d_i with d_j to reduce the loss value. Thus the system should simply perform a regular retrieval and rank documents according to the probability of relevance [18].
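The two loss functions above can be written down directly; the sketch below assumes the relevance and view probabilities are supplied by some external estimator, which the framework deliberately leaves open.

# Sketch of the "top-k" and "ranking" loss functions (negated expected utility).
def top_k_loss(p_relevant):
    # p_relevant[i] is P(relevant | d_i, m) for the k returned documents
    return -sum(p_relevant)

def ranking_loss(p_view, p_relevant):
    # p_view[i] is P(view | d_i); it normally decreases with the rank position i
    return -sum(v * r for v, r in zip(p_view, p_relevant))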
Depending on the user's retrieval preferences, there can be many
other possibilities. For example, if the user does not want to see
redundant documents, the loss function should include some redundancy
measure on r based on the already seen documents S.
Of course, when the response is not to choose a ranked list of
documents, we would need a different loss function. We discuss
one such example that is relevant to the search agent that we implement. When a user enters a query q_t (current action), our search
agent relies on some existing search engine to actually carry out
search. In such a case, even though the search agent does not have
control of the retrieval algorithm, it can still attempt to optimize the
search results through refining the query sent to the search engine
and/or reranking the results obtained from the search engine. The
loss functions for reranking are already discussed above; we now
take a look at the loss functions for query refinement.
Let f be the retrieval function of the search engine that our agent
uses so that f (q) would give us the search results using query q.
Given that the current action of the user is entering a query q_t (i.e., a_t = q_t), our response would be f(q) for some q. Since we have no choice of f, our decision is to choose a good q. Formally,

r_t = argmin_{r_t} L(a, r_t, m) = argmin_{f(q)} L(a, f(q), m) = f(argmin_q L(q_t, f(q), m))

which shows that our goal is to find q^* = argmin_q L(q_t, f(q), m), i.e., an optimal query that would give us the best f(q). A different choice of loss function L(q_t, f(q), m) would lead to a different query refinement strategy. In UCAIR, we heuristically compute q^* by expanding q_t with terms extracted from r_{t-1} whenever q_{t-1} and q_t have high similarity. Note that r_{t-1} and q_{t-1} are contained in m as part of the user's interaction history.
3.4 Implicit user modeling
Implicit user modeling is captured in our framework through
the computation of x^* = argmax_x P(x | U, D, A_t, R_{t-1}), i.e., the
system's current belief of what the user's information need is. Here
again there may be many possibilities, leading to different algorithms
for implicit user modeling. We now discuss a few of them.
First, when two consecutive queries are related, the previous
query can be exploited to enrich the current query and provide more
search context to help disambiguation. For this purpose, instead of
performing query expansion as we did in the previous section, we
could also compute an updated x
based on the previous query and
retrieval results. The computed new user model can then be used to
rank the documents with a standard information retrieval model.
Second, we can also infer a user's interest based on the summaries
of the viewed documents. When a user is presented with a
list of summaries of top ranked documents, if the user chooses to
skip the first n documents and to view the (n + 1)-th document, we
may infer that the user is not interested in the displayed summaries
for the first n documents, but is attracted by the displayed summary
of the (n + 1)-th document. We can thus use these summaries as
negative and positive examples to learn a more accurate user model
x^*. Here many standard relevance feedback techniques can be exploited
[19, 20]. Note that we should use the displayed summaries,
as opposed to the actual contents of those documents, since it is
possible that the displayed summary of the viewed document is
relevant, but the document content is actually not. Similarly, a displayed
summary may mislead a user to skip a relevant document.
Inferring user models based on such displayed information, rather
than the actual content of a document is an important difference
between UCAIR and some other similar systems.
In UCAIR, both of these strategies for inferring an implicit user
model are implemented.
UCAIR: A PERSONALIZED SEARCH AGENT
In this section, we present a client-side web search agent called
UCAIR, in which we implement some of the methods discussed
in the previous section for performing personalized search through
implicit user modeling. UCAIR is a web browser plug-in (available at http://sifaka.cs.uiuc.edu/ir/ucair/download.html) that acts as a proxy for web search engines. Currently, it is only implemented
for Internet Explorer and Google, but it is a matter of
engineering to make it run on other web browsers and interact with
other search engines.
The issue of privacy is a primary obstacle for deploying any real
world applications involving serious user modeling, such as personalized
search. For this reason, UCAIR is strictly running as
a client-side search agent, as opposed to a server-side application.
This way, the captured user information always resides on the computer
that the user is using, thus the user does not need to release
any information to the outside. Client-side personalization also allows
the system to easily observe a lot of user information that may
not be easily available to a server. Furthermore, performing personalized
search on the client-side is more scalable than on the serverside
, since the overhead of computation and storage is distributed
among clients.
As shown in Figure 1, the UCAIR toolbar has 3 major components
: (1) The (implicit) user modeling module captures a user's
search context and history information, including the submitted
queries and any clicked search results and infers search session
boundaries. (2) The query modification module selectively improves
the query formulation according to the current user model.
(3) The result re-ranking module immediately re-ranks any unseen
search results whenever the user model is updated.
In UCAIR, we consider four basic user actions: (1) submitting a
keyword query; (2) viewing a document; (3) clicking the "Back"
button; (4) clicking the "Next" link on a result page. For each
of these four actions, the system responds with, respectively, (1) generating a ranked list of results by sending a possibly expanded query to a search engine; (2) updating the information need model x; (3) reranking the unseen results on the current result page based on the current model x; and (4) reranking the unseen pages and generating the next page of results based on the current model x.
Figure 1: UCAIR architecture (the User Modeling, Query Modification and Result Re-Ranking modules, together with a Search History Log and Result Buffer, sit between the user and the search engine, e.g., Google)
Behind these responses, there are three basic tasks: (1) Decide
whether the previous query is related to the current query and if so
expand the current query with useful terms from the previous query
or the results of the previous query. (2) Update the information
need model x based on a newly clicked document summary. (3)
Rerank a set of unseen documents based on the current model x.
Below we describe our algorithms for each of them.
4.2 Session boundary detection and query expansion
To effectively exploit previous queries and their corresponding
clickthrough information, UCAIR needs to judge whether two adjacent
queries belong to the same search session (i.e., detect session
boundaries). Existing work on session boundary detection is
mostly in the context of web log analysis (e.g., [8]), and uses statistical
information rather than textual features. Since our client-side
agent does not have access to server query logs, we make session
boundary decisions based on textual similarity between two
queries. Because related queries do not necessarily share the same
words (e.g., "java island" and "travel Indonesia"), it is insufficient
to use only query text. Therefore we use the search results of the
two queries to help decide whether they are topically related. For
example, for the above queries "java island" and "travel Indonesia", the words "java", "bali", "island", "indonesia" and "travel"
may occur frequently in both queries' search results, yielding a high
similarity score.
We only use the titles and summaries of the search results to calculate
the similarity since they are available in the retrieved search
result page and fetching the full text of every result page would significantly slow down the process. To compensate for the terseness of titles and summaries, we retrieve more results than a user would normally view for the purpose of detecting session boundaries (typically 50 results).
The similarity between the previous query q and the current query q' is computed as follows. Let {s_1, s_2, ..., s_n} and {s'_1, s'_2, ..., s'_n} be the result sets for the two queries. We use the pivoted normalization TF-IDF weighting formula [24] to compute a term weight vector s_i for each result s_i. We define the average result s_avg to be the centroid of all the result vectors, i.e., (s_1 + s_2 + ... + s_n)/n. The cosine similarity between the two average results is calculated as

(s_avg · s'_avg) / (||s_avg|| ||s'_avg||)

If the similarity value exceeds a predefined threshold, the two queries will be considered to be in the same information session.
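A sketch of this session-boundary test is given below; it assumes the TF-IDF weight vectors of the retrieved titles and summaries are already available (the pivoted-normalization weighting itself is omitted), and the 0.4 threshold is only an illustrative value, not the one tuned in UCAIR.

import math

def centroid(vectors):
    # vectors: list of dicts mapping term -> TF-IDF weight (one dict per result)
    avg = {}
    for v in vectors:
        for term, w in v.items():
            avg[term] = avg.get(term, 0.0) + w / len(vectors)
    return avg

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def same_session(prev_result_vectors, cur_result_vectors, threshold=0.4):
    # Compare the centroids (average results) of the two result sets.
    return cosine(centroid(prev_result_vectors), centroid(cur_result_vectors)) >= threshold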
If the previous query and the current query are found to belong
to the same search session, UCAIR would attempt to expand the
current query with terms from the previous query and its search
results. Specifically, for each term in the previous query or the
corresponding search results, if its frequency in the results of the
current query is greater than a preset threshold (e.g. 5 results out
of 50), the term would be added to the current query to form an
expanded query. In this case, UCAIR would send this expanded
query rather than the original one to the search engine and return
the results corresponding to the expanded query. Currently, UCAIR
only uses the immediate preceding query for query expansion; in
principle, we could exploit all related past queries.
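The term-selection rule just described might be sketched as follows (only terms of the previous query are considered here; the full method also draws terms from its search results). The min_freq value of 5 follows the "5 results out of 50" example in the text.

def expand_query(current_query, prev_query_terms, current_results, min_freq=5):
    # current_results: list of result texts (titles + summaries) for the current query
    expansion = []
    for term in prev_query_terms:
        hits = sum(1 for text in current_results if term.lower() in text.lower())
        if hits >= min_freq and term.lower() not in current_query.lower():
            expansion.append(term)
    return current_query + " " + " ".join(expansion) if expansion else current_query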
4.3 Information need model updating
Suppose at time t, we have observed that the user has viewed k documents whose summaries are s_1, ..., s_k. We update our user model by computing a new information need vector with a standard feedback method in information retrieval (i.e., Rocchio [19]). According to the vector space retrieval model, each clicked summary s_i can be represented by a term weight vector s_i with each term weighted by a TF-IDF weighting formula [21]. Rocchio computes the centroid vector of all the summaries and interpolates it with the original query vector to obtain an updated term vector. That is,

x = α q + (1 - α) (1/k) Σ_{i=1}^{k} s_i

where q is the query vector, k is the number of summaries the user clicks immediately following the current query, and α is a parameter that controls the influence of the clicked summaries on the inferred information need model. In our experiments, α is set to 0.5. Note that we update the information need model whenever the user views a document.
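A sketch of this Rocchio-style update on sparse term-weight dictionaries is shown below; alpha = 0.5 matches the setting reported for the experiments.

def rocchio_update(query_vec, clicked_summary_vecs, alpha=0.5):
    # x = alpha * q + (1 - alpha) * (1/k) * sum of the k clicked summary vectors
    if not clicked_summary_vecs:
        return dict(query_vec)
    k = len(clicked_summary_vecs)
    x = {t: alpha * w for t, w in query_vec.items()}
    for s in clicked_summary_vecs:
        for t, w in s.items():
            x[t] = x.get(t, 0.0) + (1 - alpha) * w / k
    return x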
4.4 Result reranking
In general, we want to rerank all the unseen results as soon as the
user model is updated. Currently, UCAIR implements reranking in
two cases, corresponding to the user clicking the "Back" button
and "Next" link in the Internet Explorer. In both cases, the current
(updated) user model would be used to rerank the unseen results so
that the user would see improved search results immediately.
To rerank any unseen document summaries, UCAIR uses the
standard vector space retrieval model and scores each summary
based on the similarity of the result and the current user information
need vector x [21]. Since implicit feedback is not completely reliable
, we bring up only a small number (e.g. 5) of highest reranked
results to be followed by any originally high ranked results.
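The reranking step could then look like the sketch below, where score is any similarity function between a summary vector and the information need vector x (for instance the cosine function sketched earlier); promoting only a handful of reranked results mirrors the conservative behaviour described in the text.

def rerank_unseen(unseen_results, need_vector, score, promote=5):
    # unseen_results: list of (result_id, summary_vector) pairs not yet viewed by the user
    reranked = sorted(unseen_results, key=lambda r: score(r[1], need_vector), reverse=True)
    promoted = reranked[:promote]
    promoted_ids = {r[0] for r in promoted}
    rest = [r for r in unseen_results if r[0] not in promoted_ids]  # keep original order afterwards
    return promoted + rest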
Table 1: Sample results of query expansion. Column (a): Google result (user query = "java map"). Column (b): UCAIR result (user query = "java map"; previous query = "travel Indonesia"; expanded user query = "java map Indonesia"). Column (c): UCAIR result (user query = "java map"; previous query = "hashtable"; expanded user query = "java map class").
1. (a) Java map projections of the world ... (www.btinternet.com/ se16/js/mapproj.htm) | (b) Lonely Planet - Indonesia Map (www.lonelyplanet.com/mapshells/...) | (c) Map (Java 2 Platform SE v1.4.2) (java.sun.com/j2se/1.4.2/docs/...)
2. (a) Java map projections of the world ... (www.btinternet.com/ se16/js/oldmapproj.htm) | (b) INDONESIA TOURISM : CENTRAL JAVA - MAP (www.indonesia-tourism.com/...) | (c) Java 2 Platform SE v1.3.1: Interface Map (java.sun.com/j2se/1.3/docs/api/java/...)
3. (a) Java Map (java.sun.com/developer/...) | (b) INDONESIA TOURISM : WEST JAVA - MAP (www.indonesia-tourism.com/ ...) | (c) An Introduction to Java Map Collection Classes (www.oracle.com/technology/...)
4. (a) Java Technology Concept Map (java.sun.com/developer/onlineTraining/...) | (b) IndoStreets - Java Map (www.indostreets.com/maps/java/) | (c) An Introduction to Java Map Collection Classes (www.theserverside.com/news/...)
5. (a) Science@NASA Home (science.nasa.gov/Realtime/...) | (b) Indonesia Regions and Islands Maps, Bali, Java, ... (www.maps2anywhere.com/Maps/...) | (c) Koders - Mappings.java (www.koders.com/java/)
6. (a) An Introduction to Java Map Collection Classes (www.oracle.com/technology/...) | (b) Indonesia City Street Map,... (www.maps2anywhere.com/Maps/...) | (c) Hibernate simplifies inheritance mapping (www.ibm.com/developerworks/java/...)
7. (a) Lonely Planet - Java Map (www.lonelyplanet.com/mapshells/) | (b) Maps Of Indonesia (www.embassyworld.com/maps/...) | (c) tmap 30.map Class Hierarchy (tmap.pmel.noaa.gov/...)
8. (a) ONJava.com: Java API Map (www.onjava.com/pub/a/onjava/api map/) | (b) Maps of Indonesia by Peter Loud (users.powernet.co.uk/...) | (c) Class Scope (jalbum.net/api/se/datadosen/util/Scope.html)
9. (a) GTA San Andreas : Sam (www.gtasanandreas.net/sam/) | (b) Maps of Indonesia by Peter Loud (users.powernet.co.uk/mkmarina/indonesia/) | (c) Class PrintSafeHashMap (jalbum.net/api/se/datadosen/...)
10. (a) INDONESIA TOURISM : WEST JAVA - MAP (www.indonesia-tourism.com/...) | (b) indonesiaphoto.com (www.indonesiaphoto.com/...) | (c) Java Pro - Union and Vertical Mapping of Classes (www.fawcette.com/javapro/...)
EVALUATION OF UCAIR
We now present some results on evaluating the two major UCAIR
functions: selective query expansion and result reranking based on
user clickthrough data.
5.1 Sample results
The query expansion strategy implemented in UCAIR is intentionally conservative to avoid misinterpretation of implicit user models
. In practice, whenever it chooses to expand the query, the expansion
usually makes sense. In Table 1, we show how UCAIR can
successfully distinguish two different search contexts for the query
"java map", corresponding to two different previous queries (i.e.,
"travel Indonesia" vs. "hashtable"). Due to implicit user modeling,
UCAIR intelligently figures out to add "Indonesia" and "class",
respectively, to the user's query "java map", which would otherwise
be ambiguous as shown in the original results from Google
on March 21, 2005. UCAIR's results are much more accurate than
Google's results and reflect personalization in search.
The eager implicit feedback component is designed to immediately
respond to a user's activity such as viewing a document. In
Figure 2, we show how UCAIR can successfully disambiguate an
ambiguous query "jaguar" by exploiting a viewed document summary
. In this case, the initial retrieval results using "jaguar" (shown
on the left side) contain two results about the Jaguar cars followed
by two results about the Jaguar software. However, after the user
views the web page content of the second result (about "Jaguar
car") and returns to the search result page by clicking "Back" button
, UCAIR automatically nominates two new search results about
Jaguar cars (shown on the right side), while the original two results
about Jaguar software are pushed down on the list (unseen from the
picture).
5.2 Quantitative evaluation
To further evaluate UCAIR quantitatively, we conduct a user
study on the effectiveness of the eager implicit feedback component
. It is a challenge to quantitatively evaluate the potential performance
improvement of our proposed model and UCAIR over
Google in an unbiased way [7]. Here, we design a user study,
in which participants would do normal web search and judge a
randomly and anonymously mixed set of results from Google and
UCAIR at the end of the search session; participants do not know
whether a result comes from Google or UCAIR.
We recruited 6 graduate students for this user study, who have
different backgrounds (3 computer science, 2 biology, and 1 chemistry).
<top>
<num> Number: 716
<title> Spammer arrest sue
<desc> Description: Have any spammers
been arrested or sued for sending unsolicited
e-mail?
<narr> Narrative: Instances of arrests,
prosecutions, convictions, and punishments
of spammers, and lawsuits against them are
relevant. Documents which describe laws to
limit spam without giving details of lawsuits
or criminal trials are not relevant.
</top>
Figure 3: An example of TREC query topic, expressed in a
form which might be given to a human assistant or librarian
We use query topics from the TREC (Text REtrieval Conference, http://trec.nist.gov/) 2004 Terabyte track [2]
and TREC 2003 Web track [4] topic distillation task in the way to
be described below.
An example topic from TREC 2004 Terabyte track appears in
Figure 3. The title is a short phrase and may be used as a query
to the retrieval system. The description field provides a slightly
longer statement of the topic requirement, usually expressed as a
single complete sentence or question. Finally the narrative supplies
additional information necessary to fully specify the requirement,
expressed in the form of a short paragraph.
Initially, each participant would browse 50 topics either from
Terabyte track or Web track and pick 5 or 7 most interesting topics.
For each picked topic, the participant would essentially do the normal
web search using UCAIR to find many relevant web pages by
using the title of the query topic as the initial keyword query. During
this process, the participant may view the search results and
possibly click on some interesting ones to view the web pages, just
as in a normal web search. There is no requirement or restriction
on how many queries the participant must submit or when the participant
should stop the search for one topic. When the participant
plans to change the search topic, he/she will simply press a button
to evaluate the search results before actually switching to the next topic.
Figure 2: Screen shots for result reranking
At the time of evaluation, 30 top ranked results from Google and
UCAIR (some are overlapping) are randomly mixed together so
that the participant would not know whether a result comes from
Google or UCAIR. The participant would then judge the relevance
of these results. We measure precision at top n (n = 5, 10, 20, 30)
documents of Google and UCAIR. We also evaluate precisions at
different recall levels.
Altogether, 368 documents from the Google search results and 429 documents from UCAIR were judged as relevant by the participants. Scatter plots of precision at top 10 and top 20 documents
are shown in Figure 4 and Figure 5 respectively (The scatter plot
of precision at top 30 documents is very similar to precision at top
20 documents). Each point of the scatter plots represents the precisions
of Google and UCAIR on one query topic.
Table 2 shows the average precision at top n documents among
32 topics. From Figure 4, Figure 5 and Table 2, we see that the
search results from UCAIR are consistently better than those from
Google by all the measures. Moreover, the performance improvement
is more dramatic for precision at top 20 documents than that
at precision at top 10 documents. One explanation for this is that
the more interaction the user has with the system, the more clickthrough
data UCAIR can be expected to collect. Thus the retrieval
system can build more precise implicit user models, which lead to
better retrieval accuracy.
Ranking Method | prec@5 | prec@10 | prec@20 | prec@30
Google | 0.538 | 0.472 | 0.377 | 0.308
UCAIR | 0.581 | 0.556 | 0.453 | 0.375
Improvement | 8.0% | 17.8% | 20.2% | 21.8%
Table 2: Average precision at top n documents for the 32 query topics
Figure 4: Precision at top 10 documents of UCAIR and Google
The plot in Figure 6 shows the precision-recall curves for UCAIR and Google, where it is clearly seen that the performance of UCAIR is consistently and considerably better than that of Google at all levels of recall.
CONCLUSIONS
In this paper, we studied how to exploit implicit user modeling to
intelligently personalize information retrieval and improve search
accuracy. Unlike most previous work, we emphasize the use of immediate
search context and implicit feedback information as well
as eager updating of search results to maximally benefit a user. We
presented a decision-theoretic framework for optimizing interactive
information retrieval based on eager user model updating, in
which the system responds to every action of the user by choosing
a system action to optimize a utility function. We further propose
specific techniques to capture and exploit two types of implicit
feedback information: (1) identifying related immediately preceding
query and using the query and the corresponding search results
to select appropriate terms to expand the current query, and (2)
exploiting the viewed document summaries to immediately rerank
any documents that have not yet been seen by the user. Using these
techniques, we develop a client-side web search agent (UCAIR)
on top of a popular search engine (Google). Experiments on web
search show that our search agent can improve search accuracy over
Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. The developed search agent thus can improve existing web search performance without any additional effort from the user.
Figure 5: Precision at top 20 documents of UCAIR and Google
Figure 6: Precision-recall curves of UCAIR and Google
ACKNOWLEDGEMENT
We thank the six participants of our evaluation experiments. This
work was supported in part by the National Science Foundation
grants IIS-0347933 and IIS-0428472.
REFERENCES
[1] S. M. Beitzel, E. C. Jensen, A. Chowdhury, D. Grossman, and O. Frieder. Hourly analysis of a very large topically categorized web query log. In Proceedings of SIGIR 2004, pages 321-328, 2004.
[2] C. Clarke, N. Craswell, and I. Soboroff. Overview of the TREC 2004 terabyte track. In Proceedings of TREC 2004, 2004.
[3] M. Claypool, P. Le, M. Waseda, and D. Brown. Implicit interest indicators. In Proceedings of Intelligent User Interfaces 2001, pages 33-40, 2001.
[4] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu. Overview of the TREC 2003 web track. In Proceedings of TREC 2003, 2003.
[5] W. B. Croft, S. Cronen-Townsend, and V. Lavrenko. Relevance feedback and personalization: A language modeling perspective. In Proceedings of the Second DELOS Workshop: Personalisation and Recommender Systems in Digital Libraries, 2001.
[6] Google Personalized. http://labs.google.com/personalized.
[7] D. Hawking, N. Craswell, P. B. Thistlewaite, and D. Harman. Results and challenges in web search evaluation. Computer Networks, 31(11-16):1321-1330, 1999.
[8] X. Huang, F. Peng, A. An, and D. Schuurmans. Dynamic web log session identification with statistical language models. Journal of the American Society for Information Science and Technology, 55(14):1290-1303, 2004.
[9] G. Jeh and J. Widom. Scaling personalized web search. In Proceedings of WWW 2003, pages 271-279, 2003.
[10] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of SIGKDD 2002, pages 133-142, 2002.
[11] D. Kelly and J. Teevan. Implicit feedback for inferring user preference: A bibliography. SIGIR Forum, 37(2):18-28, 2003.
[12] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In Proceedings of SIGIR'01, pages 111-119, 2001.
[13] T. Lau and E. Horvitz. Patterns of search: Analyzing and modeling web query refinement. In Proceedings of the Seventh International Conference on User Modeling (UM), pages 145-152, 1999.
[14] V. Lavrenko and B. Croft. Relevance-based language models. In Proceedings of SIGIR'01, pages 120-127, 2001.
[15] M. Mitra, A. Singhal, and C. Buckley. Improving automatic query expansion. In Proceedings of SIGIR 1998, pages 206-214, 1998.
[16] My Yahoo! http://mysearch.yahoo.com.
[17] G. Nunberg. As Google goes, so goes the nation. New York Times, May 2003.
[18] S. E. Robertson. The probability ranking principle in IR. Journal of Documentation, 33(4):294-304, 1977.
[19] J. J. Rocchio. Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323. Prentice-Hall Inc., 1971.
[20] G. Salton and C. Buckley. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41(4):288-297, 1990.
[21] G. Salton and M. J. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
[22] X. Shen, B. Tan, and C. Zhai. Context-sensitive information retrieval using implicit feedback. In Proceedings of SIGIR 2005, pages 43-50, 2005.
[23] X. Shen and C. Zhai. Exploiting query history for document ranking in interactive information retrieval (poster). In Proceedings of SIGIR 2003, pages 377-378, 2003.
[24] A. Singhal. Modern information retrieval: A brief overview. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, 24(4):35-43, 2001.
[25] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web search based on user profile constructed without any effort from users. In Proceedings of WWW 2004, pages 675-684, 2004.
[26] E. Volokh. Personalization and privacy. Communications of the ACM, 43(8):84-88, 2000.
[27] R. W. White, J. M. Jose, C. J. van Rijsbergen, and I. Ruthven. A simulated study of implicit feedback models. In Proceedings of ECIR 2004, pages 311-326, 2004.
[28] J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proceedings of SIGIR 1996, pages 4-11, 1996.
[29] C. Zhai and J. Lafferty. Model-based feedback in the KL-divergence retrieval model. In Proceedings of CIKM 2001, pages 403-410, 2001.
| user model;interactive retrieval;personalized search;information retrieval systems;user modelling;implicit feedback;retrieval accuracy;clickthrough information;UCAIR |
106 | Improvements of TLAESA Nearest Neighbour Search Algorithm and Extension to Approximation Search | Nearest neighbour (NN) searches and k nearest neighbour (k-NN) searches are widely used in pattern recognition and image retrieval. An NN (k-NN) search finds the closest object (closest k objects) to a query object. Although the definition of the distance between objects depends on applications, its computation is generally complicated and time-consuming. It is therefore important to reduce the number of distance computations. TLAESA (Tree Linear Approximating and Eliminating Search Algorithm) is one of the fastest algorithms for NN searches. This method reduces distance computations by using a branch and bound algorithm. In this paper we improve both the data structure and the search algorithm of TLAESA. The proposed method greatly reduces the number of distance computations. Moreover, we extend the improved method to an approximation search algorithm which ensures the quality of solutions. Experimental results show that the proposed method is efficient and finds an approximate solution with a very low error rate. | Introduction
NN and k-NN searches are techniques which find the
closest object (closest k objects) to a query object
from a database. These are widely used in pattern
recognition and image retrieval. We can see examples
of their applications to handwritten character
recognition in (Rico-Juan & Mico 2003) and (Mico
& Oncina 1998), and so on. In this paper we consider
NN (k-NN) algorithms that can work in any metric
space. For any x, y, z in a metric space, the distance
function d(·, ·) satisfies the following properties:
d(x, y) = 0 ⇔ x = y,
d(x, y) = d(y, x),
d(x, z) ≤ d(x, y) + d(y, z).
Although the definition of the distance depends on
applications, its calculation is generally complicated
and time-consuming. We particularly call the calculation
of d(·, ·) a distance computation.
For the NN and k-NN searches in metric spaces,
some methods that can manage a large set of objects
efficiently have been introduced(Hjaltason &
Samet 2003). They are categorized into two groups.
The methods in the first group manage objects with
a tree structure such as vp-tree(Yianilos 1993), M-tree
(Ciaccia, Patella & Zezula 1997), sa-tree (Navarro
2002) and so forth. The methods in the second group
manage objects with a distance matrix, which stores
the distances between objects. The difference between
two groups is caused by their approaches to
fast searching. The former aims at reducing the computational tasks in the search process by managing
objects effectively. The latter works toward reducing
the number of distance computations because generally
their costs are higher than the costs of other
calculations. In this paper we consider the latter approach
.
AESA (Approximating and Eliminating Search
Algorithm)(Vidal 1986) is one of the fastest algorithms
for NN searches in the distance matrix group.
The number of distance computations is bounded by
a constant, but the space complexity is quadratic.
LAESA (Linear AESA)(Mico, Oncina & Vidal 1994)
was introduced in order to reduce this large space
complexity. Its space complexity is linear and its
search performance is almost the same as that of
AESA. Although LAESA is more practical than
AESA, it is impractical for a large database because
calculations other than distance computations
increase. TLAESA (Tree LAESA)(Mico, Oncina &
Carrasco 1996) is an improvement of LAESA and reduces
the time complexity to sublinear. It uses two
kinds of data structures: a distance matrix and a binary
tree, called a search tree.
In this paper, we propose some improvements
of the search algorithm and the data structures of
TLAESA in order to reduce the number of distance
computations. The search algorithm follows the best
first algorithm. The search tree is transformed to a
multiway tree from a binary tree. We also improve
the selection method of the root object in the search
tree. These improvements are simple but very effective
. We then introduce the way to perform a k-NN
search in the improved TLAESA. Moreover, we propose
an extension to an approximation search algorithm
that can ensure the quality of solutions.
This paper is organized as follows. In section 2,
we describe the details of the search algorithm and
the data structures of TLAESA. In section 3, we propose
some improvements of TLAESA. In section 4,
we present an extension to an approximation search
algorithm. In section 5, we show some experimental
results. Finally, in section 6, we conclude this paper.
Figure 1: An example of the data structures in
TLAESA.
TLAESA
TLAESA uses two kinds of data structures: the distance
matrix and the search tree. The distance matrix
stores the distances from each object to some selected
objects. The search tree manages hierarchically all
objects. During the execution of the search algorithm,
the search tree is traversed and the distance matrix
is used to avoid exploring some branches.
2.1 Data Structures
We explain the data structures in TLAESA. Let P be the set of all objects and B be a subset consisting of selected objects called base prototypes. The distance matrix M is a two-dimensional array that stores the distances between all objects and base prototypes. The search tree T is a binary tree such that each node t corresponds to a subset S_t ⊆ P. Each node t has a pointer to the representative object p_t ∈ S_t, which is called a pivot, a pointer to a left child node l, a pointer to a right child node r and a covering radius r_t. The covering radius is defined as

r_t = max_{p ∈ S_t} d(p, p_t).    (1)

The pivot p_r of r is defined as p_r = p_t. On the other hand, the pivot p_l of l is determined so that

p_l = argmax_{p ∈ S_t} d(p, p_t).    (2)

Hence, we have the following equality:

r_t = d(p_t, p_l).    (3)

S_t is partitioned into two disjoint subsets S_r and S_l as follows:

S_r = {p ∈ S_t | d(p, p_r) < d(p, p_l)},
S_l = S_t - S_r.    (4)

Note that if t is a leaf node, S_t = {p_t} and r_t = 0. Fig. 1 shows an example of the data structures.
2.2 Construction of the Data Structures
We first explain the construction process of the search tree T. The pivot p_t of the root node t is randomly selected and S_t is set to P. The pivot p_l of the left child node and the covering radius r_t are defined by Eqs. (2) and (3). The pivot p_r of the right child node is set to p_t. S_t is partitioned into S_r and S_l by Eq. (4). These operations are recursively repeated until |S_t| = 1.
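A compact Python sketch of this construction, following Eqs. (1)-(4), is given below; the Node layout and function names are our own illustration (all objects are assumed to be distinct), not the authors' implementation.

class Node:
    # Search tree node: pivot, covering radius and children (Eqs. (1)-(4))
    def __init__(self, pivot, objects):
        self.pivot, self.objects = pivot, objects
        self.radius, self.left, self.right = 0.0, None, None

def build_tree(pivot, objects, d):
    node = Node(pivot, objects)
    if len(objects) > 1:
        far = max(objects, key=lambda p: d(p, pivot))                # pivot of the left child (Eq. 2)
        node.radius = d(pivot, far)                                  # covering radius (Eqs. 1 and 3)
        right_set = [p for p in objects if d(p, pivot) < d(p, far)]  # S_r (Eq. 4)
        left_set = [p for p in objects if p not in right_set]        # S_l = S_t - S_r
        node.right = build_tree(pivot, right_set, d)                 # right child keeps the pivot
        node.left = build_tree(far, left_set, d)
    return node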
The distance matrix M is constructed by selecting
base prototypes. This selection is important because
base prototypes are representative objects which are used to avoid some explorations of the tree.
Figure 2: Lower bound.
The ideal selection of them is that each object is
as far away as possible from other objects. In (Mico
et al. 1994), a greedy algorithm is proposed for this
selection. This algorithm chooses an object that maximizes
the sum of distances from the other base prototypes
which have already been selected. In (Mico &
Oncina 1998), another algorithm is proposed, which
chooses an object that maximizes the minimum distance
to the preselected base prototypes. (Mico &
Oncina 1998) shows that the latter algorithm is more
effective than the former one. Thus, we use the latter algorithm for the selection of base prototypes.
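A sketch of this maximin greedy selection is given below; starting from an arbitrary first prototype is our own simplification, and num_prototypes is assumed not to exceed the number of objects.

def select_base_prototypes(objects, d, num_prototypes):
    # Greedily add the object maximizing its minimum distance to the already chosen prototypes.
    base = [objects[0]]                                   # arbitrary starting prototype (assumption)
    while len(base) < num_prototypes:
        best = max((p for p in objects if p not in base),
                   key=lambda p: min(d(p, b) for b in base))
        base.append(best)
    return base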
The search efficiency depends not only on the selection
of base prototypes but also on the number
of them. There is a trade-off between the search
efficiency and the size of distance matrix, i.e. the
memory capacity. The experimental results in (Mico
et al. 1994) show that the optimal number of base
prototypes depends on the dimensionality dm of the
space. For example, the optimal numbers are 3, 16
and 24 if dm = 2, 4 and 8, respectively. The experimental
results also show that the optimal number
does not depend on the number of objects.
2.3 Search Algorithm
The search algorithm follows the branch and bound
strategy. It traverses the search tree T in the depth
first order. The distance matrix M is referred to whenever each node is visited in order to avoid unnecessary traversal of the tree T. Distances are computed only when a leaf node is reached.
Given a query object q, the distances between q and the base prototypes are computed. These results are stored in an array D. The object which is the closest to q in B is selected as the nearest neighbour candidate p_min, and the distance d(q, p_min) is recorded as d_min. Then, the traversal of the search tree T starts at the root node. The lower bound for the left child node l is calculated whenever each node t is reached, if it is not a leaf node. The lower bound of the distance between q and an object x is defined as

g_x = max_{b ∈ B} |d(q, b) - d(b, x)|.    (5)
See Fig. 2. Recall that d(q, b) was precomputed before the traversals and was stored in D. In addition, the value d(b, x) was also computed during the construction process and stored in the distance matrix M. Therefore, g_x is calculated without any actual distance computations. The lower bound g_x is not the actual distance d(q, x). Thus, it does not ensure that the number of visited nodes in the search becomes minimum. However, this evaluation is very cheap, and hence it enables a fast search. The search process accesses the left child node l if g_{p_l} ≤ g_{p_r}, or the right child node r if g_{p_l} > g_{p_r}. When a leaf node is reached, the distance is computed and both p_min and d_min are updated if the distance is less than d_min.
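Because both d(q, b) and d(b, x) are already available, the lower bound of Eq. (5) costs no distance computation at query time, as the following sketch makes explicit (D maps each base prototype b to d(q, b), and M[b][x] stores d(b, x); these dictionary shapes are our own assumption).

def lower_bound(x, D, M):
    # g_x = max over base prototypes b of |d(q, b) - d(b, x)|  (Eq. 5)
    return max(abs(D[b] - M[b][x]) for b in D)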
Figure 3: Pruning Process.
procedure NN search(q)
1: t ← root of T
2: d_min = ∞, g_{p_t} = 0
3: for b ∈ B do
4:   D[b] = d(q, b)
5:   if D[b] < d_min then
6:     p_min = b, d_min = D[b]
7:   end if
8: end for
9: g_{p_t} = max_{b ∈ B} |(D[b] - M[b, p_t])|
10: search(t, g_{p_t}, q, p_min, d_min)
11: return p_min
Figure 4: Algorithm for an NN search in TLAESA.
We explain the pruning process. Fig. 3 shows the pruning situation. Let t be the current node. If the inequality

d_min + r_t < d(q, p_t)    (6)

is satisfied, we can see that no object exists in S_t which is closer to q than p_min, and the traversal to node t is not necessary. Since g_{p_t} ≤ d(q, p_t), Eq. (6) can be replaced with

d_min + r_t < g_{p_t}.    (7)

Figs. 4 and 5 show the details of the search algorithm (Mico et al. 1996).
Improvements of TLAESA
In this section, we propose some improvements of
TLAESA in order to reduce the number of distance
computations.
3.1 Tree Structure and Search Algorithm
If we can evaluate the lower bounds g in the ascending order of their values, the search algorithm runs very fast. However, this is not guaranteed in TLAESA since the evaluation order is decided according to the tree structure. We show such an example in Fig. 6. In this figure, u, v and w are nodes. If g_{p_v} < g_{p_w}, it is desirable that v is evaluated before w. But, if g_{p_v} > g_{p_u}, w might be evaluated before v.
We propose the use of a multiway tree and a best-first-order
search instead of a binary tree and a depth-first search.
During the best-first search process, we can preferentially
traverse a node whose subset may contain the closest object.
Moreover, we can evaluate more nodes at one time by using
the multiway tree. The search tree in TLAESA has many nodes
which have a pointer to the same object. In the proposed
structure, we treat such nodes as one node. Each node t
corresponds to a subset S_t ⊆ P and has a pivot p_t, a
covering radius r_t = max_{p ∈ S_t} d(p, p_t) and pointers
to its children nodes.
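A node of the proposed multiway tree could be represented as in the Python sketch below; the class name and fields are our own illustrative choices, not part of the paper.

    from dataclasses import dataclass, field
    from typing import Any, List

    @dataclass
    class Node:
        pivot: Any                          # the pivot object p_t
        radius: float = 0.0                 # covering radius r_t = max_{p in S_t} d(p, p_t)
        children: List["Node"] = field(default_factory=list)

        def is_leaf(self) -> bool:
            # a leaf holds exactly one object, namely its pivot
            return not self.children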
procedure search(t, g_{p_t}, q, p_min, d_min)
 1:  if t is a leaf then
 2:    if g_{p_t} < d_min then
 3:      d = d(q, p_t)   {distance computation}
 4:      if d < d_min then
 5:        p_min = p_t, d_min = d
 6:      end if
 7:    end if
 8:  else
 9:    r ← the right child of t
10:    l ← the left child of t
11:    g_{p_r} = g_{p_t}
12:    g_{p_l} = max_{b ∈ B} |(D[b] - M[b, p_l])|
13:    if g_{p_l} < g_{p_r} then
14:      if d_min + r_l > g_{p_l} then
15:        search(l, g_{p_l}, q, p_min, d_min)
16:      end if
17:      if d_min + r_r > g_{p_r} then
18:        search(r, g_{p_r}, q, p_min, d_min)
19:      end if
20:    else
21:      if d_min + r_r > g_{p_r} then
22:        search(r, g_{p_r}, q, p_min, d_min)
23:      end if
24:      if d_min + r_l > g_{p_l} then
25:        search(l, g_{p_l}, q, p_min, d_min)
26:      end if
27:    end if
28:  end if

Figure 5: A recursive procedure for an NN search in TLAESA.
Figure 6: A case in which the search algorithm in
TLAESA does not work well.
We show a method to construct the tree structure in Fig. 7.
We first select the pivot p_t of the root node t at random
and set S_t to P. Then we execute the procedure
makeTree(t, p_t, S_t) in Fig. 7.
We explain the search process in the proposed structure.
The proposed method maintains a priority queue Q that
stores triples (node t, lower bound g_{p_t}, covering
radius r_t) in the increasing order of g_{p_t} - r_t.
Given a query object q, we calculate the distances between
q and the base prototypes and store their values in D.
Then the search process starts at the root of T. The
following steps are repeated until Q becomes empty. When t
is a leaf node, the distance d(q, p_t) is computed if
g_{p_t} < d_min. If t is not a leaf node, then for each
child node t' that satisfies the inequality

    g_{p_{t'}} < r_{t'} + d_min,        (8)

the lower bound g_{p_{t'}} is calculated and a triple
(t', g_{p_{t'}}, r_{t'}) is added to Q. Figs. 8 and 9 show
the details of the algorithm.
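The best-first traversal of Figs. 8 and 9 can be sketched in Python with a binary heap ordered by g_{p_t} - r_t, as below. This is a loose illustration under our own assumptions (the Node sketch above, a user-supplied distance function, and D and M as in the earlier sketch), not the paper's implementation.

    import heapq
    from itertools import count

    def nn_search(query, root, base_prototypes, M, distance):
        # precompute d(q, b) for every base prototype b (the array D)
        D = {b: distance(query, b) for b in base_prototypes}
        p_min = min(base_prototypes, key=lambda b: D[b])
        d_min = D[p_min]

        def g(pivot):
            # Eq. (5): lower bound on d(q, pivot) without new distance computations
            return max(abs(D[b] - M[b][pivot]) for b in base_prototypes)

        tie = count()   # tie-breaker so heap entries never compare Node objects
        g_root = g(root.pivot)
        heap = [(g_root - root.radius, next(tie), g_root, root)]
        while heap:
            _, _, g_t, node = heapq.heappop(heap)     # most promising node first
            if node.is_leaf():
                if g_t < d_min:                       # only then pay for a real distance
                    d = distance(query, node.pivot)
                    if d < d_min:
                        p_min, d_min = node.pivot, d
            else:
                for child in node.children:
                    g_c = g(child.pivot)
                    if g_c < child.radius + d_min:    # Eq. (8): the child may still matter
                        heapq.heappush(heap, (g_c - child.radius, next(tie), g_c, child))
        return p_min, d_min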
procedure makeTree(t, p_t, S_t)
 1:  t' ← new child node of t
 2:  if |S_t| = 1 then
 3:    p_{t'} = p_t and S_{t'} = {p_t}
 4:  else
 5:    p_{t'} = argmax_{p ∈ S_t} d(p, p_t)
 6:    S_{t'} = {p ∈ S_t | d(p, p_{t'}) < d(p, p_t)}
 7:    S_t = S_t - S_{t'}
 8:    makeTree(t', p_{t'}, S_{t'})
 9:    makeTree(t, p_t, S_t)
10:  end if

Figure 7: Method to construct the proposed tree structure.
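A possible Python rendering of the construction in Fig. 7 is sketched below, reusing the hypothetical Node class from Section 3.1; the tail call makeTree(t, p_t, S_t) of the figure is written as a loop, and objects are assumed to be hashable points with metric d. This is only our reading of the figure, not the authors' code.

    def make_tree(t, S_t, d):
        # Expand node t over the subset S_t (which contains t.pivot), after Fig. 7.
        t.radius = max(d(p, t.pivot) for p in S_t)          # covering radius r_t
        rest = set(S_t)
        while len(rest) > 1:                                 # else-branch of Fig. 7
            p_new = max(rest, key=lambda p: d(p, t.pivot))   # farthest object becomes a new pivot
            S_new = {p for p in rest if d(p, p_new) < d(p, t.pivot)}
            child = Node(pivot=p_new)
            t.children.append(child)
            make_tree(child, S_new, d)                       # expand the new child
            rest -= S_new
        t.children.append(Node(pivot=t.pivot))               # |S_t| = 1 case: leaf holding p_t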
procedure NN search(q)
 1:  t ← root of T
 2:  d_min = ∞, g_{p_t} = 0
 3:  for b ∈ B do
 4:    D[b] = d(q, b)
 5:    if D[b] < d_min then
 6:      p_min = b, d_min = D[b]
 7:    end if
 8:  end for
 9:  g_{p_t} = max_{b ∈ B} |(D[b] - M[b, p_t])|
10:  Q ← {(t, g_{p_t}, r_t)}
11:  while Q is not empty do
12:    (t, g_{p_t}, r_t) ← first element of Q
13:    search(t, g_{p_t}, q, p_min, d_min)
14:  end while
15:  return p_min

Figure 8: Proposed algorithm for an NN search.
3.2 Selection of Root Object
We focus on base prototypes in order to reduce node
accesses. The lower bound of the distance between a
query q and a base prototype b is

    g_b = max_{b' ∈ B} |d(q, b') - d(b', b)| = d(q, b).

This value is not an estimated distance but an actual
distance. If we can use an actual distance in the search
process, we can evaluate more effectively which nodes are
close to q. This means that the search is performed
efficiently if many base prototypes are visited in the
early stage. In other words, it is desirable that more
base prototypes are arranged in the upper part of the
search tree. Thus, in the proposed algorithm, we choose
the first base prototype b_1 as the root object.
3.3 Extension to a k-NN Search
LAESA was developed to perform NN searches, and
(Moreno-Seco, Mico & Oncina 2002) extended it so that
k-NN searches can be executed. In this section, we extend
the improved TLAESA to a k-NN search algorithm. The
extension requires only simple modifications of the
algorithm described above. We use a priority queue V for
storing the k nearest neighbour candidates and modify the
definition of d_min. V stores pairs (object p, distance
d(q, p)) in the increasing order of
procedure search(t, g_{p_t}, q, p_min, d_min)
 1:  if t is a leaf then
 2:    if g_{p_t} < d_min then
 3:      d = d(q, p_t)   {distance computation}
 4:      if d < d_min then
 5:        p_min = p_t, d_min = d
 6:      end if
 7:    end if
 8:  else
 9:    for each child t' of t do
10:      if g_{p_{t'}} < r_{t'} + d_min then
11:        g_{p_{t'}} = max_{b ∈ B} |(D[b] - M[b, p_{t'}])|
12:        Q ← Q ∪ {(t', g_{p_{t'}}, r_{t'})}
13:      end if
14:    end for
15:  end if

Figure 9: A procedure used in the proposed algorithm for an NN search.
procedure k-NN search(q, k)
 1:  t ← root of T
 2:  d_min = ∞, g_{p_t} = 0
 3:  for b ∈ B do
 4:    D[b] = d(q, b)
 5:    if D[b] < d_min then
 6:      V ← V ∪ {(b, D[b])}
 7:      if |V| = k + 1 then
 8:        remove the (k + 1)th pair from V
 9:      end if
10:      if |V| = k then
11:        (c, d(q, c)) ← kth pair of V
12:        d_min = d(q, c)
13:      end if
14:    end if
15:  end for
16:  g_{p_t} = max_{b ∈ B} |(D[b] - M[b, p_t])|
17:  Q ← {(t, g_{p_t}, r_t)}
18:  while Q is not empty do
19:    (t, g_{p_t}, r_t) ← first element of Q
20:    search(t, g_{p_t}, q, V, d_min, k)
21:  end while
22:  return the k objects in V

Figure 10: Proposed algorithm for a k-NN search.
d(q, p). d_min is defined as

    d_min = ∞          (|V| < k)
            d(q, c)    (|V| = k)        (9)

where c is the object of the kth pair in V.
We show in Figs. 10 and 11 the details of the k-NN search
algorithm. The search strategy essentially follows the
algorithm in Figs. 8 and 9, but the k-NN search algorithm
uses V instead of p_min.
(Moreno-Seco et al. 2002) shows that the optimal number of
base prototypes depends not only on the dimensionality of
the space but also on the value of k, and that the number
of distance computations increases as k increases.
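The candidate queue V and the d_min of Eq. (9) can be maintained with a small helper such as the Python sketch below (our own illustration; objects are assumed comparable, e.g. integer indices, so that ties in distance can be ordered).

    import bisect
    import math

    class Candidates:
        # Keeps the k best (distance, object) pairs seen so far, playing the role of V.
        def __init__(self, k):
            self.k = k
            self.items = []                 # sorted by distance, at most k entries

        def d_min(self):
            # Eq. (9): infinity while fewer than k candidates, else the kth distance
            return self.items[-1][0] if len(self.items) == self.k else math.inf

        def add(self, obj, dist):
            bisect.insort(self.items, (dist, obj))
            if len(self.items) > self.k:    # drop the (k+1)th pair, as in Figs. 10 and 11
                self.items.pop()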
4 Extension to an Approximation Search
procedure search(t, g_{p_t}, q, V, d_min, k)
 1:  if t is a leaf then
 2:    if g_{p_t} < d_min then
 3:      d = d(q, p_t)   {distance computation}
 4:      if d < d_min then
 5:        V ← V ∪ {(p_t, d(q, p_t))}
 6:        if |V| = k + 1 then
 7:          remove the (k + 1)th pair from V
 8:        end if
 9:        if |V| = k then
10:          (c, d(q, c)) ← kth pair of V
11:          d_min = d(q, c)
12:        end if
13:      end if
14:    end if
15:  else
16:    for each child t' of t do
17:      if g_{p_{t'}} < r_{t'} + d_min then
18:        g_{p_{t'}} = max_{b ∈ B} |(D[b] - M[b, p_{t'}])|
19:        Q ← Q ∪ {(t', g_{p_{t'}}, r_{t'})}
20:      end if
21:    end for
22:  end if

Figure 11: A procedure used in the proposed algorithm for a k-NN search.
In this section, we propose an extension to an approximation
k-NN search algorithm which ensures the quality of the
solutions. Consider the procedure in Fig. 11. We replace
the 4th line with

    if d < α·d_min then

and the 17th line with

    if g_{p_{t'}} < r_{t'} + α·d_min then

where α is a real number such that 0 < α ≤ 1. The pruning
process gets more efficient as these conditions become
tighter.
The proposed method ensures the quality of the solutions:
we can bound the approximation ratio to an optimal solution
using α. Let a be the nearest neighbour object and a' be
the nearest neighbour candidate. If our method misses a and
gives a' as the answer, the inequality

    g(q, a) ≥ α·d(q, a')        (10)

is satisfied, and a is eliminated from the targeted objects.
Since g(q, a) ≤ d(q, a), we obtain

    d(q, a') ≤ (1/α)·d(q, a).        (11)

Thus, the approximate solutions are within 1/α times the optimal solution.
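In code, the α-approximate variant only tightens the two tests, for example as in the following Python sketch (our own names, using a candidate container like the Candidates sketch above; with α = 1 it reduces to the exact algorithm).

    def visit_leaf(node, g_t, query, V, distance, alpha=1.0):
        # leaf case of Fig. 11 with the approximate test of this section
        if g_t < V.d_min():
            d = distance(query, node.pivot)        # the only real distance computation
            if d < alpha * V.d_min():              # replaced 4th line: stricter by the factor alpha
                V.add(node.pivot, d)

    def child_is_promising(g_child, r_child, V, alpha=1.0):
        # replaced 17th line: enqueue the child only if it can still contain a useful object
        return g_child < r_child + alpha * V.d_min()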
5 Experiments
In this section we show some experimental results and
discuss them. We tested on an artificial set of random
points in the 8-dimensional Euclidean space, and used the
Euclidean distance as the distance function. We evaluated
the number of distance computations and the number of
accesses to the distance matrix in 1-NN and 10-NN searches.
Figure 12: Relation of the number of distance computations to the number of base prototypes (x-axis: number of base prototypes, 0-120; y-axis: number of distance computations, 0-300; curves: TLAESA(1-NN), TLAESA(10-NN), Proposed(1-NN), Proposed(10-NN)).
              1-NN    10-NN
  TLAESA        40       80
  Proposed      25       60

Table 1: The optimal number of base prototypes.
5.1 The Optimal Number of Base Prototypes
We first determined experimentally the optimal number of
base prototypes. The number of objects was fixed to 10000.
We executed 1-NN and 10-NN searches for various numbers of
base prototypes, and counted the number of distance
computations. Fig. 12 shows the results. From this figure,
we chose the numbers of base prototypes shown in Table 1.
We can see that the values for the proposed method are
smaller than those for TLAESA. This means that the proposed
method can achieve better performance with a smaller
distance matrix. We used the values in Table 1 in the
following experiments.
5.2 Evaluation of Improvements
We tested the effects of our improvements described in
Sections 3.1 and 3.2. We counted the numbers of distance
computations in 1-NN and 10-NN searches for various numbers
of objects. The results are shown in Figs. 13 and 14. As in
TLAESA, the number of distance computations in the proposed
method does not depend on the number of objects. In both
1-NN and 10-NN searches, it is about 60% of the number of
distance computations in TLAESA. Thus we can see that our
improvements are very effective.
In the search algorithms of TLAESA and the proposed method,
various calculations are performed other than distance
computations. The cost of the major part of these
calculations is proportional to the number of accesses to
the distance matrix. We therefore counted the number of
accesses to the distance matrix. We examined the following
two cases:
(i) TLAESA vs. TLAESA with the improved selection of the
root object.
(ii) The proposed method with only the improvement of the
tree structure and search algorithm vs. the proposed method
with the improved selection of the root object as well.
In case (i), the number of accesses to the distance matrix
is reduced by 12% in 1-NN searches and by 4.5% in 10-NN
searches. In case (ii), it is reduced by 6.8% in 1-NN
searches and by 2.7% in 10-NN searches.
Figure 13: The number of distance computations in 1-NN searches (x-axis: number of objects, 0-10000; y-axis: number of distance computations, 0-100; curves: TLAESA, Proposed).
Figure 14: The number of distance computations in 10-NN searches (x-axis: number of objects, 0-10000; y-axis: number of distance computations, 0-300; curves: TLAESA, Proposed).
Thus we can see that the improved selection of the root
object is effective.

5.3 Evaluation of Approximation Search
We tested the performance of the approximation search
algorithm. We compared the proposed method to Ak-LAESA,
which is the approximation search algorithm proposed in
(Moreno-Seco, Mico & Oncina 2003). Each time a distance is
computed in Ak-LAESA, the nearest neighbour candidate is
updated and its value is stored. When the nearest neighbour
object is found, the best k objects are chosen from the
stored values. In Ak-LAESA, the number of distance
computations of the k-NN search is exactly the same as that
of the NN search.
To compare the proposed method with Ak-LAESA, we examined
how many objects in the approximate solutions exist in the
optimal solutions. Thus, we define the error rate E as
follows:

    E [%] = |{x_i | x_i ∉ Opt, i = 1, 2, ..., k}| / k × 100        (12)

where {x_1, x_2, ..., x_k} is the set of k objects obtained
by an approximation algorithm and Opt is the set of the k
closest objects to the query object.
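The error rate of Eq. (12) is straightforward to compute; a minimal Python sketch (ours) is:

    def error_rate(approx, optimal):
        # Eq. (12): percentage of returned objects that are not among the k optimal ones
        opt = set(optimal)
        return 100.0 * sum(1 for x in approx if x not in opt) / len(approx)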
Fig. 15 shows the error rate when the value of α is varied
in 10-NN searches. Fig. 16 also shows the relation of the
number of distance computations to the value of α in 10-NN
searches. In the range α ≥ 0.5, the proposed method shows a
lower error rate than
Figure 15: Error rate in 10-NN searches (x-axis: value of α, 0-1; y-axis: error rate [%], 0-100; curves: Ak-LAESA, Proposed).
Figure 16: Relation of the number of distance computations to the value of α in 10-NN searches (x-axis: value of α, 0-1; y-axis: number of distance computations, 0-160; curves: Ak-LAESA, Proposed).
Ak-LAESA. In particular, the error rate of the proposed
method is almost 0 in the range α ≥ 0.9. From the two
figures, we can see that we can control the error rate and
the number of distance computations by changing the value
of α. For example, the proposed method with α = 0.9 reduces
the number of distance computations by about 28.6% and its
error rate is almost 0.
Then we examined the accuracy of the approximate solutions.
We used α = 0.5 for the proposed method because the error
rate of the proposed method with α = 0.5 is equal to that
of Ak-LAESA. We performed 10-NN searches 10000 times for
each method and examined the distribution of the kth
approximate solution against the kth optimal solution. We
show the results in Figs. 17 and 18. In each figure, the
x axis represents the distance between a query object q and
the kth object in the optimal solution, and the y axis
shows the distance between q and the kth object in the
approximate solution. A point near the line y = x indicates
that the kth approximate solution is very close to the kth
optimal solution. In Fig. 17, many points are widely
distributed. In the worst case, some approximate solutions
reach about 3 times the optimal solution. From these
figures, we can see that the accuracy of the solutions
obtained by the proposed method is superior to that of
Ak-LAESA. We also show the result with α = 0.9 in Fig. 19.
Most points lie near the line y = x.
Though Ak-LAESA can drastically reduce the number of
distance computations, its approximate solutions are often
far from the optimal solutions. On the other hand, the
proposed method can reduce the number of distance
computations to some extent with a
Figure 17: The distribution of the approximate solutions by Ak-LAESA against the optimal solutions (x-axis: distance to the kth optimal solution; y-axis: distance to the kth approximate solution).
Figure 18: The distribution of the approximate solutions by the proposed method with α = 0.5 against the optimal solutions (x-axis: distance to the kth optimal solution; y-axis: distance to the kth approximate solution).
very low error rate. Moreover, the accuracy of its
approximate solutions is superior to that of Ak-LAESA.

6 Conclusions
In this paper, we proposed some improvements of TLAESA. In
order to reduce the number of distance computations in
TLAESA, we changed the search algorithm from depth-first to
best-first order and the tree structure from a binary tree
to a multiway tree. In 1-NN and 10-NN searches in an
8-dimensional space, the proposed method reduced the number
of distance computations by about 40%. We then proposed a
method for selecting the root object of the search tree.
This improvement is very simple, but it is effective in
reducing the number of accesses to the distance matrix.
Finally, we extended our method to an approximation k-NN
search algorithm that can ensure the quality of the
solutions. The approximate solutions of the proposed method
are within 1/α times the optimal solutions. Experimental
results show that the proposed method can reduce the number
of distance computations with a very low error rate by
selecting an appropriate value of α, and that the accuracy
of its solutions is superior to that of Ak-LAESA. From these
viewpoints, the method presented in this paper is very
effective when the distance computations are time-consuming.
Figure 19: The distribution of the approximate solutions by the proposed method with α = 0.9 against the optimal solutions (x-axis: distance to the kth optimal solution; y-axis: distance to the kth approximate solution).
References
Ciaccia, P., Patella, M. & Zezula, P. (1997), M-tree: An efficient access method for similarity search in metric spaces, in 'Proceedings of the 23rd International Conference on Very Large Data Bases (VLDB'97)', pp. 426-435.
Hjaltason, G. R. & Samet, H. (2003), 'Index-driven similarity search in metric spaces', ACM Transactions on Database Systems 28(4), 517-580.
Mico, L. & Oncina, J. (1998), 'Comparison of fast nearest neighbour classifiers for handwritten character recognition', Pattern Recognition Letters 19(3-4), 351-356.
Mico, L., Oncina, J. & Carrasco, R. C. (1996), 'A fast branch & bound nearest neighbour classifier in metric spaces', Pattern Recognition Letters 17(7), 731-739.
Mico, M. L., Oncina, J. & Vidal, E. (1994), 'A new version of the nearest-neighbour approximating and eliminating search algorithm (AESA) with linear preprocessing time and memory requirements', Pattern Recognition Letters 15(1), 9-17.
Moreno-Seco, F., Mico, L. & Oncina, J. (2002), 'Extending LAESA fast nearest neighbour algorithm to find the k-nearest neighbours', Lecture Notes in Computer Science - Lecture Notes in Artificial Intelligence 2396, 691-699.
Moreno-Seco, F., Mico, L. & Oncina, J. (2003), 'A modification of the LAESA algorithm for approximated k-NN classification', Pattern Recognition Letters 24(1-3), 47-53.
Navarro, G. (2002), 'Searching in metric spaces by spatial approximation', The VLDB Journal 11(1), 28-46.
Rico-Juan, J. R. & Mico, L. (2003), 'Comparison of AESA and LAESA search algorithms using string and tree-edit-distances', Pattern Recognition Letters 24(9-10), 1417-1426.
Vidal, E. (1986), 'An algorithm for finding nearest neighbours in (approximately) constant average time', Pattern Recognition Letters 4(3), 145-157.
Yianilos, P. N. (1993), Data structures and algorithms for nearest neighbor search in general metric spaces, in 'SODA '93: Proceedings of the fourth annual ACM-SIAM Symposium on Discrete algorithms', pp. 311-321.
107 | Improving the Static Analysis of Embedded Languages via Partial Evaluation | Programs in embedded languages contain invariants that are not automatically detected or enforced by their host language. We show how to use macros to easily implement partial evaluation of embedded interpreters in order to capture invariants encoded in embedded programs and render them explicit in the terms of their host language . We demonstrate the effectiveness of this technique in improving the results of a value flow analysis. | 1. One Language, Many Languages
Every practical programming language contains small programming
languages. For example, C's
printf
[18] supports a string-based
output formatting language, and Java [3] supports a declarative
sub-language for laying out GUI elements in a window. PLT
Scheme [9] offers at least five such languages: one for formatting
console output; two for regular expression matching; one for sending
queries to a SQL server; and one for laying out HTML pages.
In many cases, though not always, programs in these embedded
special-purpose programming languages are encoded as strings. Library
functions consume these strings and interpret them. Often the
interpreters consume additional arguments, which they use as inputs
to the little programs.
Take a look at this expression in PLT Scheme:
(regexp-match "http://([a-z.]*)/([a-z]*)/" line)
The function
regexp-match
is an interpreter for the regular expression
language. It consumes two arguments: a string in the regular
expression language, which we consider a program, and another
string, which is that program's input. A typical use looks like the
example above. The first string is actually specified at the call site,
while the second string is often given by a variable or an expression
that reads from an input port. The interpreter attempts to match the
regular expression and the second string.
In PLT Scheme, the regular expression language allows programmers
to specify subpatterns via parentheses. Our running example
contains two such subexpressions: ([a-z.]*) and ([a-z]*). If
the regular expression interpreter fails to match the regular expression
and the string, it produces false (#f); otherwise it produces a
list with n + 1 elements: the first one for the overall match plus one
per subexpression. Say line stands for

"http://aaa.bbb.edu/zzz/"

In this case, the regular expression matches the string, and
regexp-match produces the list
(list "http://aaa.bbb.edu/zzz/"
"aaa.bbb.edu"
"zzz")
The rest of the Scheme program extracts the pieces from this list
and computes with them.
The
regexp-match
expression above is a simplified excerpt from
the PLT Web Server [12]. Here is a slightly larger fragment:
(let ([r (regexp-match
"http://([a-z.]*)/([a-z]*)/" line)])
(if r
(process-url (third r) (dispatch (second r)))
(log-error line)))
Notice how the then-clause of the if-expression extracts the second
and third elements from r without any checks to confirm the length
of the list. After all, the programmer knows that if r is not false,
then it is a list of three elements. The embedded program says so;
it is a regular expression and contains two subexpressions.
Unfortunately, the static analysis tools for PLT Scheme cannot
reason on both levels.
MrFlow [20], a static debugger, uses
a constraint-based analysis [22], a version of set-based analysis
[2, 13, 10], to analyze the program and discover potential errors
. If it finds one it can draw a flow graph from the source of the
bad value to the faulty primitive operation. For the
let
-expression
above, MrFlow finds that both
(second r)
and
(third r)
may
raise runtime errors because
r
may not contain enough elements.
In this paper, we show how using Scheme macros to partially evaluate
calls to embedded interpreters such as
regexp-match
greatly
increases the precision of the static analysis. Since we use macros,
library designers can easily implement the partial evaluation, rather
than relying on the host language implementor as they must for ad-hoc
solutions.
In Section 2 we give a brief overview of set-based analysis and MrFlow
. In the next section we explain three examples of embedded
languages and the problems they cause for MrFlow's static analysis
. We then present in Section 4 our general approach to solving
those problems, based on macros. An overview of the macro
system we use is given in Section 5. Section 6 then presents a general
technique for translating embedded interpreters into macros. In
Section 7, we explain the properties of the static analysis that enable
it to find more results in partially evaluated code. Finally, in
Section 8, we show how partially evaluating Scheme programs that
contain embedded programs helps MrFlow in our three examples.
Section 9 presents related work and we conclude in Section 10.
2. Set-Based Analysis
To explain how the results of a static analysis can be improved by
using partial evaluation of embedded languages, we first need to
describe such an analysis. MrFlow, a static analyzer for DrScheme,
uses a set-based value flow analysis to compute an approximation
of the values that each subexpression of a program might evaluate to
at runtime [22]. The approximation computed for each expression
is a set of abstract values that can be displayed on demand. The
debugger can also draw arrows showing the flow of values through
the program.
Figure 1 displays an example of analyzing a simple program.
In the box next to the term
3
is the abstract value for that term,
meaning that at runtime the term
3
might evaluate to the value 3.
The arrow starting from the term
3
shows that at runtime the value
3 might flow into the argument
x
of the function
f
and from there
flow into the reference to the variable
x
in the body of
f
. There
is a second reference to
x
in
f
--the corresponding arrow is not
shown in this example. In the box next to the call to the Scheme
primitive
gcd
is the abstract value for the result of that call. Since
the analysis never tries to evaluate expressions, it uses the abstract
value integer to represent the result of the primitive call, if any,
which is a conservative approximation of the actual value that that
call might compute at runtime.
The biggest box displays the type of the adjacent
if
-expression,
which is the union of the integer abstract value computed by the
gcd
primitive and of the string "hello". Arrows show that the result of
the
if
-expression can come from both the then- and else-branches:
the analysis does not attempt to apply the
number?
predicate to the
variable
x
, so it conservatively assumes that both branches of the
if
-expression may be evaluated at runtime.
3. Three Embedded Languages
We now turn to embedded languages, which are a useful technique
for establishing abstraction layers for a particular design space.
Functional languages are well-suited to writing interpreters for embedded
languages, in which the higher-level embedded language is
implemented as a set of functions in the general purpose host language
and has access to all of its features [15, 16, 24]. But these
abstractions come at a cost for program analysis. In particular, tools
built to examine programs of the host language cannot derive information
for the programs in the embedded languages because they
do not understand the semantics of those languages.
In this section we demonstrate three examples of practical embedded
languages for Scheme and show their negative effects on static
analysis. In the first example, properties of the embedded language
create the possibility of errors that can go undetected by the analysis
. In the next two examples, undetected properties lead to analyses
that are too conservative, resulting in many false positives; that is,
the analysis reports errors that can never actually occur.
3.1
Format Strings
The PLT Scheme library provides a
format
function, similar to C's
sprintf
, which generates a string given a format specifier and a
variable number of additional arguments. The format specifier is
a string containing some combination of literal text and formatting
tags. These tags are interpreted along with the remaining arguments
to construct a formatted string. The
format
function is thus an
interpreter for the format specifier language. The format specifier
is a program in this language and the additional arguments are its
inputs.
To construct its output, the
format
function requires the number
of extra arguments to match the number of format tags, and these
arguments must be of the appropriate type. Consider the example
of displaying an ASCII character and its encoding in hexadecimal:
(format "~c = 0x~x" c n)
In this example, the format specifier, which contains the format tags
"~c"
and
"~x"
and some literal text, expects to consume exactly
two arguments. These arguments must be a character and an integer
, respectively. An incorrect number of arguments or a type
mismatch results in a runtime error.
Unfortunately analysis tools for Scheme such as MrFlow have no
a priori knowledge of the semantics of embedded languages. The
analysis cannot infer any information about the dependencies between
the contents of the format string and the rest of the arguments
without knowledge of the syntax and semantics of the
format
language
. As a result the analysis cannot predict certain categories of
runtime errors, as shown in Figure 2. The application of
format
is
not underlined as an error, even though its arguments appear in the
wrong order and the analysis correctly computes the types of both
c
and
n
.
Figure 1. Analyzing a simple program with MrFlow.
3.2
Regular Expressions
Regular expressions are used in all kinds of Scheme programs. The
language of regular expression patterns is embedded in Scheme as
strings. A library of functions interpret these strings as programs
that consume additional arguments as input strings and return either
a list of matched subpatterns or
#f
to indicate failure.
Consider again the excerpt from the PLT Web Server from Section
1. Programmers know that if the match succeeds, then the
result list contains exactly three elements: the result of the entire
match, and the results of the two subpattern matches. Again the
analysis is unable to discover this invariant on its own. Figure 3
shows the results of analyzing the sample code with MrFlow. The
list accessors
second
and
third
are underlined in red because the
analysis cannot prove that their arguments are sufficiently long lists.
Programmers then must either go through each of these false positives
and prove for themselves that the errors can never occur, or
else learn to ignore some results of MrFlow. Neither option is desirable
. The former creates more work for the programmer, rather
than less; the latter is unsafe and easily leads to overlooked errors.
3.3
SchemeQL
SchemeQL [28] is an embedded language for manipulating relational
databases in Scheme. Unlike the string-based
format
language
, SchemeQL programs consist of special forms directly embedded
inside Scheme. The SchemeQL implementation provides
a set of macros that recognize these forms and expand them into
Scheme code. A typical database query in SchemeQL might look
like this:
(direct-query (name age phone) directory)
corresponding to the SQL statement
SELECT name, age, phone FROM directory
The result of executing a query is a lazy stream representing a cursor
over the result set from the database server. Each element in the
stream is a list of values representing a single row of the result set.
The cursor computes the rows by need when a program selects the
next sub-stream.
Programmers know that the number of elements in each row of a
cursor is equal to the number of columns in the original request.
Our analysis, however, cannot discover this fact automatically. Figure
4 shows the results of an analysis of a SchemeQL query in the
context of a trivial Scheme program. The example query consists of
exactly three columns, and the code references the third element of
the first row. This operation can never fail, but the analysis is unable
to prove this. Instead, it conservatively computes that
row
is a list
of unknown length: rec-type describes a recursive abstract value,
which in the present case is the union of
null
and a pair consisting
of any value (top) and the abstract value itself, creating a loop in the
abstract value that simulates all possible list lengths. MrFlow therefore
mistakenly reports an error by underlining the primitive
third
in red, since, according to the analysis,
row
might have fewer than
three elements at runtime.
4. Macros for Partial Evaluation
All the embedded languages presented in the previous section have
one thing in common: they can encode invariants that are not visible
to any analysis of the general purpose language in which they
are embedded. These invariants can be exposed to analyses in two
ways:
by extending the analyses in an ad-hoc manner for each embedded
language so that they understand its semantics, or
by partially evaluating the embedded interpreters with regard
to the embedded programs to make the invariants in the embedded
programs explicit as invariants in the host language,
whenever possible.
The first solution requires modifying each analysis to support each
embedded language. The second solution can simply be implemented
from within the host language through the old Lisp trick of
using "compiler macros" [25] as a light-weight partial evaluation
mechanism. In the present case, instead of using partial evaluation
to optimize programs for speed, we use it to increase the precision
of program analyses.
While Lisp's compiler macros are different from regular Lisp
macros, Scheme's macro system is powerful enough that the equivalent
of Lisp's compiler macros can be implemented as regular
Scheme macros. The partial evaluation of embedded interpreters
then simply involves replacing the libraries of functions implementing
the interpreters with libraries of semantically equivalent macros.¹
This has the additional advantage that it can be done by
the author of the library of functions, as opposed to the compiler's
or analyzer's implementor in the case of ad-hoc extensions.

Figure 2. Imprecise analysis of the format primitive.

Figure 3. Imprecise analysis of regexp-match.

Figure 4. Imprecise analysis of a SchemeQL query.
Of course, the partial evaluation of embedded interpreters is only
possible when their input programs are known statically. For example
, it is not possible to expand a call to
format
if the formatting
string given as its first argument is computed at runtime. The
programmer therefore makes a trade-off between the precision of
analyses and how dynamic the code can be. In practice, though,
the embedded programs are often specified statically in user code.
Combined with the simplicity of implementing partial evaluation
with macros, this makes for a useful technique for improving the
precision of analyses at a low cost.
In the next two sections, we describe some of the important features
of the Scheme macro system and then explain how we make use of
this system to partially evaluate the interpreters of these embedded
languages to improve the results of static analysis.
5. Macros in Scheme
Scheme has a powerful macro system for extending the language
with derived expression forms that can be rewritten as expressions
in the core language. Macros serve as a means of syntactic abstraction
. Programmers can generalize syntactic patterns in ways that
are not possible with functional abstraction. This technology also
provides a hook into the standard compiler tool chain by allowing
programmers to implement additional program transformations before
compilation. In this section we describe the basics of standard
Scheme macros and introduce identifier macros, a generalization of
the contexts in which macros can be matched.
5.1
Rule-Based Macros
The
define-syntax
special form allows the programmer to extend
Scheme with derived expression forms. Before compilation or execution
of a Scheme program, all occurrences of these derived forms
are replaced with their specified expansions.
The
syntax-rules
form specifies macro expansions as rewrite
rules. Consider the following simple macro, which defines a short-circuit
logical
or
as a derived form:
(define-syntax or
(syntax-rules ()
[(or e1 e2)
(let ([tmp e1])
(if tmp tmp e2))]))
The macro defines a single rewrite rule, consisting of a pattern and
a template. The pattern matches the
or
keyword in operator position
followed by two pattern variables
e1
and
e2
, each matching
an arbitrary subexpression in argument position. The template directs
the macro expansion to replace occurrences of the matched
pattern with a
let
-expression constructed from the matched subexpressions
.
1
The transformation is not strictly speaking partial evaluation:
the reductions performed by the macros are not exactly the ones performed
by the embedded interpreters. However, the macros share
the techniques and issues of partial evaluation since they simulate
parts of the interpreters, and it is therefore useful to describe them
as such.
Notice that this
or
form cannot be defined as a regular function
in Scheme. The second argument is only evaluated if the first argument
evaluates to false. Since Scheme has a strict evaluation
semantics, a functional
or
would necessarily evaluate both of its
arguments before computing a result. Controlling the evaluation of
expressions is an important use of Scheme macros. Macros can also
abstract over other syntactic forms in ways that functions cannot by
expanding into second-class language constructs such as
define
.
5.2
Lexical Scope
Macros written with the standard Scheme
syntax-rules
mechanism
are both hygienic and referentially transparent. Hygienic
macro expansion guarantees that binding forms inside the definition
of the macro template do not capture free variables in macro
arguments. Consider the following use of our or macro:²

(or other tmp)
⇒ (let ([tmp1 other])
    (if tmp1 tmp1 tmp))
Hygienic expansion automatically renames the variable bound inside
the expanded macro template to avoid capturing the free variable
in the macro argument.
Referential transparency complements hygiene by ensuring that
free variables inside the macro template cannot be captured by the
context of the macro call site. For example, if the context that invokes
or
rebinds the
if
name, the expansion algorithm renames the
binding in the caller's context to avoid capturing the variable used
in the template body:
(let ([if 3])
  (or if #f))
⇒ (let ([if1 3])
    (let ([tmp if1])
      (if tmp tmp #f)))
The combination of hygiene and referential transparency produces
macros that are consistent with Scheme's rules of lexical scope and
can be invoked anywhere in a program without the danger of unexpected
variable capture.
3
5.3
Identifier Macros
The
syntax-rules
form only matches expressions in which the
macro name occurs in "application position," i.e., as the operator in
an application expression. References to a
syntax-rules
macro in
other contexts result in syntax errors:
(fold or #f ls)  ⇒  syntax error
PLT Scheme's syntax-id-rules form is similar to
syntax-rules but matches occurrences of the macro keyword
in arbitrary expression contexts: in operator position, operand
position, or as the target of an assignment.
2 We use the convention of representing macro expansion with a
double-arrow (⇒) and ordinary (runtime) evaluation with a single-arrow (→).
3
Macros can also be defined in and exported from modules in
PLT Scheme [11].
The following macro demonstrates a hypothetical use of
syntax-id-rules
:
(define-syntax clock
(syntax-id-rules (set!)
[(set! clock e) (set-clock! e)]
[(clock e) (make-time-stamp (get-clock) e)]
[clock (get-clock)]))
The list of identifiers following
syntax-id-rules
, which was
empty in our previous examples, now includes the
set!
identifier
, indicating that
set!
is to be treated as a keyword rather than a
pattern variable. The first rewrite rule matches expressions in which
the
clock
name occurs as the target of an assignment. The second
rule is familiar, matching the macro in application position. The final
rule matches the identifier
clock
in any context not matched by
the previous two rules. In addition to the usual application context,
we can use the clock macro in an argument position:

(+ clock 10)  ⇒  (+ (get-clock) 10)

or as a set! target:

(set! clock 5)  ⇒  (set-clock! 5)
5.4
Programmatic Macros
The language of patterns and templates recognized by
syntax-rules and syntax-id-rules is actually a special
case of Scheme macros. In general, the define-syntax form
binds a transformer procedure:

(define-syntax name
  (lambda (stx)
    etc.))
The argument to the transformer procedure is a syntax object, which
is similar to an S-expression representing quoted code, but which
also encapsulates information about the lexical context of the code,
such as source file location and variable bindings. This context
information is essential in allowing DrScheme's language tools to
trace errors and binding relationships back to the original source location
in the user's code where a macro is invoked. Because syntax
objects are so similar to quoted data, the standard library includes
the
syntax-object->datum
procedure, which strips the lexical information
from a syntax object and returns its corresponding datum.
For example, the datum corresponding to a syntax object representing
a literal number is its numeric value, the datum corresponding
to an identifier is a symbol representing the identifier's name, and
so on.
A syntax transformer procedure accepts as its argument a syntax
object representing the expression that invoked the macro,
and produces a new syntax object, which the macro expansion
algorithm uses to replace the original expression.
All Scheme
macros are syntax transformers; although the
syntax-rules
and
syntax-id-rules
forms do not use the
lambda
notation, they are
themselves implemented as macros that expand to syntax transformer
procedures.
The
syntax-case
facility allows the construction of macros with
pattern matching, as with
syntax-rules
and
syntax-id-rules
,
but with arbitrary expressions in place of templates for the result
expressions. For example, the above
or
macro would be defined as:
(define-syntax or
(lambda (stx)
(syntax-case stx ()
[(or e1 e2)
#'(let ([tmp e1])
(if tmp tmp e2))])))
The macro is almost the same as before, but for two refinements.
First, the
syntax-case
form takes the argument
stx
explicitly,
whereas
syntax-rules
implicitly defines a transformer procedure
and operates on the procedure argument. Second, the result expression
is prefixed by the syntax-quoting
#'
operator, which is
analogous to Scheme's
quote
operator
'
. Whereas an expression
prefixed with
'
evaluates to a quoted S-expression, a
#'
expression
becomes a quoted syntax object that also includes lexical information
. Similarly, the quasisyntax operator
#`
and unsyntax operator
#,
behave for syntax objects like the quasiquote and unquote operators
for S-expressions, respectively.
The use of arbitrary computations in the result expression allows
macros to expand differently based on the results of actual computations
:
(define-syntax swap
(lambda (stx)
(syntax-case stx ()
[(swap a b)
(if (and (identifier? #'a)
(identifier? #'b))
#'(let ([tmp b])
(set! b a)
(set! a tmp))
(raise-syntax-error
'swap "expects identifiers"
stx))])))
In this example, if
swap
is not given identifiers as arguments, the
raise-syntax-error
function uses the lexical information in the
stx
syntax object to highlight the original
swap
expression in the
user's code.
Conditional matching can also be achieved using pattern guards,
which can inspect a matched expression and determine whether to
accept a match:
(define-syntax swap
(lambda (stx)
(syntax-case stx ()
[(swap a b)
(and (identifier? #'a)
(identifier? #'b))
#'(let ([tmp b])
(set! b a)
(set! a tmp))])))
The pattern guard is a new expression, inserted between the pattern
and the result expressions. A guarded match only succeeds if its
guard does not evaluate to false; when a guard fails, the pattern
matcher falls through to attempt the next pattern in the list.
6. Macros for Interpreters
In this section, we present a general technique for specializing embedded
interpreters with macros, and explain how we apply this
technique to the three embedded languages described in Section 3.
The technique can be summarized in the following steps:
1. Write the interpreter compositionally as a module of library
functions.
2. Replace the interpreter's main function with a macro that unfolds
the case dispatch on the input (the embedded program)
when it is known statically.
3. Default to the original function when the input is not known
at compile time.
Writing the interpreters compositionally serves two purposes. First,
by delegating the interpretation of the program constructs that make
up an embedded program to separate functions, it becomes possible
to share code between the original interpreter and the macro
that replaces it. This effectively limits the macro's responsibility
to a simple dispatch. Second, compositionality makes it easier to
guarantee that unfolding terminates, since the recursive macro calls
always operate on smaller terms.
6.1
Format Strings
The implementation of a string formatter involves a number of simple
library functions to convert each possible type of argument to
strings. Each formatting tag corresponds to one of these combinators
. For example, the
"~c"
tag corresponds to a combinator,
format/char
, which accepts a character and converts it to a string,
the
"~x"
tag corresponds to
format/hex
, which converts integers
to their hexadecimal representation, and so forth. The string formatter
then simply dispatches to these combinators based on the
content of the formatting string:
(define (format s . args)
(cond
[(string=? s "") ""]
[(string=? (substring s 0 2) "~c")
(string-append (format/char (car args))
(apply format
(substring s 2)
(cdr args)))]
etc
.
))
The interpreter accepts the formatting string
s
and, based on formatting
tags like
"~c"
that it finds, decomposes the string into a
series of applications of the corresponding combinators to successive
arguments of
format
(represented by
args
). It reassembles
the transformed pieces with the standard
string-append
function.
In order to specialize the
format
interpreter, we replace it with a
macro that re-uses its associated combinators:
(define (format/dynamic s . args)
as before
)
(define-syntax format
(lambda (stx)
(syntax-case stx ()
[(format s-exp a1 a2 ...)
(string? (syntax-object->datum #'s-exp))
(let ([s (syntax-object->datum #'s-exp)])
(cond
[(string=? s "") #'""]
[(string=? (substring s 0 2) "~c")
#`(string-append
(format/char a1)
(format #,(substring s 2) a2 ...))]
etc
.
))]
[(format s-exp a1 a2 ...)
#'(format/dynamic s-exp a1 a2 ...)]
[format
(identifier? #'format)
#'format/dynamic])))
The partial evaluation works by unfolding the interpreter's top-level
case dispatch on the program text. Rather than delaying the inspection
of the string to runtime, the macro precomputes the result of the
decomposition statically whenever the string is given as a literal.
We can identify literal strings through the use of a pattern guard.
More precisely, the macro can inspect the syntax object
s-exp
,
corresponding to
format
's first argument, and determine whether
it can be converted to a string via
syntax-object->datum
. When
the conversion succeeds, the pattern guard allows the match to succeed
, and partial evaluation proceeds.
After the macro expansion, the resulting program text consists of
the application of
string-append
to the calls to the library functions
, with no references to the interpreter:
(format "~c = 0x~x" c n)
⇒ (string-append (format/char c)
                 " = 0x"
                 (format/hex n))
In order for the replacement of the original function with a macro
to be unobservable, the macro must behave exactly like the original
function in all contexts. When
format
is applied to a dynamic
formatting string, the macro defaults to the original functional implementation
. Similarly, when
format
is passed as an argument to
a higher-order function, we use the technique of identifier macros
to refer to the original function.
4
6.2
Regular Expressions
One of PLT Scheme's regular expression engines uses the two-continuation
model of backtracking [1].
A regular expression
"matcher" is represented as a function that accepts a success continuation
and a failure continuation. When a matcher succeeds in
matching its input, it applies its success continuation to the accepted
input, and when it fails to match, it invokes its failure continuation.
This allows the interpretation of the alternation operator "
|
" to try
each alternate pattern sequentially: an alternation matcher tries to
match its first pattern with a failure continuation to try the second
pattern. Thus if the first pattern fails, the matcher invokes the failure
continuation, which tries the second pattern. Otherwise, the failure
continuation is disregarded and the matcher applies its success continuation
, which skips the second pattern and returns the result of
the first match.
Each of the regular expression constructions corresponds to a functional
combinator that produces a matcher.
These combinators
can express the standard operators of regular expressions: success
, failure, alternation, concatenation, and repetition (i.e., Kleene
star).
There is also a
submatch
combinator for the parenthesized
subpatterns in the original regular expression. A successful
regexp-match
returns a list with the entire matched string followed
by each submatch corresponding to a parenthesized subpattern
. Any subpattern that does not match corresponds to an entry of
false (
#f
) in the result list. For example, the following successful
4
The case of
set!
is not critical since, in PLT Scheme, imported
module references cannot be the target of an assignment.
match contains a failed submatch:

(regexp-match "a((b)|(c))" "ac")  →  (list "ac" "c" #f "c")
Regardless of the contents of the second argument, there is always
exactly one element in the result list for each parenthesized subpattern
in the regular expression. The
submatch
operator accomplishes
this by wrapping a given matcher with continuations that
add either the result of a successful match or false to a list of indexed
submatches accumulated during the match. The initial (success
) continuation for
regexp-match
sorts the accumulated list of
indexed submatches, adding false entries for all submatches that
were never reached because of backtracking.
Partial evaluation of the regular expression library works by unfolding
the definitions of the combinators as well as the contents of
the initial continuation. Each application of a combinator gets replaced
by an application of a copy of the body of the combinator's
definition.
5
The recursive code that constructs the result list in the
success continuation gets expanded into an explicit chain of
cons
expressions:
(regexp-match "a((b)|(c))" input)
⇒ ((build-matcher input)
   (lambda (subs)
     (cons (lookup subs 0)
           (cons (lookup subs 1)
                 (cons (lookup subs 2)
                       (cons (lookup subs 3) null)))))
   (lambda () #f))
Since the size of the result list is known, it is possible to unfold
recursive definitions, such as the initial continuation that constructs
the match result, to make the structure of the result explicit.
Finally, in the cases where the embedded program is not known statically
, or when
regexp-match
is used in non-application contexts,
the macro expands to the original functional definition.
6.3
SchemeQL
The SchemeQL language differs from the other examples in that its
programs are not embedded as strings but rather as special forms
recognized by a library of macros. This means that for queries
that select from a fixed set of columns, the length of cursor rows
is always known statically; the column names are specified as a
sequence of identifiers in the syntax of the query form.
Just as the interpreters for the string-based embedded programs
perform a case dispatch on the contents of program strings, the
SchemeQL macros dispatch on the shape of the query expressions.
The cases where partial evaluation is possible can be captured by
inserting additional rules into the original library's macros.
Partial evaluation of SchemeQL queries uses the same technique as
for the regular expression library: the recursive function that constructs
a cursor row is unfolded into an explicit chain of
cons
expressions
. Since we know the length of the cursor row statically,
the unfolding is guaranteed to terminate.
5 It is convenient to define the Kleene star operator recursively
by p* = (p p*) | ε. However, this non-compositional definition leads
to an infinite macro expansion, so the macro must carefully avoid
unfolding such a definition.
Since the SchemeQL library is implemented as macros, there is no
need to capture the cases where the query forms are used in non-application
contexts. Adding special cases to the existing macro
does not affect its set of allowable contexts. Similarly, the cases
where the row length is not known statically are already handled by
the existing SchemeQL macros.
7. Static Analysis for Scheme
MrFlow's value flow analysis is an extension of an ordinary set-based
closure analysis like Palsberg's [22]. For every expression in
a program, MrFlow statically computes a conservative approximation
of the set of values to which the expression might evaluate at
runtime. From a given expression it creates a graph that simulates
the flow of values inside the expression. The analysis simulates
evaluation by propagating abstract values in this graph until reaching
a fixed point. From the set of abstract values that propagate to
a given node, the analysis reconstructs a type that is then displayed
to the user through DrScheme's graphical interface.
Extensions to the basic analysis include, among other things: analyzing
functions that can take any number of arguments, analyzing
assignments to variables (
set!
), and analyzing generative data
structure definitions. MrFlow also supports all the primitives defined
in R5RS [17]. The vast majority of these primitives are defined
using a special, type-like language embedded inside the analyzer
. For a given primitive, the corresponding type translates to
a graph that simulates the primitive's internal flows. The analysis
then proceeds just like for any other expression. The few remaining
primitives need special handling because of their imperative nature
(
set-car!
or
vector-fill!
) and are analyzed in an ad-hoc manner
.
By default, MrFlow analyzes the
format
primitive based on the
following pseudo-type description:
(string top *-> string)
The
*
in the
*->
constructor means that the primitive is a function
that can take any number of arguments as input beyond the ones
explicitly specified. In the present case, the function must receive
a string as its first argument, followed by any number of arguments
of any type (represented by the pseudo-type
top
), and returns a
string. Given such a description, the only errors MrFlow detects are
when the primitive is given something other than a string as first
argument, or if it is given no argument at all.
After partial evaluation, the application of
format
is replaced by
calls to its individual library functions such as
format/char
and
format/hex
. These functions have respectively the pseudo-types
(char -> string)
and
(integer -> string)
Using this more precise information, MrFlow can detect arguments
to the original
format
call that have the wrong type. Checking that
the
format
primitive receives the right number of arguments for
a given formatting string happens during partial evaluation, so the
analyzer never sees arity errors in the expanded code.
Since DrScheme's syntax object system keeps track of program
terms through the macro expansions [11], MrFlow is then able to
trace detected errors back to the original guilty terms in the user's
program and flag them graphically. Arrows representing the flow
of values can also be displayed interactively in terms of the original
program, allowing the user to track in the program the sources of
the values that triggered the errors. In essence, the only requirement
for MrFlow to analyze the partially evaluated code of
format
is to
specify the pseudo-types for the library functions introduced by the
transformations, like
format/char
6
.
Similarly, it is enough to define pseudo-types for the functions
used in the partially evaluated form of SchemeQL's
query
to have
MrFlow automatically compute precise results without any further
modifications.
The partial evaluation for regular expressions is more challenging.
Consider the example from Section 1:
(let ([r (regexp-match
"http://([a-z.]*)/([a-z]*)/" line)])
(if r
(process-url (third r) (dispatch (second r)))
(log-error)))
After the call to
regexp-match
, the variable
r
can be either a list
of three elements or false. Based on its conservative pseudo-type
specification for
regexp-match
, MrFlow computes that
r
can be
either a list of unknown length or false. This in turn triggers two
errors for each of the
second
and
third
primitives: one error because
the primitive might be applied to false when it expected a list,
and one error because it might be applied to a list that is too short.
The second kind of false positives can be removed by partially evaluating
regexp-match
to make the structure of the result more explicit
to MrFlow, as described in Section 6.2. The analysis then
determines that the primitive returns either a list of three elements
or false and in turn checks that
second
and
third
are applied to a
list with enough elements.
Still, the possible return values of
regexp-match
may contain
false. Indeed, false will be the value returned at runtime if the line
given to
regexp-match
does not match the pattern. The programmer
has to test for such a condition explicitly before processing
the result any further. The only way for MrFlow not to show a false
positive for
second
and
third
, because of the presence of this false
value, is to make the analysis aware of the dependency between the
test of
r
and the two branches of the
if
-expression. This form of
flow-sensitive analysis for
if
-expressions is difficult to implement
in general since there is no bound to the complexity of the tested expression
. In practice, however, an appreciable proportion of these
tests are simple enough that an ad-hoc solution is sufficient.
In the case where the test is simply a variable reference it is enough
to create two corresponding ghost variables, one for each branch
of the
if
, establish filtering flows between the variable
r
and the
two ghost variables, and make sure each ghost variable binds the
r
variable references in its respective branch of the
if
-expression.
The filtering flows prevent the false abstract value from flowing into
the then branch of the
if
-expression and prevent everything but the
false value from flowing into the else branch. Only the combination
of this flow sensitivity for
if
-expressions with the partial evaluation
of
regexp-match
gives analysis results with no false positives.
Footnote 6: Specifying such pseudo-types will not even be necessary once MrFlow knows how to analyze PLT Scheme contracts. This is the subject of a forthcoming paper.
Once flow-sensitive analysis of if-expressions is added and pseudo-type descriptions of the necessary primitives are provided to the analysis, partial evaluation makes all the false positives described in Section 3 disappear, as we illustrate in the next section.
Improvement of Static Analysis
Partially evaluating format eliminates the possibility of runtime arity errors, since the macro transformations can statically check such invariants. It also allows MrFlow to detect type errors that it could not detect before, since the corresponding invariants were described only in the embedded formatting language. These invariants are now explicit at the Scheme level in the transformed program through the use of simpler primitives like format/char or format/integer. Figure 5 shows the same program as in Figure 2, but after applying partial evaluation. The format primitive is now blamed for two type errors that before could be found only at runtime. The error messages show that the user simply gave the arguments n and c in the wrong order.
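As a rough illustration of this unfolding (in Python, with stand-in names mirroring format/char and format/integer and an invented formatting string, since the exact program of Figure 2 is not reproduced here): once a format call is specialized with respect to its formatting string, it becomes straight-line code whose argument types are visible to any checker of the host language.

def format_char(c):
    # stand-in for the paper's format/char primitive: accepts only a character
    if not (isinstance(c, str) and len(c) == 1):
        raise TypeError("format/char expects a character, got %r" % (c,))
    return c

def format_integer(n):
    # stand-in for format/integer: accepts only an integer
    if not isinstance(n, int) or isinstance(n, bool):
        raise TypeError("format/integer expects an integer, got %r" % (n,))
    return str(n)

# A call such as (format "~c is ~d" c n), after partial evaluation with respect
# to the (hypothetical) string "~c is ~d": the string is gone, and both the
# arity and the argument types are now explicit.
def format_c_is_d(c, n):
    return format_char(c) + " is " + format_integer(n)

print(format_c_is_d("x", 42))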
Similarly, specializing the regular expression engine with respect to a pattern eliminates false positives. The length of the list returned by regexp-match cannot be directly computed by the analysis since that information is hidden inside the regular expression pattern. As a result, the applications of second and third in Figure 3 are flagged as potential runtime errors (we have omitted the fairly large error messages from the figure). After specialization, the structure of the value returned by regexp-match is exposed to the analysis and MrFlow can then prove that if regexp-match returns a list, it must contain three elements. The false positives for second and third disappear in Figure 6.
Of course, regexp-match can also return false at runtime, and the analysis correctly predicts this regardless of whether partial evaluation is used or not. Adding flow sensitivity for if-expressions as described in Section 7 removes these last spurious errors in Figure 6.
Partial evaluation now allows the precise analysis of SchemeQL queries as well. Figure 7 shows the precise analysis of the same program as in Figure 4, this time after partial evaluation. As with regexp-match, the analysis previously computed that cursor-car could return a list of any length, and therefore flagged the call to third as a potential runtime error. This call is now free of spurious errors since the partial evaluation exposes enough structure of the list returned by cursor-car that MrFlow can compute its exact length and verify that third cannot fail at runtime.
While the results computed by the analysis become more precise,
partially evaluating the interpreters for any of the three embedded
languages we use in this paper results in code that is bigger than the
original program. Bigger code in turn means that analyses will take
more time to complete. There is therefore a trade-off between precision
and efficiency of the analyses. We intend to turn that trade-off
into a user option in MrFlow. The user might also exercise full
control over which embedded languages are partially evaluated and
where by using either the functional or macro versions of the embedded
languages' interpreters, switching between the two through
the judicious use of a module system, for example [11].
Note that partial evaluation does not always benefit all analyses. In the regexp-match example from Figure 6, spurious errors disappear because MrFlow has been able to prove that the list r is of length three and therefore that applying the primitives second or third to r cannot fail. If the analysis were a Hindley-Milner-like type system, though, no difference would be seen whether partial evaluation were used or not. Indeed, while such a type system could statically prove that the arguments given to second or third are lists, it would not attempt to prove that they are lists of the required length, and a runtime test would still be required. Using partial evaluation to expose such a property to the analysis would therefore be useless. Simply put, making invariants from embedded programs explicit in the host language only matters if the system analyzing the host language cares about those invariants.
Figure 5. Precise analysis of the format primitive.
Figure 6. Precise analysis of regexp-match.
Figure 7. Precise analysis of a SchemeQL query.
This does not mean partial evaluation is always useless when used in conjunction with a Hindley-Milner type system, though. Partially evaluating format, for example, would allow the type system to verify that the formatting string agrees with the types of the remaining arguments. This is in contrast to the ad-hoc solution used in OCaml [19] to type check the printf primitive, or the use of dependent types in the case of Cayenne [4].
Related Work
Our work is analogous to designing type-safe embedded languages such as the one for printf [21, 4]. Both problems involve determining static information about programs based on the values of embedded programs. In some cases, designers of typed languages simply extend the host language to include specific embedded languages. The OCaml language, for example, contains a special library for printf [19] and uses of printf are type-checked in an ad-hoc manner. Similarly, the GCC compiler for the C language uses ad-hoc checking to find errors in printf format strings. Danvy [7] and Hinze [14] suggest implementations of printf in ML and Haskell, respectively, that obviate the need for dependent types by recasting the library in terms of individual combinators. In our system, those individual combinators are automatically introduced during macro expansion. The C++ language [26] likewise avoids the problem of checking invariants for printf by breaking its functionality into smaller operations that do not require the use of an embedded formatting language.
A work more closely related to ours is the Cayenne language [4].
Augustsson uses a form of partial evaluation to specialize dependent
types into regular Haskell-like types that can then be used by
the type system to check the user's program. Our macro system
uses macro-expansion time computation to specialize expressions
so that the subsequent flow analysis can compute precise value flow
results. Augustsson's dependent type system uses computation performed
at type-checking time to specialize dependent types so that
the rest of the type checking can compute precise type information.
The specialization is done in his system through the use of type-computing
functions that are specified by the user and evaluated by
the type system.
The main difference is that his system is used to compute specialized types and verify that the program is safe. Once the original program has been typed it is just compiled as-is with type checking turned off. This means that in the case of format, for example, the formatting string is processed twice: once at type checking time to prove the safety of the program, and once again at run time to compute the actual result. Our system is used to compute specialized expressions. This means that the evaluation of format's string needs to be done only once. Once specialized, the same program can either be run or analyzed to prove its safety. In both cases the format string will not have to be reprocessed since it has been completely replaced by more specialized code.
Another difference is that in our system, non-specialized programs
are still valid programs that can be analyzed, proved safe, and run
(though the result of the analysis will probably be more conservative
than when analyzing the corresponding partially evaluated
program, so proving safety might be more difficult). This is not
possible in Cayenne since programs with dependent types cannot
be run without going through the partial evaluation phase first.
Much work has gone into optimization of embedded languages.
Hudak [15], Elliott et al [8], Backhouse [5], Christensen [6], and
Veldhuizen [27] all discuss the use of partial evaluation to improve
the efficiency of embedded languages, although none makes the
connection between partial evaluation and static analysis. In Backhouse's thesis he discusses the need to improve error checking for
embedded languages, but he erroneously concludes that "syntactic
analyses cannot be used due to the embedded nature of domain-specific
embedded languages."
The Lisp programming language ([25], Section 8.4) provides for
"compiler macros" that programmers can use to create optimized
versions of existing functions. The compiler is not required to use
them, though. To our knowledge, there is no literature showing
how to use these compiler macros to improve the results of static
analyses. Lisp also has support for inlining functions, which might
help monovariant analyses by duplicating the code of a function at
all its call sites, thereby simulating polyvariant analyses.
Bigloo [23] is a Scheme compiler that routinely implements embedded
languages via macros and thus probably provides some of
the benefits presented in this paper to the compiler's internal analyses
. The compiler has a switch to "enable optimization by macro
expansion," though there does not seem to be any documentation or
literature describing the exact effect of using that switch.
Conclusion
Programs in embedded languages contain invariants that are not automatically
enforced by their host language. We have shown that
using macros to partially evaluate interpreters of little languages
embedded in Scheme with respect to their input programs can recapture
these invariants and convey them to a flow analysis. Because
it is based on macros, this technique does not require any
ad-hoc modification of either interpreters or analyses and is thus
readily available to programmers. This makes it a sweet spot in
the programming complexity versus precision landscape of program
analysis. We intend to investigate the relationship between
macros and other program analyses in a similar manner.
Acknowledgments
We thank Matthias Felleisen, Mitchell Wand, and Kenichi Asai for
the discussions that led to this work and for their helpful feedback
. Thanks to Matthew Flatt for his help with the presentation
of Scheme macros. Thanks to Dale Vaillancourt for proofreading
the paper and to Ryan Culpepper for his macrological wizardry.
References
[1] H. Abelson and G. J. Sussman. The Structure and Interpretation
of Computer Programs. MIT Press, Cambridge, MA,
1985.
[2] A. Aiken. Introduction to set constraint-based program analysis
. Science of Computer Programming, 35:79-111, 1999.
[3] K. Arnold, J. Gosling, and D. Holmes. The Java Programming
Language. Addison-Wesley, 3d edition, 2000.
[4] L. Augustsson. Cayenne--a language with dependent types.
In Proceedings of the third ACM SIGPLAN international conference
on Functional programming, pages 239250. ACM
Press, 1998.
[5] K. Backhouse. Abstract Interpretation of Domain-Specific
Embedded Languages. PhD thesis, Oxford University, 2002.
[6] N. H. Christensen. Domain-specific languages in software development
and the relation to partial evaluation. PhD thesis
, DIKU, Dept. of Computer Science, University of Copenhagen
, Universitetsparken 1, DK-2100 Copenhagen East,
Denmark, July 2003.
[7] O. Danvy. Functional unparsing. Journal of Functional Programming
, 8(6):621625, 1998.
[8] C. Elliott, S. Finne, and O. de Moor. Compiling embedded
languages. In SAIG, pages 927, 2000.
[9] R. B. Findler, J. Clements, C. Flanagan, M. Flatt, S. Krishnamurthi, P. Steckler, and M. Felleisen. DrScheme: A programming environment for Scheme. Journal of Functional Programming, 12(2):159-182, March 2002.
[10] C. Flanagan and M. Felleisen. Componential set-based analysis
. ACM Trans. on Programming Languages and Systems,
21(2):369415, Feb. 1999.
[11] M. Flatt. Composable and compilable macros: you want it
when? In Proceedings of the seventh ACM SIGPLAN international
conference on Functional programming, pages 7283.
ACM Press, 2002.
[12] P. Graunke, S. Krishnamurthi, S. V. D. Hoeven, and
M. Felleisen.
Programming the web with high-level programming
languages. In Programming Languages and Systems
, 10th European Symposium on Programming, ESOP
2001, Proceedings, volume 2028 of Lecture Notes in Computer
Science, pages 122136, Berlin, Heidelberg, and New
York, 2001. Springer-Verlag.
[13] N. Heintze.
Set Based Program Analysis.
PhD thesis,
Carnegie-Mellon Univ., Pittsburgh, PA, Oct. 1992.
[14] R. Hinze. Formatting: a class act. Journal of Functional
Programming, 13(5):935944, 2003.
[15] P. Hudak. Modular domain specific languages and tools. In
Proceedings of Fifth International Conference on Software
Reuse, pages 134142, June 1998.
[16] S. N. Kamin. Research on domain-specific embedded languages
and program generators. In R. Cleaveland, M. Mis-love
, and P. Mulry, editors, Electronic Notes in Theoretical
Computer Science, volume 14. Elsevier, 2000.
[17] R. Kelsey, W. Clinger, and J. Rees (editors). Revised^5 report on the algorithmic language Scheme. Higher-Order and Symbolic Computation, 11(1):7-104, August 1998. Also appeared in SIGPLAN Notices 33:9, September 1998.
[18] B. W. Kernighan and D. M. Ritchie. The C programming language
. Prentice Hall Press, 1988.
[19] X. Leroy. The Objective Caml System, release 3.07, 2003.
http://caml.inria.fr/ocaml/htmlman
.
[20] P. Meunier.
http://www.plt-scheme.org/software/
mrflow
.
[21] M. Neubauer, P. Thiemann, M. Gasbichler, and M. Sperber.
Functional logic overloading. In Proceedings of the 29th ACM
SIGPLAN-SIGACT symposium on Principles of programming
languages, pages 233244. ACM Press, 2002.
[22] J. Palsberg. Closure analysis in constraint form. Proc. ACM
Trans. on Programming Languages and Systems, 17(1):47
62, Jan. 1995.
[23] M. Serrano and P. Weis. Bigloo: A portable and optimizing
compiler for strict functional languages. In Static Analysis
Symposium, pages 366381, 1995.
[24] O. Shivers. A universal scripting framework, or Lambda: the
ultimate "little language". In Proceedings of the Second Asian
Computing Science Conference on Concurrency and Parallelism
, Programming, Networking, and Security, pages 254
265. Springer-Verlag, 1996.
[25] G. L. Steele. COMMON LISP: the language. Digital Press, 12
Crosby Drive, Bedford, MA 01730, USA, 1984. With contributions
by Scott E. Fahlman and Richard P. Gabriel and David
A. Moon and Daniel L. Weinreb.
[26] B. Stroustrup. The C++ Programming Language, Third Edition
. Addison-Wesley Longman Publishing Co., Inc., 1997.
[27] T. L. Veldhuizen. C++ templates as partial evaluation. In Partial
Evaluation and Semantic-Based Program Manipulation,
pages 1318, 1999.
[28] N. Welsh, F. Solsona, and I. Glover.
SchemeUnit and
SchemeQL: Two little languages. In Proceedings of the Third
Workshop on Scheme and Functional Programming, 2002.
| macros;interpreter;value flow analysis;flow analysis;set-based analysis;partial evaluation;embedded language;Partial evaluation;regular expression;embedded languages;Scheme |
108 | IncSpan: Incremental Mining of Sequential Patterns in Large Database | Many real life sequence databases grow incrementally. It is undesirable to mine sequential patterns from scratch each time when a small set of sequences grow, or when some new sequences are added into the database. Incremental algorithm should be developed for sequential pattern mining so that mining can be adapted to incremental database updates . However, it is nontrivial to mine sequential patterns incrementally, especially when the existing sequences grow incrementally because such growth may lead to the generation of many new patterns due to the interactions of the growing subsequences with the original ones. In this study, we develop an efficient algorithm, IncSpan, for incremental mining of sequential patterns, by exploring some interesting properties. Our performance study shows that IncSpan outperforms some previously proposed incremental algorithms as well as a non-incremental one with a wide margin. | INTRODUCTION
Sequential pattern mining is an important and active research
topic in data mining [1, 5, 4, 8, 13, 2], with broad
applications, such as customer shopping transaction analysis
, mining web logs, mining DNA sequences, etc.
There have been quite a few sequential pattern or closed
sequential pattern mining algorithms proposed in the previous
work, such as [10, 8, 13, 2, 12, 11], that mine frequent
subsequences from a large sequence database efficiently. These
algorithms work in a one-time fashion: mine the entire
database and obtain the set of results. However, in many
applications, databases are updated incrementally. For example
, customer shopping transaction database is growing
daily due to the appending of newly purchased items for existing
customers for their subsequent purchases and/or insertion
of new shopping sequences for new customers. Other
examples include Weather sequences and patient treatment
sequences which grow incrementally with time. The existing
sequential mining algorithms are not suitable for handling
this situation because the result mined from the old
database is no longer valid on the updated database, and it
is intolerably inefficient to mine the updated databases from
scratch.
There are two kinds of database updates in applications:
(1) inserting new sequences (denoted as INSERT), and (2)
appending new itemsets/items to the existing sequences (denoted
as APPEND). A real application may contain both.
It is easier to handle the first case: INSERT. An important
property of INSERT is that a frequent sequence in DB' = DB ∪ db must be frequent in either DB or db (or both). If a sequence is infrequent in both DB and db, it cannot be frequent in DB', as shown in Figure 1. This
property is similar to that of frequent patterns, which has
been used in incremental frequent pattern mining [3, 9, 14].
Such incremental frequent pattern mining algorithms can be
easily extended to handle sequential pattern mining in the
case of INSERT.
It is far trickier to handle the second case, APPEND, than
the first one. This is because not only the appended items
may generate new locally frequent sequences in db, but
also that locally infrequent sequences may contribute their
occurrence count to the same infrequent sequences in the
original database to produce frequent ones. For example,
in the appended database in Figure 1, suppose |DB| = 1000 and |db| = 20, min_sup = 10%. Suppose a sequence s is infrequent in DB with 99 occurrences (sup = 9.9%). In addition, it is also infrequent in db with only 1 occurrence (sup = 5%).
Figure 1: Examples in INSERT and APPEND database.
Although s is infrequent in both DB and db, it becomes frequent in DB' with 100 occurrences. This
problem complicates the incremental mining since one cannot
ignore the infrequent sequences in db, but there are
an exponential number of infrequent sequences even in a
small db and checking them against the set of infrequent
sequences in DB will be very costly.
When the database is updated with a combination of INSERT
and APPEND, we can treat INSERT as a special case
of APPEND treating the inserted sequences as appended
transactions to an empty sequence in the original database.
Then this problem is reduced to APPEND. Therefore, we
focus on the APPEND case in the following discussion.
In this paper, an efficient algorithm, called
IncSpan, is
developed, for incremental mining over multiple database
increments. Several novel ideas are introduced in the algorithm
development: (1) maintaining a set of "almost frequent
" sequences as the candidates in the updated database,
which has several nice properties and leads to efficient techniques
, and (2) two optimization techniques, reverse pattern
matching and shared projection, are designed to improve the
performance. Reverse pattern matching is used for matching
a sequential pattern in a sequence and prune some search
space. Shared projection is designed to reduce the number
of database projections for some sequences which share a
common prefix. Our performance study shows that
IncSpan
is efficient and scalable.
The remaining of the paper is organized as follows. Section
2introduces the basic concepts related to incremental
sequential pattern mining. Section 3 presents the idea of
buffering patterns, several properties of this technique and
the associated method. Section 4 formulates the
IncSpan algorithm
with two optimization techniques. We report and
analyze performance study in Section 5, introduce related
work in Section 6. We conclude our study in Section 7.
PRELIMINARY CONCEPTS
Let I = {i_1, i_2, ..., i_k} be a set of all items. A subset of I is called an itemset. A sequence s = <t_1, t_2, ..., t_m> (t_i ⊆ I) is an ordered list of itemsets. The size, |s|, of a sequence is the number of itemsets in the sequence. The length, l(s), is the total number of items in the sequence, i.e., l(s) = Σ_i |t_i|. A sequence α = <a_1, a_2, ..., a_m> is a subsequence of another sequence β = <b_1, b_2, ..., b_n>, denoted as α ⊑ β (if α ≠ β, written as α ⊏ β), if and only if there exist integers i_1, i_2, ..., i_m such that 1 ≤ i_1 < i_2 < ... < i_m ≤ n and a_1 ⊆ b_{i_1}, a_2 ⊆ b_{i_2}, ..., and a_m ⊆ b_{i_m}.
A sequence database, D = {s_1, s_2, ..., s_n}, is a set of sequences. The support of a sequence α in D is the number of sequences in D which contain α, support(α) = |{s | s ∈ D and α ⊑ s}|. Given a minimum support threshold, min_sup, a sequence is frequent if its support is no less than min_sup; given a factor µ ≤ 1, a sequence is semi-frequent if its support is less than min_sup but no less than µ·min_sup; a sequence is infrequent if its support is less than µ·min_sup. The set of frequent sequential patterns, FS, includes all the frequent sequences; the set of semi-frequent sequential patterns, SFS, includes all the semi-frequent sequences.
EXAMPLE 1. The second column of Table 1 is a sample sequence database D. If min_sup = 3, FS = {<(a)>:4, <(b)>:3, <(d)>:4, <(b)(d)>:3}.
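The following sketch (in Python; the representation of itemsets as sets and all identifiers are ours) directly implements the containment test and the support count defined above, using the original part of the sample database from Table 1 below.

def is_subsequence(alpha, beta):
    # alpha, beta: sequences represented as lists of itemsets (Python sets).
    # Greedy left-to-right embedding implements the containment definition.
    i = 0
    for itemset in beta:
        if i < len(alpha) and alpha[i] <= itemset:
            i += 1
    return i == len(alpha)

def support(seq, database):
    return sum(1 for s in database if is_subsequence(seq, s))

# Original part of the sample database D (see Table 1).
D = [
    [{"a"}, {"h"}],
    [{"e", "g"}],
    [{"a"}, {"b"}, {"d"}],
    [{"b"}, {"d", "f"}, {"a"}, {"b"}],
    [{"a"}, {"d"}],
    [{"b", "e"}, {"d"}],
]
print(support([{"b"}, {"d"}], D))   # 3 -> <(b)(d)> is frequent for min_sup = 3
print(support([{"a"}, {"b"}], D))   # 2 -> semi-frequent when µ = 0.6 (2 >= 1.8)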
Seq ID.  Original Part    Appended Part
0        (a)(h)           (c)
1        (eg)             (a)(bce)
2        (a)(b)(d)        (ck)(l)
3        (b)(df)(a)(b)
4        (a)(d)
5        (be)(d)
Table 1: A Sample Sequence Database D and the Appended Part
Given a sequence s = <t_1, ..., t_m> and another sequence s_a = <t'_1, ..., t'_n>, s' = s · s_a denotes the concatenation of s with s_a. s' is called an appended sequence of s, denoted as s' ⊒_a s. If s_a is empty, s' = s, denoted as s' =_a s. An appended sequence database D' of a sequence database D is one such that (1) for every s'_i ∈ D' there is an s_j ∈ D such that s'_i ⊒_a s_j or s'_i =_a s_j, and (2) for every s_i ∈ D there is an s'_j ∈ D' such that s'_j ⊒_a s_i or s'_j =_a s_i. We denote LDB = {s'_i | s'_i ∈ D' and s'_i ⊒_a s_j}, i.e., LDB is the set of sequences in D' which have been appended with items/itemsets. We denote ODB = {s_i | s_i ∈ D and s'_j ⊒_a s_i}, i.e., ODB is the set of sequences in D which are appended with items/itemsets in D'. We denote the set of frequent sequences in D' as FS'.
EXAMPLE 2. The third column of Table 1 is the appended part of the original database. If min_sup = 3, FS' = {<(a)>:5, <(b)>:4, <(d)>:4, <(b)(d)>:3, <(c)>:3, <(a)(b)>:3, <(a)(c)>:3}.
A sequential pattern tree T is a tree that represents
the set of frequent subsequences in a database. Each node
p in T has a tag labelled with s or i. s means the node is a
starting item in an itemset; i means the node is an intermediate
item in an itemset. Each node p has a support value
which represents the support of the subsequence starting
from the root of T and ending at the node p.
Problem Statement. Given a sequence database D, a min_sup threshold, the set of frequent subsequences FS in D, and an appended sequence database D' of D, the problem of incremental sequential pattern mining is to mine the set of frequent subsequences FS' in D' based on FS, instead of mining D' from scratch.
BUFFER SEMI-FREQUENT PATTERNS
In this section, we present the idea of buffering semi-frequent
patterns, study its properties, and design solutions
of how to incrementally mine and maintain FS and SFS.
Figure 2: The Sequential Pattern Tree of FS and SFS in D.
3.1
Buffering Semi-frequent Patterns
We buffer semi-frequent patterns, which can be considered
as a statistics-based approach.
The technique is to lower min_sup by a buffer ratio µ ≤ 1 and keep a set SFS in the original database D. This is because, since the
sequences in SF S are "almost frequent ", most of the frequent
subsequences in the appended database will either
come from SF S or they are already frequent in the original
database. With a minor update to the original database,
it is expected that only a small fraction of subsequences
which were infrequent previously would become frequent.
This is based on the assumption that updates to the original
database have a uniform probability distribution on items.
It is expected that most of the frequent subsequences introduced
by the updated part of the database would come from
the SF S. The SF S forms a kind of boundary (or "buffer
zone") between the frequent subsequences and infrequent
subsequences.
EXAMPLE 3. Given the database D in Example 1, min_sup = 3 and µ = 0.6. The sequential pattern tree T representing FS and SFS in D is shown in Figure 2. FS is shown in solid lines and SFS in dashed lines.
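A small sketch of the resulting three-way classification follows (in Python; the function and the support values are illustrative, taken from Figure 2 and the discussion of case (5), and are not part of IncSpan itself).

def classify(sup, min_sup, mu):
    # Buffer-zone classification used throughout this section.
    if sup >= min_sup:
        return "frequent (FS)"
    if sup >= mu * min_sup:
        return "semi-frequent (SFS)"
    return "infrequent"

min_sup, mu = 3, 0.6
for pattern, sup in {"<(a)>": 4, "<(a)(b)>": 2, "<(a)(c)>": 0}.items():
    print(pattern, classify(sup, min_sup, mu))
# <(a)>     frequent (FS)
# <(a)(b)>  semi-frequent (SFS)   (2 >= 0.6 * 3)
# <(a)(c)>  infrequent            (only becomes frequent after the update, see case (5))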
When the database D is updated to D', we have to check LDB to update the support of every sequence in FS and SFS.
There are several possibilities:
1. A pattern which is frequent in D is still frequent in D';
2. A pattern which is semi-frequent in D becomes frequent in D';
3. A pattern which is semi-frequent in D is still semi-frequent in D';
4. The appended database db brings new items;
5. A pattern which is infrequent in D becomes frequent in D';
6. A pattern which is infrequent in D becomes semi-frequent in D'.
Cases (1)-(3) are trivial since we already keep the needed information. We now consider cases (4)-(6).
Case (4): The appended database db brings new items. For example, in the database D', (c) is a new item brought by db. It does not appear in D.
Property: An item which does not appear in D and is brought by db has no information in FS or SFS.
Solution: Scan the database LDB for single items. For a new item or an originally infrequent item in D, if it becomes frequent or semi-frequent, insert it into FS' or SFS'. Then use the new frequent item as prefix to construct a projected database and discover frequent and semi-frequent sequences recursively. For a frequent or semi-frequent item in D, update its support.
Case (5): A pattern which is infrequent in D becomes frequent in D'. For example, in the database D', <(a)(c)> is an example of case (5): it is infrequent in D and becomes frequent in D'. We do not keep <(a)(c)> in FS or SFS, but we have the information of its prefix <(a)>.
Property: If an infrequent sequence p' in D becomes frequent in D', all of its prefix subsequences must also be frequent in D'. Then at least one of its prefix subsequences p is in FS.
Solution: Start from its frequent prefix p in FS, construct the p-projected database, and we will discover p'.
Formally stated, given a frequent pattern p in D', we want to discover whether there is any pattern p' with p as prefix where p' was infrequent in D but is frequent in D'. A sequence p' which changes from infrequent to frequent must have sup_db(p') > (1 − µ)·min_sup.
We claim that if a frequent pattern p has support in LDB sup_LDB(p) ≥ (1 − µ)·min_sup, it is possible that some subsequences with p as prefix will change from infrequent to frequent. If sup_LDB(p) < (1 − µ)·min_sup, we can safely prune the search with prefix p.
Theorem 1. For a frequent pattern p, if its support in LDB satisfies sup_LDB(p) < (1 − µ)·min_sup, then there is no sequence p' having p as prefix that changes from infrequent in D to frequent in D'.
Proof: p' was infrequent in D, so
sup_D(p') < µ·min_sup.  (1)
If sup_LDB(p) < (1 − µ)·min_sup, then
sup_LDB(p') ≤ sup_LDB(p) < (1 − µ)·min_sup.
Since sup_LDB(p') = sup_ODB(p') + sup_db(p'), we have
sup_db(p') ≤ sup_LDB(p') < (1 − µ)·min_sup.  (2)
Since sup_D'(p') = sup_D(p') + sup_db(p'), combining (1) and (2) we have sup_D'(p') < min_sup. So p' cannot be frequent in D'.
Therefore, if a pattern p has support in LDB sup_LDB(p) < (1 − µ)·min_sup, we can prune the search with prefix p. Otherwise, if sup_LDB(p) ≥ (1 − µ)·min_sup, it is possible that some sequences with p as prefix will change from infrequent to frequent. In this case, we have to project the whole database D' using p as prefix. If |LDB| is small or µ is small, there are very few patterns that have sup_LDB(p) ≥ (1 − µ)·min_sup, making the number of projections small.
In our example, sup_LDB(<(a)>) = 3 > (1 − 0.6) × 3, so we have to do the projection with <(a)> as prefix, and we discover <(a)(c)>:3, which was infrequent in D. For another example, sup_LDB(<(d)>) = 1 < (1 − 0.6) × 3, so there is no sequence with <(d)> as prefix which changes from infrequent to frequent, and we can prune the search on it.
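The projection test itself is a one-line check; the sketch below (in Python, with illustrative names) reproduces the two decisions made in the example above.

def must_project(sup_in_ldb, min_sup, mu):
    # Theorem 1: only a frequent prefix p with sup_LDB(p) >= (1 - mu) * min_sup
    # can have extensions that moved from infrequent to frequent, so only then
    # is the p-projected database of D' worth building.
    return sup_in_ldb >= (1 - mu) * min_sup

min_sup, mu = 3, 0.6
print(must_project(3, min_sup, mu))   # <(a)>: 3 >= 1.2 -> project on (a)
print(must_project(1, min_sup, mu))   # <(d)>: 1 <  1.2 -> prune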
Theorem 1 provides an effective bound to decide whether it is necessary to project a database, and it is essential to guarantee that the result is complete.
We can see from the projection condition, sup_LDB(p) ≥ (1 − µ)·min_sup, that the smaller µ is, the larger the buffer we keep and the fewer database projections the algorithm needs. The choice of µ is heuristic. If µ is too high, then the buffer is small and we have to do a lot of database projections to discover sequences outside of the buffer. If µ is set very low, we will keep many subsequences in the buffer, but mining the buffering patterns using µ·min_sup would be much more inefficient than with min_sup. We will show this tradeoff through experiments in Section 5.
Figure 3: The Sequential Pattern Tree of FS and SFS in D'.
Case (6): A pattern which is infrequent in D becomes semi-frequent in D'. For example, in the database D', <(be)> is an example of case (6): it is infrequent in D and becomes semi-frequent in D'.
Property: If an infrequent sequence p' becomes semi-frequent in D', all of its prefix subsequences must be either frequent or semi-frequent. Then at least one of its prefix subsequences, p, is in FS or SFS.
Solution: Start from its prefix p in FS or SFS and construct the p-projected database; we will discover p'.
Formally stated, given a pattern p, we want to discover whether there is any pattern p' with p as prefix where p' was infrequent but is semi-frequent in D'. If the prefix p is in FS or SFS, construct the p-projected database and we will discover p' in it. Therefore, for any pattern p' that goes from infrequent to semi-frequent, if its prefix is in FS or SFS, p' can be discovered. In our example, for the frequent pattern <(b)>, we do the projection on <(b)> and get a semi-frequent pattern <(be)>:2 which was infrequent in D.
We show in Figure 3 the sequential pattern tree T including FS and SFS after the database is updated to D'. It can be compared with Figure 2 to see how the database update affects FS and SFS.
INCSPAN DESIGN AND IMPLEMENTATION
In this section, we formulate the
IncSpan algorithm which
exploits the technique of buffering semi-frequent patterns.
We first present the algorithm outline and then introduce
two optimization techniques.
4.1
IncSpan: Algorithm Outline
Given an original database D, an appended database D', a threshold min_sup, a buffer ratio µ, a set of frequent sequences FS and a set of semi-frequent sequences SFS, we want to discover the set of frequent sequences FS' in D'.
Step 1: Scan LDB for single items, as shown in case (4).
Step 2: Check every pattern in FS and SFS in LDB to adjust the support of those patterns.
Step 2.1: If a pattern becomes frequent, add it to FS'. Then check whether it meets the projection condition. If so, use it as prefix to project the database, as shown in case (5).
Step 2.2: If a pattern is semi-frequent, add it to SFS'.
The algorithm is given in Figure 4.
4.2
Reverse Pattern Matching
Algorithm. IncSpan(D', min_sup, µ, FS, SFS)
Input: An appended database D', min_sup, buffer ratio µ, the frequent sequences FS in D, and the semi-frequent sequences SFS in D.
Output: FS' and SFS'.
1: FS' = ∅, SFS' = ∅
2: Scan LDB for single items;
3: Add new frequent items into FS';
4: Add new semi-frequent items into SFS';
5: for each new item i in FS' do
6:    PrefixSpan(i, D'|i, min_sup, FS', SFS');
7: for every pattern p in FS or SFS do
8:    check sup(p);
9:    if sup(p) = sup_D(p) + sup_db(p) ≥ min_sup
10:       insert(FS', p);
11:       if sup_LDB(p) ≥ (1 − µ)·min_sup
12:          PrefixSpan(p, D'|p, min_sup, FS', SFS');
13:   else
14:       insert(SFS', p);
15: return;
Figure 4: The IncSpan algorithm.
Figure 5: Reverse Pattern Matching.
Reverse pattern matching is a novel optimization technique. It matches a sequential pattern against a sequence from the end towards the front. This is used to check the support increase of a sequential pattern in LDB. Since the
appended items are always at the end part of the original
sequence, reverse pattern matching would be more efficient
than projection from the front.
Given an original sequence s, an appended sequence s' = s · s_a, and a sequential pattern p, we want to check whether the support of p will be increased by appending s_a to s.
There are two possibilities:
1. If the last item of p is not supported by s_a, then whether p is supported by s or not, sup(p) is not increased when s grows to s'. Therefore, as long as we do not find the last item of p in s_a, we can prune the search.
2. If the last item of p is supported by s_a, we have to check whether s' supports p. We check this by continuing in the reverse direction. If p is not supported by s', we can prune the search and keep sup(p) unchanged. Otherwise we have to check whether s supports p. If s supports p, keep sup(p) unchanged; otherwise, increase sup(p) by 1.
Figure 5 shows the reverse pattern matching.
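The following sketch (in Python, with our own list-of-sets representation of sequences) checks the two possibilities in the order just described; for brevity it falls back on an ordinary forward containment test once the cheap last-element check has passed, rather than performing a true right-to-left scan as IncSpan does.

def is_subsequence(alpha, beta):
    i = 0
    for itemset in beta:
        if i < len(alpha) and alpha[i] <= itemset:
            i += 1
    return i == len(alpha)

def support_increased(pattern, original, appended_part):
    # Does appending s_a to s raise sup(pattern) for this one sequence?
    last = pattern[-1]
    if not any(last <= itemset for itemset in appended_part):
        return False                                    # possibility 1: prune
    if not is_subsequence(pattern, original + appended_part):
        return False                                    # s' does not support it
    return not is_subsequence(pattern, original)        # no gain if s already did

# Sequence 2 of Table 1: s = <(a)(b)(d)> appended with <(ck)(l)>
s, s_a = [{"a"}, {"b"}, {"d"}], [{"c", "k"}, {"l"}]
print(support_increased([{"a"}, {"c"}], s, s_a))   # True: <(a)(c)> gains support
print(support_increased([{"a"}, {"b"}], s, s_a))   # False: (b) never occurs in s_a
# Sequence 1 of Table 1: s = <(eg)> appended with <(a)(bce)>
print(support_increased([{"e"}], [{"e", "g"}], [{"a"}, {"b", "c", "e"}]))
# False: s already supported <(e)>, so its count was included before the update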
4.3
Shared Projection
Shared Projection is another optimization technique we
exploit. Suppose we have two sequences <(a)(b)(c)(d)> and <(a)(b)(c)(e)>, and we need to project the database using each as prefix. If we make the two database projections individually, we do not take advantage of the similarity between the two subsequences. Actually the two projected databases up to the subsequence <(a)(b)(c)>, i.e., D'|<(a)(b)(c)>, are the same. From D'|<(a)(b)(c)>, we do one more step of projection for items d and e respectively. Then we can share the projection for <(a)(b)(c)>.
To use shared projection, when we detect some subsequence that needs a database projection, we do not do the
projection immediately. Instead we label it. After finishing
checking and labelling all the sequences, we do the projection
by traversing the sequential pattern tree. Tree is natural
for this task because the same subsequences are represented
using shared branches.
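A simplified sketch of the sharing idea follows (in Python; sequences are reduced to single-item elements and the prefixes are assumed to differ only in their last item, which is a simplification of the tree-based sharing actually used by IncSpan).

def project(database, prefix):
    # Suffixes of all sequences that contain `prefix` (single-item elements).
    projected = []
    for seq in database:
        i = 0
        for pos, item in enumerate(seq):
            if i < len(prefix) and item == prefix[i]:
                i += 1
                if i == len(prefix):
                    projected.append(seq[pos + 1:])
                    break
    return projected

def shared_projection(database, prefixes):
    # Project once for the common prefix, then once more per distinct last item,
    # instead of projecting the whole database for every prefix separately.
    common = prefixes[0][:-1]           # assumes prefixes differ only in the last item
    shared = project(database, common)
    return {tuple(p): project(shared, p[-1:]) for p in prefixes}

db = [list("abcde"), list("abcxe"), list("zabce")]
result = shared_projection(db, [list("abcd"), list("abce")])
print(result[("a", "b", "c", "d")])   # [['e']]       -> one supporting sequence
print(result[("a", "b", "c", "e")])   # [[], [], []]  -> three supporting sequences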
PERFORMANCE STUDY
A comprehensive performance study has been conducted in our experiments. We use a synthetic data generator provided by IBM; it can be retrieved from http://www.almaden.ibm.com/cs/quest. The details about the parameter settings can be found in [1].
All experiments are done on a PowerEdge 6600 server with a 2.8GHz Xeon and 4GB memory. The algorithms are written in C++ and compiled using g++ with -O3 optimization. We compare three algorithms: IncSpan, an incremental mining algorithm ISM [7], and a non-incremental algorithm PrefixSpan [8].
Figure 6 (a) shows the running time of three algorithms
when min sup changes on the dataset D10C10T2.5N10, 0.5%
of which has been appended with transactions.
IncSpan is
the fastest, outperforming
PrefixSpan by a factor of 5 or
more, and outperforming
ISM even more. ISM even cannot
finish within a time limit when the support is low.
Figure 6 (b) shows how the three algorithms are affected when we vary the percentage of sequences in the database that have been updated. The dataset we use is D10C10T2.5N10, min_sup = 1%, and the buffer ratio µ = 0.8. The curves show that the time increases as the incremental portion of the database increases. When the incremental part exceeds 5% of the database, PrefixSpan outperforms IncSpan. This is because if the incremental part is not very small, the number of patterns brought by it increases, creating a lot of overhead for IncSpan to handle. In this case, mining from scratch is better. But IncSpan still outperforms ISM by a wide margin no matter what the parameter is.
Figure 6 (c) shows the memory usage of IncSpan and ISM. The database is D10C10T2.5N10, min_sup varies from 0.4% to 1.5%, and the buffer ratio µ = 0.8. Memory usage of IncSpan increases linearly as min_sup decreases, while memory used by ISM increases dramatically. This is because the number of sequences in the negative border increases sharply as min_sup decreases. This figure verifies that the negative border is a memory-consuming approach.
Figure 7 (a) shows how the IncSpan algorithm is affected by varying the buffer ratio µ. The dataset is D10C10T2.5N10, 5% of which is appended with new transactions. We use PrefixSpan as a baseline. As we have discussed before, if we set µ very high, we will have fewer patterns in SFS, so the support update for sequences in SFS on LDB will be more efficient. However, since we keep less information in SFS, we may need to spend more time on projecting databases. In the extreme case µ = 1, SFS becomes empty. On the other hand, if we set µ very low, we will have a large number of sequences in SFS, which makes the support update stage very slow. The experiments show that µ = 0.8 achieves the best performance.
Figure 7 (b) shows the performance of
IncSpan to handle
multiple (5 updates in this case) database updates. Each
time the database is updated, we run
PrefixSpan to mine
from scratch. We can see from the figure, as the increments
accumulate, the time for incremental mining increases, but
increase is very small and the incremental mining still outperforms
mining from scratch by a factor of 4 or 5. This
experiment shows that
IncSpan can really handle multiple
database updates without significant performance degrading
.
Figure 7 (c) shows the scalability of the three algorithms
by varying the size of database. The number of sequences in
databases vary from 10,000 to 100,000. 5% of each database
is updated. min sup=0.8%. It shows that all three algorithms
scale well with the database size.
RELATED WORK
In sequential pattern mining, efficient algorithms like
GSP
[10],
SPADE [13], PrefixSpan [8], and SPAM [2] were developed
.
Partition [9] and FUP [3] are two algorithms which promote
partitioning the database, mining local frequent itemsets
, and then consolidating the global frequent itemsets by
cross check. This is based on the fact that a frequent itemset must
be frequent in at least one local database. If a database is
updated with INSERT, we can use this idea to do the incremental
mining. Zhang et al. [14] developed two algorithms
for incremental mining sequential patterns when sequences
are inserted into or deleted from the original database.
Parthasarathy et al. [7] developed an incremental mining
algorithm
ISM by maintaining a sequence lattice of an old
database. The sequence lattice includes all the frequent sequences
and all the sequences in the negative border. However
, there are some disadvantages for using negative border:
(1) The combined number of sequences in the frequent set
and the negative border is huge; (2) The negative border
is generated based on the structural relation between sequences
. However, these sequences do not necessarily have
high support. Therefore, using negative border is very time
and memory consuming.
Masseglia et al. [6] developed another incremental mining
algorithm ISE using candidate generate-and-test approach.
The problem of this algorithm is (1) the candidate set can
be very huge, which makes the test-phase very slow; and
(2) its level-wise working manner requires multiple scans of
the whole database. This is very costly, especially when the
sequences are long.
CONCLUSIONS
In this paper, we investigated the issues for incremental
mining of sequential patterns in large databases and
addressed the inefficiency problem of mining the appended
database from scratch. We proposed an algorithm
IncSpan
by exploring several novel techniques to balance efficiency
and reusability.
IncSpan outperforms the non-incremental
method (using
PrefixSpan) and a previously proposed incremental
mining algorithm
ISM by a wide margin. It is a
promising algorithm to solve practical problems with many
real applications.
There are many interesting research problems related to
IncSpan that should be pursued further. For example, incremental
mining of closed sequential patterns, structured
patterns in databases and/or data streams are interesting problems for future research.
Figure 6: Performance study: (a) varying min_sup; (b) varying percentage of updated sequences; (c) memory usage under varied min_sup.
Figure 7: Performance study: (a) varying buffer ratio µ; (b) multiple increments of database; (c) varying number of sequences (in 1000s) in DB.
REFERENCES
[1] R. Agrawal and R. Srikant. Mining sequential
patterns. In Proc. 1995 Int. Conf. Data Engineering
(ICDE'95), pages 3-14, March 1995.
[2] J. Ayres, J. E. Gehrke, T. Yiu, and J. Flannick. Sequential pattern mining using bitmaps. In Proc. 2002 ACM SIGKDD Int. Conf. Knowledge Discovery in Databases (KDD'02), July 2002.
[3] D. Cheung, J. Han, V. Ng, and C. Wong. Maintenance
of discovered association rules in large databases: An
incremental update technique. In Proc. of the 12th Int.
Conf. on Data Engineering (ICDE'96), March 1996.
[4] M. Garofalakis, R. Rastogi, and K. Shim. SPIRIT:
Sequential pattern mining with regular expression
constraints. In Proc. 1999 Int. Conf. Very Large Data
Bases (VLDB'99), pages 223-234, Sept 1999.
[5] H. Mannila, H. Toivonen, and A. I. Verkamo.
Discovering frequent episodes in sequences. In Proc.
1995 Int. Conf. Knowledge Discovery and Data
Mining (KDD'95), pages 210-215, Aug 1995.
[6] F. Masseglia, P. Poncelet, and M. Teisseire.
Incremental mining of sequential patterns in large
databases. Data Knowl. Eng., 46(1):97-121, 2003.
[7] S. Parthasarathy, M. Zaki, M. Ogihara, and
S. Dwarkadas. Incremental and interactive sequence
mining. In Proc. of the 8th Int. Conf. on Information
and Knowledge Management (CIKM'99), Nov 1999.
[8] J. Pei, J. Han, B. Mortazavi-Asl, H. Pinto, Q. Chen,
U. Dayal, and M.-C. Hsu. PrefixSpan: Mining
sequential patterns efficiently by prefix-projected
pattern growth. In Proc. 2001 Int. Conf. Data
Engineering (ICDE'01), pages 215-224, April 2001.
[9] A. Savasere, E. Omiecinski, and S. Navathe. An
efficient algorithm for mining association rules in large
databases. In Proc. 1995 Int. Conf. Very Large Data
Bases (VLDB'95), Sept 1995.
[10] R. Srikant and R. Agrawal. Mining sequential
patterns: Generalizations and performance
improvements. In Proc. of the 5th Int. Conf. on
Extending Database Technology (EDBT'96), Mar 1996.
[11] J. Wang and J. Han. Bide: Efficient mining of
frequent closed sequences. In Proc. of 2004 Int. Conf.
on Data Engineering (ICDE'04), March 2004.
[12] X. Yan, J. Han, and R. Afshar. CloSpan: Mining
closed sequential patterns in large datasets. In Proc.
2003 SIAM Int.Conf. on Data Mining (SDM'03), May
2003.
[13] M. Zaki. SPADE: An efficient algorithm for mining
frequent sequences. Machine Learning, 40:31-60, 2001.
[14] M. Zhang, B. Kao, D. Cheung, and C. Yip. Efficient
algorithms for incremental updates of frequent
sequences. In Proc. of Pacific-Asia Conf. on
Knowledge Discovery and Data Mining (PAKDD'02),
May 2002.
| database updates;sequence database;shared projection;frequent itemsets;optimization;buffering pattern;sequential pattern;buffering patterns;reverse pattern matching;incremental mining |
109 | Index Structures and Algorithms for Querying Distributed RDF Repositories | A technical infrastructure for storing, querying and managing RDF data is a key element in the current semantic web development. Systems like Jena, Sesame or the ICS-FORTH RDF Suite are widely used for building semantic web applications. Currently, none of these systems supports the integrated querying of distributed RDF repositories. We consider this a major shortcoming since the semantic web is distributed by nature. In this paper we present an architecture for querying distributed RDF repositories by extending the existing Sesame system. We discuss the implications of our architecture and propose an index structure as well as algorithms for query processing and optimization in such a distributed context. | MOTIVATION
The need for handling multiple sources of knowledge and information
is quite obvious in the context of semantic web applications.
First of all we have the duality of schema and information content
where multiple information sources can adhere to the same schema.
Further, the re-use, extension and combination of multiple schema
files is considered to be common practice on the semantic web [7].
Despite the inherently distributed nature of the semantic web, most
current RDF infrastructures (for example [4]) store information locally
as a single knowledge repository, i.e., RDF models from remote
sources are replicated locally and merged into a single model.
Distribution is virtually retained through the use of namespaces to
distinguish between different models. We argue that many interesting
applications on the semantic web would benefit from or even
require an RDF infrastructure that supports real distribution of information
sources that can be accessed from a single point. Beyond
the argument of conceptual adequacy, there are a number of technical
reasons for real distribution in the spirit of distributed databases:
Freshness:
The commonly used approach of using a local copy
of a remote source suffers from the problem of changing information
. Directly using the remote source frees us from the need of
managing change as we are always working with the original.
Flexibility:
Keeping different sources separate from each other
provides us with a greater flexibility concerning the addition and
removal of sources. In the distributed setting, we only have to adjust
the corresponding system parameters.
In many cases, it will even be unavoidable to adopt a distributed
architecture, for example in scenarios in which the data is not owned
by the person querying it. In this case, it will often not be permitted
to copy the data. More and more information providers, however,
create interfaces that can be used to query the information. The
same holds for cases where the information sources are too large to
just create a single model containing all the information, but they
still can be queried using a special interface (Musicbrainz is an example
of this case). Further, we might want to include sources that
are not available in RDF, but that can be wrapped to produce query
results in RDF format. A typical example is the use of a free-text
index as one source of information. Sometimes there is not even
a fixed model that could be stored in RDF, because the result of a
query is only calculated at runtime (Google, for instance, provides a
programming interface that could be wrapped into an RDF source).
In all these scenarios, we are forced to access external information
sources from an RDF infrastructure without being able to create a
local copy of the information we want to query. On the semantic
web, we almost always want to combine such external sources with
each other and with additional schema knowledge. This confirms
the need to consider an RDF infrastructure that deals with information
sources that are actually distributed across different locations.
In this paper, we address the problem of integrated access to distributed
RDF repositories from a practical point of view. In particular
, starting from a real-life use case where we are considering
a number of distributed sources that contain research results in the
form of publications, we take the existing RDF storage and retrieval
system Sesame and describe how the architecture and the query
processing methods of the system have to be extended in order to
move to a distributed setting.
The paper is structured as follows. In Section 2 we present an
extension of the Sesame architecture to multiple, distributed repositories
and discuss basic assumptions and implications of the architecture
. Section 3 presents source index hierarchies as suitable
mechanisms to support the localization of relevant data during
query processing. In Section 4 we introduce a cost model for processing
queries in the distributed architecture, and show its use in
optimizing query execution as a basis for the two-phase optimization
heuristics for join ordering. Section 5 reviews previous work
on index structures for object-oriented data bases. It also summarizes
related work on query optimization particularly focusing on
the join ordering problem. We conclude with a discussion of open
problems and future work.
INTEGRATION ARCHITECTURE
Before discussing the technical aspects of distributed data and
knowledge access, we need to put our work in context by introducing
the specific integration architecture we have to deal with. This
architecture limits the possible ways of accessing and processing
data, and thereby provides a basis for defining some requirements
for our approach. It is important to note that our work is based on
an existing RDF storage and retrieval system, which more or less
predefines the architectural choices we made. In this section, we
describe an extension of the Sesame system [4] to distributed data
sources.
The Sesame architecture is flexible enough to allow a straightforward
extension to a setting where we have to deal with multiple
distributed RDF repositories. In the current setting, queries, expressed
in Sesame's query language SeRQL, are directly passed
from the query engine to an RDF API (SAIL) that abstracts from
the specific implementation of the repository. In the distributed setting
, we have several repositories that can be implemented in different
ways. In order to abstract from this technical heterogeneity,
it is useful to introduce RDF API implementations on top of each
repository, making them accessible in the same way.
The specific problem of a distributed architecture is now that information
relevant to a query might be distributed over the different
sources. This requires to locate relevant information, retrieve it, and
combine the individual answers. For this purpose, we introduce a
new component between the query parser and the actual SAILs the
mediator SAIL (see Figure 1).
In this work, we assume that local repositories are implemented
using database systems that translate queries posed to the RDF API
into SQL queries and use the database functionality to evaluate
them (compare [5]). This assumption has an important influence on
the design of the distributed query processing: the database engines
underlying the individual repositories have the opportunity to perform
local optimization on the SQL queries they pose to the data.
Therefore we do not have to perform optimizations on sub-queries
that are to be forwarded to a single source, because the repository
will deal with it. Our task is rather to determine which part of the
overall query has to be sent to which repository.
In the remainder of this paper, we describe an approach for querying
distributed RDF sources that addresses these requirements implied
by the adopted architecture. We focus our attention on index
structures and algorithms implemented in the mediator SAIL.
Figure 1: Distribution Architecture.
INDEX STRUCTURES
As discussed above, in order to be able to make use of the optimization
mechanisms of the database engines underlying the different
repositories, we have to forward entire queries to the different
repositories. In the case of multiple external models, we can further
speed up the process by only pushing down queries to information
sources we can expect to contain an answer. The ultimate goal
is to push down to a repository exactly that part of a more complex
query for which a repository contains an answer. This part
can range from a single statement template to the entire query. We
can have a situation where a subset of the query result can directly
be extracted from one source, and the rest has to be extracted and
combined from different sources. This situation is illustrated in the
following example.
EXAMPLE 1. Consider the case where we want to extract information
about research results. This information is scattered across
a variety of data sources containing information about publications
, projects, patents, etc. In order to access these sources in
a uniform way, we use the OntoWeb research ontology. Figure 2
shows parts of this ontology.
Figure 2: Part of the OntoWeb Ontology.
Suppose we now want to ask for the titles of articles by employees
of organizations that have projects in the area "RDF". The
path expression of a corresponding SeRQL query would be the following (for the sake of readability we omit namespaces whenever they do not play a technical role):
{A} title {T};
author {W} affiliation {O}
carriesOut {P} topic {'RDF'}
Now, let's assume that we have three information sources I, P, and Q. I is a publication database that contains information about articles, titles, authors and their affiliations. P is a project database with information about industrial projects, topics, and organizations. Finally, Q is a research portal that contains all of the above information for academic research.
If we want to answer the query above completely we need all three information sources. By pushing down the entire query to Q we get results for academic research. In order to also retrieve the information for industrial research, we need to split up the query, push the fragment
{A} title {T}; author {W} affiliation {O}
to I, the fragment
{O} carriesOut {P} topic {'RDF'}
to P, and join the results based on the identity of the organization.
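The plan sketched in this example can be illustrated as follows (in Python; the binding tables and all identifiers are invented for illustration and do not correspond to Sesame's actual API): each fragment is answered by a single repository, and the partial results are joined on the shared variable O.

# Bindings returned by pushing each fragment to one repository (invented data).
results_I = [   # {A} title {T}; author {W} affiliation {O}   answered by I
    {"A": "art1", "T": "Indexing RDF", "W": "alice", "O": "orgA"},
    {"A": "art2", "T": "Query Planning", "W": "bob", "O": "orgB"},
]
results_P = [   # {O} carriesOut {P} topic {'RDF'}            answered by P
    {"O": "orgA", "P": "proj7"},
]

def join(left, right, shared_vars):
    # Nested-loop join of two sets of variable bindings on their shared variables.
    out = []
    for l in left:
        for r in right:
            if all(l[v] == r[v] for v in shared_vars):
                merged = dict(l)
                merged.update(r)
                out.append(merged)
    return out

for row in join(results_I, results_P, ["O"]):
    print(row["T"])   # titles of articles whose organization has an "RDF" project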
The example illustrates the need for sophisticated indexing structures
for deciding which part of a query to direct to which information
source. On the one hand we need to index complex query
patterns in order to be able to push down larger queries to a source;
on the other hand we also need to be able to identify sub-queries
needed for retrieving partial results from individual sources.
In order to solve this problem we build upon existing work on
indexing complex object models using join indices [14]. The idea
of join indices is to create additional database tables that explic-itly
contain the result of a join over a specific property. At runtime,
rather than computing a join, the system just accesses the join index
relation which is less computationally expensive. The idea of join
indices has been adapted to deal with complex object models. The
resulting index structure is a join index hierarchy [21]. The most
general element in the hierarchy is an index table for elements connected
by a certain path
p
HXXn I
of length
n. Every following level
contains all the paths of a particular length from 2 paths of length
n I at the second level of the hierarchy to n paths of length 1 at the
bottom of the hierarchy. In the following, we show how the notion
of join index hierarchies can be adapted to deal with the problem of
determining information sources that contain results for a particular
sub-query.
3.1
Source Index Hierarchies
The majority of work in the area of object oriented databases is
focused on indexing schema-based paths in complex object models.
We can make use of this work by relating it to the graph-based interpretation
of RDF models. More specifically, every RDF model
can be seen as a graph where nodes correspond to resources and
edges to properties linking these resources. The result of a query to
such a model is a set of subgraphs corresponding to a path expression
. While a path expression does not necessarily describe a single
path, it describes a tree that can be created by joining a set of paths.
Making use of this fact, we first decompose the path expression
into a set of expressions describing simple paths, then forward the
simpler path expressions to sources that contain the corresponding
information using a path-based index structure, and join retrieved
answers to create the result.
The problem with using path indices to select information sources is the fact that the information that makes up a path might be distributed across different information sources (compare Example 1). We therefore have to use an index structure that also contains information about sub-paths without losing the advantage of indexing complete paths. An index structure that combines these two characteristics is the join index hierarchy proposed in [21]. We therefore take their approach as a basis for defining a source index hierarchy.
DEFINITION 1 (SCHEMA PATH). Let G = ⟨N, E, L, s, t, l⟩ be the labelled graph of an RDF model, where N is a set of nodes, E a set of edges, L a set of labels, s, t : E → N and l : E → L. For every e ∈ E, we have s(e) = r_1, t(e) = r_2 and l(e) = l_e if and only if the model contains the triple (r_1, l_e, r_2). A path in G is a list of edges e_0, ..., e_{n-1} such that t(e_i) = s(e_{i+1}) for all i = 0, ..., n-2. Let p = e_0, ..., e_{n-1} be a path; the corresponding schema path is the list of labels l_0, ..., l_{n-1} such that l_i = l(e_i).
The definition establishes the notion of a path for RDF models.
We can now use path-based index structures and adapt them to the
task of locating path instances in different RDF models. The basic
structure we use for this purpose is an index table of sources that
contain instances of a certain path.
DEFINITION 2 (SOURCE INDEX). Let p be a schema path; a source index for p is a set of pairs (s_k, n_k) where s_k is an information source (in particular, an RDF model), the graph of s_k contains exactly n_k paths with schema path p, and n_k > 0.
A source index can be used to determine information sources that
contain instances of a particular schema path. If our query contains
the path
p, the corresponding source index provides us with a list of
information sources we have to forward the query to in order to get
results. The information about the number of instance paths can be
used to estimate communication costs and will be used for join ordering
(see Section 4). So far the index satisfies the requirement of
being able to list complete paths and push down the corresponding
queries to external sources. In order to be able to retrieve information
that is distributed across different sources, we have to extend
the structure based on the idea of a hierarchy of indices for arbitrary
sub-paths. The corresponding structure is defined as follows.
DEFINITION 3 (SOURCE INDEX HIERARCHY). Let p = l_0, ..., l_{n-1} be a schema path. A source index hierarchy for p is an n-tuple ⟨Π_n, ..., Π_1⟩ where Π_n is a source index for p, and Π_i (for i < n) is the set of all source indices for sub-paths of p with length i that have at least one entry.
The most suitable way to represent such index structure is a hierarchy
, where the source index of the indexed path is the root element
. The hierarchy is formed in such a way that the subpart rooted
at the source index for a path
p always contains source indices for
all sub-paths of
p. This property will later be used in the query
answering algorithm. Forming a lattice of source indices, a source
index hierarchy contains information about every possible schema
sub-path. Therefore we can locate all fragments of paths that might
be combined into a query result. At the same time, we can first
concentrate on complete path instances and successively investigate
smaller fragments using the knowledge about the existence of
longer paths. We illustrate this principle in the following example.
EXAMPLE 2. Let us reconsider the situation in Example 1. The schema path we want to index is given by the list (author, affiliation, carriesOut, topic). The source index hierarchy for this path therefore contains source indices for the paths
p_{0..3}: (author, affiliation, carriesOut, topic)
p_{0..2}: (author, affiliation, carriesOut), p_{1..3}: (affiliation, carriesOut, topic)
p_{0..1}: (author, affiliation), p_{1..2}: (affiliation, carriesOut), p_{2..3}: (carriesOut, topic)
p_0: (author), p_1: (affiliation), p_2: (carriesOut), p_3: (topic)
Starting from the longest path, we compare our query expression with the index (see Figure 3 for an example of index contents). We immediately get the information that Q contains results. Turning to sub-paths, we also find out that I contains results for the sub-path (author, affiliation) and P for the sub-path (carriesOut, topic) that we can join in order to compute results, because together both sub-paths make up the path we are looking for.
The source indices also contain information about the fact that Q contains results for all sub-paths of our target path. We still have to take this information into account, because in combination with fragments from other sources we might get additional results. However, we do not have to consider joining sub-paths from the same source, because these results are already covered by longer paths. In the example we see that P will return far fewer results than I (because there are fewer projects than publications). We can use this information to optimize the process of joining results.
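To make these definitions concrete, here is a minimal Python sketch (not part of the Sesame-based system described in this paper) that builds a source index hierarchy over toy sources represented as sets of RDF triples; the function names and the toy data, which reuse the source names I, P, and Q from Example 1, are purely illustrative.

from collections import defaultdict

def count_path_instances(triples, schema_path):
    # Count the instance paths in one source whose label sequence equals schema_path.
    by_label = defaultdict(list)
    for s, p, o in triples:
        by_label[p].append((s, o))
    # Start with the edges carrying the first label, then extend along matching nodes.
    current = list(by_label[schema_path[0]])
    for label in schema_path[1:]:
        current = [(start, o2) for (start, o1) in current
                   for (s2, o2) in by_label[label] if o1 == s2]
    return len(current)

def source_index_hierarchy(sources, schema_path):
    # Map every sub-path of schema_path to its source index {source name: n_k > 0}.
    n = len(schema_path)
    hierarchy = {}
    for length in range(n, 0, -1):
        for start in range(n - length + 1):
            sub = tuple(schema_path[start:start + length])
            index = {}
            for name, triples in sources.items():
                count = count_path_instances(triples, sub)
                if count > 0:
                    index[name] = count
            if index:
                hierarchy[sub] = index
    return hierarchy

sources = {
    "I": {("article1", "author", "alice"), ("alice", "affiliation", "org1")},
    "P": {("org1", "carriesOut", "proj1"), ("proj1", "topic", "RDF")},
    "Q": {("article2", "author", "bob"), ("bob", "affiliation", "org2"),
          ("org2", "carriesOut", "proj2"), ("proj2", "topic", "RDF")},
}
query_path = ["author", "affiliation", "carriesOut", "topic"]
for sub_path, index in source_index_hierarchy(sources, query_path).items():
    print(sub_path, index)

Running the sketch on the schema path of Example 2 reproduces the intended index contents: Q is listed for every sub-path, I only for (author, affiliation) and its 1-paths, and P only for (carriesOut, topic) and its 1-paths.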
A key issue connected with indexing information sources is the trade-off between required storage space and computational properties of index-based query processing. Compared to index structures used to speed up query processing within an information source, a source index is relatively small as it does not encode information about individual elements in a source. Therefore, the size of the index is independent of the size of the indexed information sources. The relevant parameters in our case are the number of sources s and the length of the schema path n. More specifically, in the worst case a source index hierarchy contains source indices for every sub-path of the indexed schema path (this is the case when all sources contain results for the complete schema path). As the number of all sub-paths of a path is Σ_{i=1}^{n} i = n(n+1)/2, the worst-case space complexity of a source index hierarchy is O(s · n^2). We conclude that the length of the indexed path is the significant parameter here.
3.2 Query Answering Algorithm
Using the notion of a source index hierarchy, we can now define a basic algorithm for answering queries using multiple sources of information. The task of this algorithm is to determine all possible combinations of sub-paths of the given query path. For each of these combinations, it then has to determine the sources containing results for the path fragments, retrieve these results, and join them into a result for the complete path. The main task is to guarantee that we indeed check all possible combinations of sub-paths for the query path. The easiest way of guaranteeing this is to use a simple tree-recursion algorithm that retrieves results for the complete path, then splits the original path, and joins the results of recursive calls for the sub-paths. In order to capture all possible splits this has to be done for every possible split point in the original path. The corresponding semi-formal algorithm is given below (Algorithm 1).
Algorithm 1 Compute Answers.
Require: A schema path p = l_0, ..., l_{n-1}
Require: A source index hierarchy h = ⟨Π_n, ..., Π_1⟩ for p
for all sources s_k in source index Π_n do
    ANSWERS := instances of schema path p in source s_k
    RESULT := RESULT ∪ ANSWERS
end for
if n ≥ 2 then
    for all i = 1 ... n-1 do
        p_{0..i-1} := l_0, ..., l_{i-1}
        p_{i..n-1} := l_i, ..., l_{n-1}
        h_{0..i-1} := sub-hierarchy of h rooted at the source index for p_{0..i-1}
        h_{i..n-1} := sub-hierarchy of h rooted at the source index for p_{i..n-1}
        res_1 := ComputeAnswers(p_{0..i-1}, h_{0..i-1})
        res_2 := ComputeAnswers(p_{i..n-1}, h_{i..n-1})
        RESULT := RESULT ∪ join(res_1, res_2)
    end for
end if
return RESULT
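The tree recursion of Algorithm 1 can be transcribed almost literally into Python. The sketch below is deliberately unoptimized, mirroring the algorithm as given; it assumes the source and hierarchy representations from the previous sketch, and the helper instances, which enumerates the node sequences realizing a schema path within one source, stands in for querying a remote repository.

from collections import defaultdict

def instances(triples, schema_path):
    # All node sequences in one source that realize the given schema path.
    by_label = defaultdict(list)
    for s, p, o in triples:
        by_label[p].append((s, o))
    seqs = [(s, o) for (s, o) in by_label[schema_path[0]]]
    for label in schema_path[1:]:
        seqs = [seq + (o,) for seq in seqs
                for (s, o) in by_label[label] if seq[-1] == s]
    return set(seqs)

def compute_answers(schema_path, hierarchy, sources):
    path = tuple(schema_path)
    result = set()
    # Answers for the complete path, from every source listed in its source index.
    for name in hierarchy.get(path, {}):
        result |= instances(sources[name], path)
    # Split the path at every position and join the recursively computed fragments
    # on the node they share.
    if len(path) >= 2:
        for i in range(1, len(path)):
            left = compute_answers(path[:i], hierarchy, sources)
            right = compute_answers(path[i:], hierarchy, sources)
            result |= {l + r[1:] for l in left for r in right if l[-1] == r[0]}
    return result

On the toy data above, compute_answers(query_path, source_index_hierarchy(sources, query_path), sources) returns the complete path instance contributed by Q together with the instance obtained by joining the (author, affiliation) fragment from I with the (carriesOut, topic) fragment from P.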
Note that Algorithm 1 is far from being optimal with respect to runtime performance. The straightforward recursion scheme does not take specific actions to prevent unnecessary work, nor does it select an optimal order for joining sub-paths. We can improve this situation by using knowledge about the information in the different sources and performing query optimization.
QUERY OPTIMIZATION
In the previous section we described a light-weight index structure for distributed RDF querying. Its main task is to index schema paths w.r.t. the underlying sources that contain them. Compared to instance-level indexing, our approach does not require creating and maintaining oversized indices since there are far fewer sources than there are instances. Instance indexing would not scale in the web environment and, as mentioned above, in many cases it would not even be applicable, e.g., when sources do not allow replication of their data (which is what instance indices essentially do). The downside of our approach, however, is that query answering without index support at the instance level is much more computationally intensive. Moreover, in the context of semantic web portal applications the queries are not man-entered anymore but rather generated by a portal's front-end (triggered by the user) and often exceed the size (especially the length of the path expression) which can be easily computed by using brute force. Therefore we focus in this section on query optimization as an important part of a distributed RDF query system. We try to avoid re-inventing the wheel and once again seek inspiration in the database field, making it applicable by "relationizing" the RDF model.
Figure 3: Source index hierarchy for the given query path.
Each single schema path p_i of length 1 (also called a 1-path) can be perceived as a relation with two attributes: the source vertex s(p_i) and the target vertex t(p_i). A schema path of length more
than 1 is modelled as a set of relations joined together by the identity
of the adjacent vertices, essentially representing a chain query
of joins as defined in Definition 4. This relational view over an
RDF graph offers the possibility to re-use the extensive research on
join optimization in databases, e.g. [1, 8, 9, 17, 20].
Taking into account the (distributed) RDF context of the join ordering
problem there are several specifics to note when devising
a good query plan. As in distributed databases, communication
costs significantly contribute to the overall cost of a query plan.
Since in our case the distribution is assumed to be realized via an
IP network with a variable bandwidth, the communications costs
are likely to contribute substantially to the overall processing costs,
which makes the minimization of data transmission across the network
very important. Unless the underlying sources provide join
capabilities, the data transmission cannot be largely reduced: all
(selected) bits of data from the sources are joined by the mediator
and hence must be transmitted via the network.
There may exist different dependencies (both structural and extensional) on the way the data is distributed. If the information
about such dependencies is available, it essentially enables the optimizer
to prune join combinations which cannot yield any results.
The existence of such dependencies can be (to some extent) computed/discovered prior to querying, during the initial integration
phase. Human insight is, however, often needed in order to avoid
false dependency conclusions, which could potentially influence
the completeness of query answering.
The performance and data statistics are both necessary for the
optimizer to make the right decision. In general, the more the optimizer
knows about the underlying sources and data, the better
optimized the query plan is. However, taking into account the autonomy
of the sources, the necessary statistics do not have to be
always available. We design our mediator to cope with incomplete
statistical information in such a way that the missing parameters are
estimated as being worse than those that are known (pessimistic approach
). Naturally, the performance of the optimizer is then lower
but it increases steadily when the estimations are made more realistic
based on the actual response from the underlying sources; this
is also known as optimizer calibration.
As indicated above, the computational capabilities of the underlying
sources may vary considerably. We distinguish between those
sources that can only retrieve the selected local data (pull up strategy
) and those that can perform joins of their local and incoming
external data (push down strategy), thus offering computational services
that could be used to achieve both a higher degree of parallelism
and smaller data transmission over the network, e.g., by applying
semi-join reductions [1]. At present, however, most sources
are capable only of selecting the desired data within their extent,
i.e., they do not offer the join capability. Therefore, further we focus
mainly on local optimization at the mediator's side.
For this purpose we need to perceive an RDF model as a set of
relations on which we can apply optimization results from the area
of relational databases. In this context the problem of join ordering
arises, when we want to compute the results for schema paths from
partial results obtained from different sources. Creating the result
for a schema path corresponds to the problem of computing the result of
a chain query as defined below:
DEFINITION 4 (CHAIN QUERY). Let p be a schema path composed from the 1-paths p_1, ..., p_n. The chain query of p is the n-join p_1 ⋈_{t(p_1)=s(p_2)} p_2 ⋈_{t(p_2)=s(p_3)} p_3 ⋈ ... ⋈ p_n, where s(p_i) and t(p_i) return the identity of a source and target node, respectively. As the join condition and attributes follow the same pattern for all joins in the chain query, we omit them whenever they are clear from the context.
In other words, to follow a path p of length 2 means performing a join between the two paths of length 1 which p is composed from. The problem of join optimization is to determine the right order in which the joins should be computed, such that the overall response time for computing the path instances is minimized. (If the sources also offer join capabilities, the problem is not only in which order but also where the joins should take place.)
Note that a chain query in Definition 4 does not include explicit joins, i.e., those specified in the WHERE clause or by assigning the same variable names along the path expression. When we append these explicit joins, the shape of the query usually changes from a linear chain to a query graph containing a circle or a star, making the join ordering problem NP-hard [15].
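As an illustration of the chain query (and not of the system's actual evaluation machinery), the following Python sketch treats each 1-path as a set of (source node, target node) pairs and evaluates the chain query by pairwise joins on the shared node; for brevity the join projects away the intermediate nodes, whereas a real evaluator would keep all variable bindings.

def join(left, right):
    # Join two path relations on the shared node: target of left = source of right.
    return {(s, t2) for (s, t1) in left for (s2, t2) in right if t1 == s2}

def chain_query(relations):
    # p1 join p2 join ... join pn, evaluated left to right.
    result = relations[0]
    for relation in relations[1:]:
        result = join(result, relation)
    return result

# 1-path relations for the schema path (author, affiliation, carriesOut, topic).
author      = {("article1", "alice"), ("article2", "bob")}
affiliation = {("alice", "org1"), ("bob", "org2")}
carries_out = {("org1", "proj1")}
topic       = {("proj1", "RDF")}

print(chain_query([author, affiliation, carries_out, topic]))
# prints {('article1', 'RDF')}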
4.1 Space Complexity
Disregarding the solutions obtained by the commutativity of joins,
each query execution plan can be associated with a sequence of
numbers that represents the order in which the relations are joined.
We refer to this sequence as the footprint of the execution plan.
EXAMPLE 3. For brevity reasons, assume the following name substitutions in the model introduced in Example 1: the concept names Article, Employee, Organization, Project, ResearchTopic become a, b, c, d, e, respectively; the property names author, affiliation, carriesOut, topic are substituted with 1, 2, 3, 4, respectively. Figure 4 presents two possible execution plans and their footprints.
Figure 4: Two possible query executions and their footprints.
If the order of the join operands also matters, i.e., the commutativity law is considered, the sequence of the operands of each join is recorded in the footprint as well. The solution space consists of the query plans (their footprints) that can be generated. We distinguish two cases: first the larger solution space of bushy trees and then its subset consisting of right-deep trees.
If we allow for an arbitrary order of joins, the resulting query plans are so-called bushy trees where the operands of a join can be either a base relation (i.e., that part of the path which can be retrieved directly from one source) or the result of a previous join. For a query with n joins there are n! possibilities of different query execution plans if we disregard the commutativity of joins and cross products. Note that in the case of bushy trees, there might be several footprints associated with one query tree. For instance, the bushy tree in Example 3 can be evaluated in different orders, yielding two more footprints: (2, 4, 1, 3) or (4, 2, 1, 3). In our current approach, these footprints would be equivalent w.r.t. the cost they represent. However, treating them independently allows us to also consider in the future semi-join optimization [1], where their cost might differ considerably.
If the commutativity of join is taken into account, there are C(2n, n) · n!/2^n different possibilities of ordering joins and their individual constituents [22]. However, in the case of memory-resident databases where all data fits in main memory, the possibilities generated by the commutativity law can be neglected for some join methods, as they mainly play a role in a cost model minimizing disk-memory operations; we discuss this issue further in Subsection 4.2. We adopt the memory-only strategy as in our context there are always only two
attributes per relation, both of them being URI references which,
when the namespace prefix is stored separately, yield a very small
size. Of course, the assumption we make here is that the Sesame
server is equipped with a sufficient amount of memory to accommodate
all intermediate tuples of relations appearing in the query.
A special case of a general execution plan is a so-called right-deep tree, which has the left-hand join operands consisting only of base relations. For a footprint that starts with the r-th join there are C(n-1, r-1) possibilities of finishing the joining sequence. Thus there are in total Σ_{i=0}^{n-1} C(n-1, i) = 2^{n-1} possibilities of different query execution plans (the number corresponds to the sum of the (n-1)-th line in Pascal's triangle). In this specially shaped query tree there exists an execution pipeline of length n-1 that allows both for easier parallelizing and for shortening the response time [8]. This property is very useful in the context of the WWW where many applications are built in a producer-consumer paradigm.
4.2 Cost Model
The main goal of query optimization is to reduce the computational
cost of processing the query both in terms of the transmission
cost and the cost of performing join operations on the retrieved result
fragments. In order to determine a good strategy for processing
a query, we have to be able to exactly determine the cost of a
query execution plan and to compare it to costs of alternative plans.
For this purpose, we capture the computational costs of alternative
query plans in a cost model that provides the basis for the optimization
algorithm that is discussed later.
As mentioned earlier, we adopt the memory-resident paradigm,
and the cost we are trying to minimize is equivalent to minimizing
the total execution time. There are two main factors that influence
the resulting cost in our model. First is the cost of data transmission
to the mediator, and second is the data processing cost.
DEFINITION 5 (TRANSMISSION COST). The transmission cost of the path instances of a schema path p from a source X to the mediator is modelled as C_p^X = C_init^X + |p| · length_p · ||s||_X · c^X, where C_init^X represents the cost of initiating the data transmission, |p| denotes the cardinality, length_p stands for the length of the schema path p, ||s||_X is the size of a URI at the source X, and c^X represents the transmission cost per data unit from X to the mediator. (Different sources may model URIs differently; however, we assume that at the mediator all URIs are represented in the same way.)
Since we apply all reducing operations (e.g., selections and projections
) prior to the data transmission phase, the data processing
mainly consists of join costs. The cost of a join operation is influenced by the cardinality of the two operands and the join method
which is utilized. As we already pointed out, there are no instance
indices at the mediator side that would allow us to use some join
"shortcuts". In the following we consider two join methods: a
nested loop join and a hash join both without additional indexing
support.
DEFINITION 6 (NESTED LOOP JOIN COST). The processing cost of a nested loop join of two relations p, r is defined as NJC_{p,r} = |p| · |r| · K(p, r), where |x| denotes the cardinality of the relation x and K(p, r) represents the cost of the identity comparison.
Note that the nested loop join allows for a more sophisticated
definition of object equality than a common URI comparison. In
particular, if necessary, the basic URI comparison can be complemented
by (recursive) comparisons of property values or mapping
look-ups. This offers room to address the issue of URI diversity
also known as the designation problem, when two different URIs
refer to the same real-life object.
DEFINITION 7 (HASH JOIN COST). The processing cost of a hash join of two relations p, r is defined as HJC_{p,r} = I · |p| + |r| · R · B, where |x| denotes the cardinality of the relation x, I represents the cost of inserting a path instance in the hash table (the building factor), R models the cost of retrieving a bucket from the hash table, and B stands for the average number of path instances in the bucket.
Unlike the previous join method, the hash join algorithm assumes
that the object equality can be determined by a simple URI
comparison, in other words that the URI references are consistent
across the sources. Another difference is that in the case of the
nested loop join for in-memory relations the join commutativity can
be neglected, as the query plan produced from another query plan
by the commutativity law will have exactly the same cost. However
, in the case of the hash join method the order of operands
influences the cost and thus the solution space must also include
those solutions produced by the commutativity law.
DEFINITION 8 (QUERY PLAN COST). The overall cost of a query plan consists of the sum of all communication costs and all join processing costs of the query tree: C = Σ_{i=1}^{n} C_{p_i} + C_⋈, where C_⋈ represents the join processing cost of the query tree and is computed as a sum of recurrent applications of the formula in Definition 6 or 7, depending on which join method is utilized. To compute the cardinality of non-base join arguments, a join selectivity is used. The join selectivity σ is defined as the ratio between the tuples retained by the join and those created by the Cartesian product: σ = |p ⋈ r| / |p × r|.
As it is not possible to determine the precise join selectivity before the query is evaluated, σ for each sub-path join is assumed to be estimated and available in the source index hierarchy. After the evaluation of each query the initial σ estimates are improved and made more realistic.
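The cost model of Definitions 5-8 fits in a few lines of Python. The sketch below is only illustrative: the parameter names, the left-to-right plan driver and the numeric values are assumptions of the sketch, and a real optimizer would obtain cardinalities and selectivities from the source index hierarchy and from source statistics.

def transmission_cost(cardinality, path_length, uri_size, c_init, c_per_unit):
    # Definition 5: C = C_init + |p| * length_p * ||s|| * c.
    return c_init + cardinality * path_length * uri_size * c_per_unit

def nested_loop_join_cost(card_p, card_r, compare_cost):
    # Definition 6: every pair of path instances is compared once.
    return card_p * card_r * compare_cost

def hash_join_cost(card_p, card_r, insert_cost, bucket_cost, bucket_size):
    # Definition 7: build a hash table on p, then probe it once per instance of r.
    return insert_cost * card_p + card_r * bucket_cost * bucket_size

def join_cardinality(card_p, card_r, selectivity):
    # Estimated |p join r| via the join selectivity sigma = |p join r| / |p x r|.
    return selectivity * card_p * card_r

def left_to_right_plan_cost(cards, selectivities,
                            insert_cost=1.0, bucket_cost=1.0, bucket_size=1.2):
    # Cost of joining the fragments in the given order with hash joins.
    cost, card = 0.0, cards[0]
    for next_card, sigma in zip(cards[1:], selectivities):
        cost += hash_join_cost(card, next_card, insert_cost, bucket_cost, bucket_size)
        card = join_cardinality(card, next_card, sigma)
    return cost

print(left_to_right_plan_cost([1000, 50, 200], [0.01, 0.005]))  # 1800.0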
4.3 Heuristics for join ordering
While the join ordering problem in the context of a linear/chain query can be solved in polynomial time [12], we have to take into account the more complex problem when explicit joins are also involved, which is proven to be NP-hard [15]. It is apparent that evaluating all possible join strategies to achieve the global optimum quickly becomes unfeasible for larger n. In these cases we have to rely on heuristics that compute a "good-enough" solution given the constraints. In fact, this is a common approach for optimizers in interactive systems. There, optimization is often about avoiding bad query plans in a very short time, rather than devoting a lot of precious CPU time to finding the optimal plan, especially since it is not uncommon that the optimal plan improves the heuristically obtained solutions only marginally.
Heuristics for the join ordering problem have been studied extensively in the database community. In this work we adopt the results of comparing different join ordering heuristics from [17]. Inspired by this survey, we chose to apply the two-phase optimization consisting of the iterative improvement (II) algorithm followed by the simulated annealing (SA) algorithm [20]. This combination performs very well on the class of queries we are interested in, both in the bushy and the right-deep tree solution space, and degrades gracefully under time constraints.
The II algorithm is a simple greedy heuristic which accepts any improvement on the cost function. The II randomly generates several initial solutions, taking them as starting points for a walk in the chosen solution space. The actual traversal is performed by applying a series of random moves from a predefined set. The cost function is evaluated for every such move, remembering the best solution so far. The main idea of this phase is to descend rapidly into several local minima, assuring the aforementioned graceful degradation. For each of the sub-optimal solutions, the second phase of the SA algorithm is applied. The task of the SA phase is to explore the "neighborhood" of a promising solution more thoroughly, hopefully lowering the cost.
Algorithm 2 Simulated annealing algorithm
Require: start solution startSolution
Require: start temperature startTempr
solution := startSolution
bestSolution := solution
tempr := startTempr
cost := Cost(bestSolution)
minCost := cost
repeat
    repeat
        newSolution := NEW(solution)
        newCost := Cost(newSolution)
        if newCost ≤ cost then
            solution := newSolution
            cost := newCost
        else if e^(-(newCost - cost)/tempr) ≥ RAND(0..1) then
            solution := newSolution
            cost := newCost
        end if
        if cost < minCost then
            bestSolution := solution
            minCost := cost
        end if
    until equilibrium reached
    DECREASE(tempr)
until frozen
return bestSolution
The pseudo-code of the SA phase is presented in Algorithm 2. It takes a starting point/solution from the II phase and, similarly to II, performs random moves from a predefined set, accepting all cost improvements. However, unlike II, the SA algorithm can also accept, with a certain probability, those moves that result in a solution with a higher cost than the current best solution. The probability of such an acceptance depends on the temperature of the system and the cost difference. The idea is that at the beginning the system is hot and more easily accepts moves yielding solutions with higher costs. However, as the temperature decreases the system becomes more stable, strongly preferring solutions with lower costs. The SA algorithm improves on the II heuristic by making the stop condition less prone to getting trapped in a local minimum: SA stops when the temperature drops below a certain threshold or when the best solution so far has not been improved in a number of consecutive temperature decrements; in both cases the system is considered frozen. There are two sets of moves: one for the bushy solution space and one for the right-deep solution space; for details we refer the reader to [20].
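For completeness, here is a compact Python sketch of the two-phase optimization: iterative improvement from several random starting footprints, followed by simulated annealing around the best local minimum. It is a simplification rather than the optimizer described here: the move set is a plain position swap, the equilibrium and frozen conditions are replaced by fixed iteration counts and a temperature threshold, and the toy cost function at the end exists only to make the sketch runnable (in practice the cost of a footprint would come from the cost model above).

import math, random

def random_neighbour(footprint):
    # Move: swap two positions in the join order (footprints of length >= 2).
    a, b = random.sample(range(len(footprint)), 2)
    neighbour = list(footprint)
    neighbour[a], neighbour[b] = neighbour[b], neighbour[a]
    return neighbour

def iterative_improvement(cost, start, steps=200):
    # Greedy phase: accept only improvements.
    best, best_cost = list(start), cost(start)
    for _ in range(steps):
        candidate = random_neighbour(best)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

def simulated_annealing(cost, start, start_temp=100.0, cooling=0.95,
                        inner_steps=50, min_temp=0.1):
    solution, sol_cost = list(start), cost(start)
    best, best_cost = list(solution), sol_cost
    temp = start_temp
    while temp > min_temp:                    # "frozen", simplified
        for _ in range(inner_steps):          # "equilibrium reached", simplified
            candidate = random_neighbour(solution)
            c = cost(candidate)
            if c <= sol_cost or math.exp(-(c - sol_cost) / temp) >= random.random():
                solution, sol_cost = candidate, c
            if sol_cost < best_cost:
                best, best_cost = list(solution), sol_cost
        temp *= cooling                       # DECREASE(tempr)
    return best, best_cost

def two_phase(cost, n_joins, restarts=5):
    starts = [random.sample(range(n_joins), n_joins) for _ in range(restarts)]
    local_optima = [iterative_improvement(cost, s) for s in starts]
    best_start, _ = min(local_optima, key=lambda pair: pair[1])
    return simulated_annealing(cost, best_start)

def toy_cost(footprint):
    # Toy cost: distance from the order 0, 1, 2, ... (demonstration only).
    return sum(abs(i - j) for i, j in enumerate(footprint))

print(two_phase(toy_cost, 5))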
Figure 5: Acceptance probability with respect to the temperature
and the cost difference.
Figure 5 shows the acceptance probability dependency in the SA
phase computed for the range of parameters that we used in our experiments
. As we adopted the two-phase algorithm our simulations
were able to reproduce the trends in the results presented in [17]; due to the lack of space we omit the detailed performance analysis and the interested reader is referred to the aforementioned survey.
RELATED WORK
In this paper we focused mainly on basic techniques such as indexing
and join ordering. Relevant related work is described in the
remainder of this section. More advanced techniques such as site
selection and dynamic data placement are not considered, because
they are not supported by the current architecture of the system.
We also do not consider techniques that involve view-based query
answering techniques [6] because we are currently not considering
the problem of integrating heterogeneous data.
5.1 Index Structures for Object Models
There has been quite a lot of research on indexing object oriented
databases. The aim of this work was to speed up querying and navigation
in large object databases. The underlying idea of many existing
approaches is to regard an object base as a directed graph,
where objects correspond to nodes, and object properties to links
[16]. This view directly corresponds to RDF data, that is often also
regarded as a directed graph. Indices over such graph structures
now describe paths in the graph based on a certain pattern normally
provided by the schema. Different indexing techniques vary on the
kind of path patterns they describe and on the structure of the index.
Simple index structures only refer to a single property and organize
objects according to the value of that property. Nested indices and
path indices cover a complete path in the model that might contain
a number of objects and properties [2]. In RDF as well as in object
oriented databases, the inheritance relation plays a special role as it
is connected with a predefined semantics. Special index structures
have been developed to speed up queries about such hierarchies and
have recently been rediscovered for indexing RDF data [5]. In the
area of object-oriented database systems, these two kinds of indexing
structures have been combined resulting in the so-called nested
inheritance indices [3] and generalized nested inheritance indices
[16]. These index structures directly represent implications of inheritance
reasoning, an approach that is equivalent to indexing the
deductive closure of the model.
5.2 Query Optimization
There is a long tradition of work on distributed databases in general
[13] and distributed query processing in particular [10]. The
dominant problem is the generation of an optimal query plan that
reduces execution costs as much as possible while guaranteeing
completeness of the result. As described by Kossmann in [10],
the choice of techniques for query plan generation depends on the
architecture of the distributed system. He discusses basic techniques
as well as methods for client-server architectures and for
heterogeneous databases. Due to our architectural limitations (e.g.,
limited source capabilities) we focused on join-ordering optimization
which can be performed in a centralized manner by the mediator. While some restricted cases of this problem can be solved in polynomial time [12, 11], the general problem of finding an optimal
plan for evaluating join queries has been proven to be NP-hard
[15]. The approaches to tackle this problem can be split into several
categories [17]: deterministic algorithms, randomized algorithms,
and genetic algorithms. Deterministic algorithms often use techniques
of dynamic programming (e.g. [12]), however, due to the
complexity of the problem they introduce simplifications, which
render them as heuristics. Randomized algorithms (e.g. [20, 19]),
perform a random walk in the solution space according to certain
rules. After the stop-condition is fulfilled, the best solution found
so far is declared as the result. Genetic algorithms (e.g. [18]) perceive
the problem as biological evolution; they usually start with a
random population (set of solutions) and generate offspring by applying
a crossover and mutation. Subsequently, the selection phase
eliminates weak members of the new population.
LIMITATIONS AND FUTURE WORK
The work reported in this paper can be seen as a very first step
towards a solution for the problem of distributed processing of RDF
queries. We motivated the overall problem and proposed some data
structures and algorithms that deal with the most fundamental problems
of distributed querying in a predefined setting. We identified
a number of limitations of the current proposal with respect to the
generality of the approach and assumptions made. These limitations
also set the agenda for future work to be done on distributed
RDF querying and its support in Sesame.
Implementation
Currently, our work on distributed query processing
is of a purely theoretical nature. The design and evaluation
of the methods described are based on previous work reported in
the literature and on worst-case complexity estimations. The next
step is to come up with a test implementation of a distributed RDF
storage system. The implementation will follow the architecture
introduced in the beginning of the paper and will be built on top of
the Sesame storage and retrieval engine. The implementation will
provide the basis for a more practical evaluation of our approach
and will allow us to make assertions about the real system behavior
in the presence of different data sets and different ways they are
distributed. Such a practical evaluation will be the basis for further
optimization of the methods.
Schema-Awareness
One of the limitations of the approach described
in this paper concerns schema aware querying in a distributed
setting. Even if every single repository is capable of computing
the deductive closure of the model it contains, the overall
result is not necessarily complete, as schema information in one
repository can have an influence on information in other repositories
. This information could lead to additional conclusions if taken
into account during query processing. In order to be able to deal
with this situation, we need to do some additional reasoning within
the mediator in order to detect and process dependencies between
the different models.
Object Identity
One of the basic operations of query processing
is the computation of joins of relations that correspond to individual
properties. The basic assumption we make at this point is that we
are able to uniquely determine object identity. Identity is essential
because it is the main criterion that determines whether to connect
two paths or not. From a pragmatic point of view, the URI of an
RDF resource provides us with an identity criterion. While this
may be the case in a single repository, it is not clear at all whether
we can make this assumption in a distributed setting as different
repositories can contain information about the same real world object
(e.g., a paper) and assign different URIs to it. To deal with this
situation we have to develop heuristics capable of deciding whether
two resources describe the same real world object.
Query Model
In order to be able to design efficient index structures
we restricted ourselves to path queries as a query model that
is directly supported. We argued above that tree-shaped queries
can be easily split into a number of path queries that have to be
joined afterwards. Nevertheless, this simplification does not apply
to the optimization part which is capable of processing also different
query shapes. An important aspect of future work is to extend
our indexing approach to more expressive query models that also
include tree and graph shaped queries which can be found in existing
RDF query languages. It remains to be seen whether the same
kind of structures and algorithms can be used for more complex
queries or whether we have to find alternatives.
Architecture
The starting point of our investigation was a particular
architecture, namely a distributed repository where the data
is accessed at a single point but stored in different repositories. We
further made the assumption that these repositories are read-only,
i.e., they only provide answers to path queries that they are known
to contain some information about.
An interesting question is how more flexible architectures can
be supported. We think of architectures where information is accessed
from multiple points and repositories are able to forward
queries. Further we can imagine grid-based architectures where
components can perform local query processing on data received
from other repositories. A prominent example of such more flexible
architectures are peer-to-peer systems. This would also bring
a new potential for optimization as peers may collaborate on query
evaluation which in turn may help in reducing both the communication
and processing costs.
REFERENCES
[1] P. Bernstein and D. Chiu. Using semi-joins to solve relational queries. Journal of the ACM, 28:25-40, 1981.
[2] E. Bertino. An indexing technique for object-oriented databases. In Proceedings of the Seventh International Conference on Data Engineering, April 8-12, 1991, Kobe, Japan, pages 160-170. IEEE Computer Society, 1991.
[3] E. Bertino and P. Foscoli. Index organizations for object-oriented database systems. TKDE, 7(2):193-209, 1995.
[4] J. Broekstra, A. Kampman, and F. van Harmelen. Sesame: A generic architecture for storing and querying RDF and RDF Schema. In The Semantic Web - ISWC 2002, volume 2342 of LNCS, pages 54-68. Springer, 2002.
[5] V. Christophides, D. Plexousakis, M. Scholl, and S. Tourtounis. On labeling schemes for the semantic web. In Proceedings of the 13th World Wide Web Conference, pages 544-555, 2003.
[6] A. Halevy. Answering queries using views - a survey. The VLDB Journal, 10(4):270-294, 2001.
[7] J. Hendler. Agents and the semantic web. IEEE Intelligent Systems, (2), 2001.
[8] H. Hsiao, M. Chen, and P. Yu. Parallel execution of hash joins in parallel databases. IEEE Transactions on Parallel and Distributed Systems, 8:872-883, 1997.
[9] Y. Ioannidis and E. Wong. Query optimization by simulated annealing. In ACM SIGMOD International Conference on Management of Data, pages 9-22. ACM Press, 1987.
[10] D. Kossmann. The state of the art in distributed query processing. ACM Computing Surveys, 32(4):422-469, 2000.
[11] G. Moerkotte. Constructing optimal bushy trees possibly containing cross products for order preserving joins is in P, TR-03-012. Technical report, University of Mannheim, 2003.
[12] K. Ono and G. M. Lohman. Measuring the complexity of join enumeration in query optimization. In 16th International Conference on Very Large Data Bases, pages 314-325. Morgan Kaufmann, 1990.
[13] M. Ozsu and P. Valduriez. Principles of Distributed Database Systems. Prentice Hall, 1991.
[14] D. Rotem. Spatial join indices. In Proceedings of the International Conference on Data Engineering, 1991.
[15] W. Scheufele and G. Moerkotte. Constructing optimal bushy processing trees for join queries is NP-hard, TR-96-011. Technical report, University of Mannheim, 1996.
[16] B. Shidlovsky and E. Bertino. A graph-theoretic approach to indexing in object-oriented databases. In S. Y. W. Su, editor, Proceedings of the Twelfth International Conference on Data Engineering, February 26 - March 1, 1996, New Orleans, Louisiana, pages 230-237. IEEE Computer Society, 1996.
[17] M. Steinbrunn, G. Moerkotte, and A. Kemper. Heuristic and randomized optimization for the join ordering problem. The VLDB Journal, 6:191-208, 1997.
[18] M. Stillger and M. Spiliopoulou. Genetic programming in database query optimization. In J. R. Koza, D. E. Goldberg, D. B. Fogel, and R. L. Riolo, editors, Genetic Programming 1996: Proceedings of the First Annual Conference, pages 388-393. MIT Press, 1996.
[19] A. Swami. Optimization of large join queries: combining heuristics and combinatorial techniques. In ACM SIGMOD International Conference on Management of Data, pages 367-376. ACM Press, 1989.
[20] A. Swami and A. Gupta. Optimization of large join queries. In ACM SIGMOD International Conference on Management of Data, pages 8-17. ACM Press, 1988.
[21] Z. Xie and J. Han. Join index hierarchies for supporting efficient navigations in object-oriented databases. In Proceedings of the International Conference on Very Large Data Bases, pages 522-533, 1994.
[22] C. Yu and W. Meng. Principles of Database Query Processing for Advanced Applications. Morgan Kaufmann Publishers, 1998.
| index structure;external sources;query optimization;distributed architecture;repositories;RDF;infrastructure;RDF Querying;Optimization;Index Structures;semantic web;join ordering problem |
11 | A Functional Correspondence between Evaluators and Abstract Machines | We bridge the gap between functional evaluators and abstract machines for the λ-calculus, using closure conversion, transformation into continuation-passing style, and defunctionalization. We illustrate this approach by deriving Krivine's abstract machine from an ordinary call-by-name evaluator and by deriving an ordinary call-by-value evaluator from Felleisen et al.'s CEK machine. The first derivation is strikingly simpler than what can be found in the literature. The second one is new. Together, they show that Krivine's abstract machine and the CEK machine correspond to the call-by-name and call-by-value facets of an ordinary evaluator for the λ-calculus. We then reveal the denotational content of Hannan and Miller's CLS machine and of Landin's SECD machine. We formally compare the corresponding evaluators and we illustrate some degrees of freedom in the design spaces of evaluators and of abstract machines for the λ-calculus with computational effects. Finally, we consider the Categorical Abstract Machine and the extent to which it is more of a virtual machine than an abstract machine | Introduction and related work
In Hannan and Miller's words [23, Section 7], there are fundamental
differences between denotational definitions and definitions of
abstract machines. While a functional programmer tends to be
familiar with denotational definitions [36], he typically wonders
about the following issues:
Design:
How does one design an abstract machine? How were
existing abstract machines, starting with Landin's SECD machine
, designed? How does one make variants of an existing
abstract machine? How does one extend an existing abstract
machine to a bigger source language? How does one go about
designing a new abstract machine? How does one relate two
abstract machines?
Correctness:
How does one prove the correctness of an abstract
machine?
Assuming it implements a reduction strategy,
should one prove that each of its transitions implements a part
of this strategy? Or should one characterize it in reference to
a given evaluator, or to another abstract machine?
A variety of answers to these questions can be found in the literature
. Landin invented the SECD machine as an implementation
model for functional languages [26], and Plotkin proved its
correctness in connection with an evaluation function [30, Section
2]. Krivine discovered an abstract machine from a logical
standpoint [25], and Crégut proved its correctness in reference to
a reduction strategy; he also generalized it from weak to strong
normalization [7]. Curien discovered the Categorical Abstract Machine
from a categorical standpoint [6, 8]. Felleisen et al. invented
the CEK machine from an operational standpoint [16, 17, 19].
Hannan and Miller discovered the CLS machine from a proof-theoretical
standpoint [23].
Many people derived, invented, or
(re-)discovered Krivine's machine. Many others proposed modifications
of existing machines. And recently, Rose presented a
method to construct abstract machines from reduction rules [32],
while Hardin, Maranget, and Pagano presented a method to extract
the reduction strategy of a machine by extracting axioms from its
transitions and structural rules from its architecture [24].
In this article, we propose one constructive answer to all the questions
above.
We present a correspondence between functional
evaluators and abstract machines based on a two-way derivation
consisting of closure conversion, transformation into continuation-passing
style (CPS), and defunctionalization. This two-way derivation
lets us connect each of the machines above with an evaluator,
and makes it possible to echo variations in the evaluator into variations
in the abstract machine, and vice versa. The evaluator clarifies
the reduction strategy of the corresponding machine. The abstract
machine makes the evaluation steps explicit in a transition system.
Some machines operate on λ-terms directly whereas others operate on compiled λ-terms expressed with an instruction set. Accordingly, we distinguish between abstract machines and virtual machines in the sense that virtual machines have an instruction set and abstract machines do not; instead, abstract machines directly operate on source terms and do not need a compiler from source terms to instructions. (Grégoire and Leroy make the same point when they talk about a compiled implementation of strong reduction [21].)
Prerequisites: ML, observational equivalence, abstract machines, λ-interpreters, CPS transformation, defunctionalization, and closure conversion.
We use ML as a meta-language, and we assume a basic familiarity with Standard ML and reasoning about ML programs. In particular, given two pure ML expressions e and e' we write e ≅ e' to express that e and e' are observationally equivalent. Most of our implementations of the abstract machines raise compiler warnings about non-exhaustive matches. These are inherent to programming abstract machines in an ML-like language. The warnings could be avoided with an option type or with an explicit exception, at the price of readability and direct relation to the usual mathematical specifications of abstract machines.
It would be helpful to the reader to know at least one of the machines considered in the rest of this article, be it Krivine's machine, the CEK machine, the CLS machine, the SECD machine, or the Categorical Abstract Machine. It would also be helpful to have already seen a λ-interpreter written in a functional language [20, 31, 35, 39]. In particular, we make use of Strachey's notions of expressible values, i.e., the values obtained by evaluating an expression, and denotable values, i.e., the values denoted by identifiers [38].
We make use of the CPS transformation [12, 33]: a term is CPS-transformed
by naming all its intermediate results, sequentializing
their computation, and introducing continuations. Plotkin was the
first to establish the correctness of the CPS transformation [30].
We also make use of Reynolds's defunctionalization [31]: defunctionalizing
a program amounts to replacing each of its function
spaces by a data type and an apply function; the data type enumerates
all the function abstractions that may give rise to inhabitants
of this function space in this program [15]. Nielsen, Banerjee,
Heintze, and Riecke have established the correctness of defunctionalization
[3, 29].
A particular case of defunctionalization is closure conversion: in
an evaluator, closure conversion amounts to replacing each of the
function spaces in expressible and denotable values by a tuple, and
inlining the corresponding apply function.
We would like to stress that all the concepts used here are elementary
ones, and that the significance of this article is the one-fits-all
derivation between evaluators and abstract machines.
Overview:
The rest of this article is organized as follows. We first consider
a call-by-name and a call-by-value evaluator, and we present the
corresponding machines, which are Krivine's machine and the CEK
machine. We then turn to the CLS machine and the SECD machine,
and we present the corresponding evaluators. We finally consider
the Categorical Abstract Machine. For simplicity, we do not cover
laziness and sharing, but they come for free by threading a heap of
updateable thunks in a call-by-name evaluator [2].
Call-by-name, call-by-value, and the λ-calculus
We first go from a call-by-name evaluator to Krivine's abstract machine (Section 2.1) and then from the CEK machine to a call-by-value evaluator (Section 2.2). Krivine's abstract machine operates on de Bruijn-encoded λ-terms, and the CEK machine operates on λ-terms with names. Starting from the corresponding evaluators, it is simple to construct a version of Krivine's abstract machine that operates on λ-terms with names, and a version of the CEK machine that operates on de Bruijn-encoded λ-terms (Section 2.3).
into continuation-passing style, and defunctionalization of continuations
. Closure converting expressible and denotable values makes
the evaluator first order. CPS transforming the evaluator makes its
control flow manifest as a continuation. Defunctionalizing the continuation
materializes the control flow as a first-order data structure.
The result is a transition function, i.e., an abstract machine.
2.1 From a call-by-name evaluator to Krivine's machine
Krivine's abstract machine [7] operates on de Bruijn-encoded λ-terms. In this representation, identifiers are represented by their lexical offset, as traditional since Algol 60 [40].
datatype term = IND of int
(* de Bruijn index *)
| ABS of term
| APP of term * term
Programs are closed terms.
2.1.1 A higher-order and compositional call-by-name evaluator
Our starting point is the canonical call-by-name evaluator for the λ-calculus [35, 37]. This evaluator is compositional in the sense of denotational semantics [34, 37, 41] and higher order (Eval0.eval). It is compositional because it solely defines the meaning of each term as a composition of the meaning of its parts. It is higher order because the data types Eval0.denval and Eval0.expval contain functions: denotable values (denval) are thunks and expressible values (expval) are functions. An environment is represented as a list of denotable values. A program is evaluated in an empty environment (Eval0.main).
structure Eval0
= struct
datatype denval = THUNK of unit -> expval
and expval = FUNCT of denval -> expval
(*
eval : term * denval list -> expval
*)
fun eval (IND n, e)
= let val (THUNK thunk) = List.nth (e, n)
in thunk ()
end
| eval (ABS t, e)
= FUNCT (fn v => eval (t, v :: e))
| eval (APP (t0, t1), e)
= let val (FUNCT f) = eval (t0, e)
in f (THUNK (fn () => eval (t1, e)))
end
(*
main : term -> expval
*)
fun main t
= eval (t, nil)
end
An identifier denotes a thunk. Evaluating an identifier amounts to forcing this thunk. Evaluating an abstraction yields a function. Evaluating an application requires the evaluation of the sub-expression in position of function; the intermediate result is a function, which is applied to a thunk.
2.1.2 From higher-order functions to closures
We now closure-convert the evaluator of Section 2.1.1. In Eval0, the function spaces in the data types of denotable and expressible values are only inhabited by instances of the λ-abstractions fn v => eval (t, v :: e) in the meaning of abstractions, and fn () => eval (t1, e) in the meaning of applications. Each of these λ-abstractions has two free variables: a term and an environment. We defunctionalize these function spaces into closures [15, 26, 31], and we inline the corresponding apply functions.
structure Eval1
= struct
datatype denval = THUNK of term * denval list
and expval = FUNCT of term * denval list
(*
eval : term * denval list -> expval
*)
fun eval (IND n, e)
= let val (THUNK (t, e')) = List.nth (e, n)
in eval (t, e')
end
| eval (ABS t, e)
= FUNCT (t, e)
| eval (APP (t0, t1), e)
= let val (FUNCT (t, e')) = eval (t0, e)
in eval (t, (THUNK (t1, e)) :: e')
end
(*
main : term -> expval
*)
fun main t
= eval (t, nil)
end
The definition of an abstraction is now Eval1.FUNCT (t, e) instead of fn v => Eval0.eval (t, v :: e), and its use is now Eval1.eval (t, (Eval1.THUNK (t1, e)) :: e') instead of f (Eval0.THUNK (fn () => Eval0.eval (t1, e))). Similarly, the definition of a thunk is now Eval1.THUNK (t1, e) instead of Eval0.THUNK (fn () => Eval0.eval (t1, e)), and its use is Eval1.eval (t, e') instead of thunk ().
The following proposition is a corollary of the correctness of defunctionalization.
PROPOSITION 1 (FULL CORRECTNESS). For any ML value p : term denoting a program, evaluating Eval0.main p yields a value FUNCT f and evaluating Eval1.main p yields a value FUNCT (t, e) such that f ≅ fn v => Eval1.eval (t, v :: e).
2.1.3 CPS transformation
We transform Eval1.eval into continuation-passing style; doing so makes it tail recursive. (Since programs are closed, applying List.nth cannot fail and therefore it denotes a total function. We thus keep it in direct style [14].)
structure Eval2
= struct
datatype denval = THUNK of term * denval list
and expval = FUNCT of term * denval list
(*
eval : term * denval list * (expval -> 'a)
*)
(*
-> 'a
*)
fun eval (IND n, e, k)
= let val (THUNK (t, e')) = List.nth (e, n)
in eval (t, e', k)
end
| eval (ABS t, e, k)
= k (FUNCT (t, e))
| eval (APP (t0, t1), e, k)
= eval (t0, e, fn (FUNCT (t, e'))
=> eval (t,
(THUNK (t1, e)) :: e',
k))
(*
main : term -> expval
*)
fun main t
= eval (t, nil, fn v => v)
end
The following proposition is a corollary of the correctness of the CPS transformation. (Here observational equivalence reduces to structural equality over ML values of type expval.)
PROPOSITION 2 (FULL CORRECTNESS). For any ML value p : term denoting a program, Eval1.main p ≅ Eval2.main p.
2.1.4 Defunctionalizing the continuations
The function space of the continuation is inhabited by instances of two λ-abstractions: the initial one in the definition of Eval2.main, with no free variables, and one in the meaning of an application, with three free variables. To defunctionalize the continuation, we thus define a data type cont with two summands and the corresponding apply_cont function to interpret these summands.
structure Eval3
= struct
datatype denval = THUNK of term * denval list
and expval = FUNCT of term * denval list
and cont = CONT0
| CONT1 of term * denval list * cont
(*
eval : term * denval list * cont -> expval
*)
fun eval (IND n, e, k)
= let val (THUNK (t, e')) = List.nth (e, n)
in eval (t, e', k)
end
| eval (ABS t, e, k)
= apply_cont (k, FUNCT (t, e))
| eval (APP (t0, t1), e, k)
= eval (t0, e, CONT1 (t1, e, k))
and apply_cont (CONT0, v)
= v
| apply_cont (CONT1 (t1, e, k), FUNCT (t, e'))
= eval (t, (THUNK (t1, e)) :: e', k)
(*
main : term -> expval
*)
fun main t
= eval (t, nil, CONT0)
end
The following proposition is a corollary of the correctness of defunctionalization. (Again, observational equivalence reduces here to structural equality over ML values of type expval.)
PROPOSITION 3 (FULL CORRECTNESS). For any ML value p : term denoting a program, Eval2.main p ≅ Eval3.main p.
We identify that cont is a stack of thunks, and that the transitions are those of Krivine's abstract machine.
2.1.5 Krivine's abstract machine
To obtain the canonical definition of Krivine's abstract machine, we abandon the distinction between denotable and expressible values and we use thunks instead, we represent the defunctionalized continuation as a list of thunks instead of a data type, and we inline apply_cont.
structure Eval4
= struct
datatype thunk = THUNK of term * thunk list
(*
eval : term * thunk list * thunk list
*)
(*
-> term * thunk list
*)
fun eval (IND n, e, s)
= let val (THUNK (t, e')) = List.nth (e, n)
in eval (t, e', s)
end
| eval (ABS t, e, nil)
= (ABS t, e)
| eval (ABS t, e, (t', e') :: s)
= eval (t, (THUNK (t', e')) :: e, s)
| eval (APP (t0, t1), e, s)
= eval (t0, e, (t1, e) :: s)
(*
main : term -> term * thunk list
*)
fun main t
= eval (t, nil, nil)
end
The following proposition is straightforward to prove.
PROPOSITION 4 (FULL CORRECTNESS). For any ML value p : term denoting a program, Eval3.main p ≅ Eval4.main p.
For comparison with Eval4, the canonical definition of Krivine's abstract machine is as follows [7, 22, 25], where t denotes terms, v denotes expressible values, e denotes environments, and s denotes stacks of expressible values:
Source syntax: t ::= n | λt | t0 t1
Expressible values (closures): v ::= [t, e]
Initial transition, transition rules, and final transition:
t ⇒ (t, nil, nil)
(n, e, s) ⇒ (t, e', s)  where [t, e'] = nth(e, n)
(λt, e, [t', e'] :: s) ⇒ (t, [t', e'] :: e, s)
(t0 t1, e, s) ⇒ (t0, e, [t1, e] :: s)
(λt, e, nil) ⇒ (λt, e)
Variables n are represented by their de Bruijn index, and the abstract machine operates on triples consisting of a term, an environment, and a stack of expressible values.
Each line in the canonical definition matches a clause in Eval4. We conclude that Krivine's abstract machine can be seen as a defunctionalized, CPS-transformed, and closure-converted version of the standard call-by-name evaluator for the λ-calculus. This evaluator evidently implements Hardin, Maranget, and Pagano's K strategy [24, Section 3].
2.2 From the CEK machine to a call-by-value evaluator
The CEK machine [16, 17, 19] operates on λ-terms with names and distinguishes between values and computations in their syntax (i.e., it distinguishes trivial and serious terms, in Reynolds's words [31]).
datatype term = VALUE of value
| COMP of comp
and value = VAR of string
(* name *)
| LAM of string * term
and comp = APP of term * term
Programs are closed terms.
2.2.1 The CEK abstract machine
Our starting point reads as follows [19, Figure 2, page 239], where
t denotes terms, w denotes values, v denotes expressible values, k
denotes evaluation contexts, and e denotes environments:
Source syntax:
t ::= w | t0 t1
w ::= x | λx.t
Expressible values (closures) and evaluation contexts:
v ::= [x, t, e]
k ::= stop | fun(v, k) | arg(t, e, k)
Initial transition, transition rules (two kinds), and final transition:
t                       ⇒_init   ⟨t, mt, stop⟩
⟨w, e, k⟩               ⇒_eval   ⟨k, γ(w, e)⟩
⟨t0 t1, e, k⟩           ⇒_eval   ⟨t0, e, arg(t1, e, k)⟩
⟨arg(t1, e, k), v⟩      ⇒_cont   ⟨t1, e, fun(v, k)⟩
⟨fun([x, t, e], k), v⟩  ⇒_cont   ⟨t, e[x ↦ v], k⟩
⟨stop, v⟩               ⇒_final  v
where γ(x, e) = e(x) and γ(λx.t, e) = [x, t, e]
Variables x are represented by their name, and the abstract machine
consists of two mutually recursive transition functions. The first
transition function operates on triples consisting of a term, an environment
, and an evaluation context. The second operates on pairs
consisting of an evaluation context and an expressible value. Environments
are extended in the
fun
-transition, and consulted in
. The
empty environment is denoted by mt.
This specification is straightforward to program in ML:
signature ENV
= sig
type 'a env
val mt : 'a env
val lookup : 'a env * string -> 'a
val extend : string * 'a * 'a env -> 'a env
end
Environments are represented as a structure
Env : ENV
containing
a representation of the empty environment
mt
, an operation
lookup
to retrieve the value bound to a name in an environment, and an
operation
extend
to extend an environment with a binding.
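The article leaves the implementation of Env abstract. For completeness, here is one possible implementation (our own sketch, not part of the article), representing environments as association lists:
structure Env : ENV
= struct
    type 'a env = (string * 'a) list
    val mt = nil
    (* lookup : 'a env * string -> 'a *)
    fun lookup ((x', v) :: e, x)
        = if x = x' then v else lookup (e, x)
      | lookup (nil, x)
        = raise Fail ("unbound variable " ^ x)
    (* extend : string * 'a * 'a env -> 'a env *)
    fun extend (x, v, e)
        = (x, v) :: e
  end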
structure Eval0
= struct
datatype expval
= CLOSURE of string * term * expval Env.env
datatype ev_context
= STOP
| ARG of term * expval Env.env * ev_context
| FUN of expval * ev_context
(*
eval : term * expval Env.env * ev_context
*)
(*
-> expval
*)
fun eval (VALUE v, e, k)
= continue (k, eval_value (v, e))
| eval (COMP (APP (t0, t1)), e, k)
= eval (t0, e, ARG (t1, e, k))
and eval_value (VAR x, e)
= Env.lookup (e, x)
| eval_value (LAM (x, t), e)
= CLOSURE (x, t, e)
and continue (STOP, w)
= w
| continue (ARG (t1, e, k), w)
= eval (t1, e, FUN (w, k))
| continue (FUN (CLOSURE (x, t, e), k), w)
= eval (t, Env.extend (x, w, e), k)
(*
main : term -> expval
*)
fun main t
= eval (t, Env.mt, STOP)
end
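As a quick check (our own example, not from the article), applying one identity function to another:
val result = Eval0.main (COMP (APP (VALUE (LAM ("x", VALUE (VAR "x"))),
                                    VALUE (LAM ("y", VALUE (VAR "y"))))))
(* result should be CLOSURE ("y", VALUE (VAR "y"), Env.mt),
   the closure of the argument in the empty environment. *)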
2.2.2
Refunctionalizing the evaluation contexts into
continuations
We identify that the data type
ev_context
and the function
continue
are a defunctionalized representation. The corresponding higher-order
evaluator reads as follows.
As can be observed, it is in
continuation-passing style.
structure Eval1
= struct
datatype expval
= CLOSURE of string * term * expval Env.env
(*
eval : term * expval Env.env * (expval -> 'a)
*)
(*
-> 'a
*)
fun eval (VALUE v, e, k)
= k (eval_value (v, e))
| eval (COMP (APP (t0, t1)), e, k)
= eval (t0, e,
fn (CLOSURE (x, t, e'))
=> eval (t1, e,
fn w
=> eval (t, Env.extend (x, w, e'),
k)))
and eval_value (VAR x, e)
= Env.lookup (e, x)
| eval_value (LAM (x, t), e)
= CLOSURE (x, t, e)
(*
main : term -> expval
*)
fun main t
= eval (t, Env.mt, fn w => w)
end
The following proposition is a corollary of the correctness of defunctionalization
. (Observational equivalence reduces here to structural
equality over ML values of type
expval
.)
PROPOSITION 5 (FULL CORRECTNESS). For any ML value p : term denoting a program, Eval0.main p ≅ Eval1.main p.
2.2.3
Back to direct style
CPS-transforming the following direct-style evaluator yields the
evaluator of Section 2.2.2 [10].
structure Eval2
= struct
datatype expval
= CLOSURE of string * term * expval Env.env
(*
eval : term * expval Env.env -> expval
*)
fun eval (VALUE v, e)
= eval_value (v, e)
| eval (COMP (APP (t0, t1)), e)
= let val (CLOSURE (x, t, e')) = eval (t0, e)
val w = eval (t1, e)
in eval (t, Env.extend (x, w, e'))
end
and eval_value (VAR x, e)
= Env.lookup (e, x)
| eval_value (LAM (x, t), e)
= CLOSURE (x, t, e)
(*
main : term -> expval
*)
fun main t
= eval (t, Env.mt)
end
The following proposition is a corollary of the correctness of the
direct-style transformation. (Again, observational equivalence reduces
here to structural equality over ML values of type
expval
.)
PROPOSITION 6 (FULL CORRECTNESS). For any ML value p : term denoting a program, Eval1.main p ≅ Eval2.main p.
2.2.4
From closures to higher-order functions
We observe that the closures, in
Eval2
, are defunctionalized representations
with an apply function inlined. The corresponding
higher-order evaluator reads as follows.
structure Eval3
= struct
datatype expval = CLOSURE of expval -> expval
(*
eval : term * expval Env.env -> expval
*)
fun eval (VALUE v, e)
= eval_value (v, e)
| eval (COMP (APP (t0, t1)), e)
= let val (CLOSURE f) = eval (t0, e)
val w = eval (t1, e)
in f w
end
and eval_value (VAR x, e)
= Env.lookup (e, x)
| eval_value (LAM (x, t), e)
= CLOSURE (fn w
=> eval (t, Env.extend (x, w, e)))
(*
main : term -> expval
*)
fun main t
= eval (t, Env.mt)
end
The following proposition is a corollary of the correctness of defunctionalization
.
PROPOSITION 7 (FULL CORRECTNESS). For any ML value p : term denoting a program, evaluating Eval2.main p yields a value CLOSURE (x, t, e) and evaluating Eval3.main p yields a value CLOSURE f such that fn w => Eval2.eval (t, Env.extend (x, w, e)) ≅ f.
2.2.5
A higher-order and compositional call-by-value
evaluator
The result in
Eval3
is a call-by-value evaluator that is compositional
and higher-order. This call-by-value evaluator is the canonical
one for the λ-calculus [31, 35, 37]. We conclude that the CEK machine can be seen as a defunctionalized, CPS-transformed, and closure-converted version of the standard call-by-value evaluator for λ-terms.
2.3
Variants of Krivine's machine and of the
CEK machine
It is easy to construct a variant of Krivine's abstract machine for λ-terms with names, by starting from a call-by-name evaluator for λ-terms with names. Similarly, it is easy to construct a variant of the CEK machine for λ-terms with de Bruijn indices, by starting from a call-by-value evaluator for λ-terms with indices. It is equally easy to start from a call-by-value evaluator for λ-terms with de Bruijn indices and no distinction between values and computations; the resulting abstract machine coincides with Hankin's eager machine [22, Section 8.1.2].
Abstract machines processing λ-terms with de Bruijn indices often resolve indices with transitions:
⟨0, v :: e, s⟩      ⇒  ⟨v, s⟩
⟨n + 1, v :: e, s⟩  ⇒  ⟨n, e, s⟩
Compared to the evaluator of Section 2.1.1, the evaluator corresponding
to this machine has
List.nth
inlined and is not compositional
:
fun eval (IND 0, denval :: e, s)
= ... denval ...
| eval (IND n, denval :: e, s)
= eval (IND (n - 1), e, s)
| ...
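One way to complete the elided clauses (our own sketch, reusing the thunk representation of Eval4) is the following evaluator, in which resolving index 0 forces the first thunk of the environment, so that List.nth is no longer needed:
structure EvalInlined
= struct
    datatype thunk = THUNK of term * thunk list
    (* eval : term * thunk list * (term * thunk list) list *)
    (*        -> term * thunk list                          *)
    fun eval (IND 0, (THUNK (t, e')) :: e, s)
        = eval (t, e', s)
      | eval (IND n, denval :: e, s)
        = eval (IND (n - 1), e, s)
      | eval (ABS t, e, nil)
        = (ABS t, e)
      | eval (ABS t, e, (t', e') :: s)
        = eval (t, (THUNK (t', e')) :: e, s)
      | eval (APP (t0, t1), e, s)
        = eval (t0, e, (t1, e) :: s)
    (* main : term -> term * thunk list *)
    fun main t
        = eval (t, nil, nil)
  end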
2.4
Conclusion
We have shown that Krivine's abstract machine and the CEK abstract
machine are counterparts of canonical evaluators for call-by-name
and for call-by-value
λ-terms, respectively. The derivation of
Krivine's machine is strikingly simpler than what can be found in
the literature. That the CEK machine can be derived is, to the best
of our knowledge, new. That these two machines are two sides of
the same coin is also new. We have not explored any other aspect
of this call-by-name/call-by-value duality [9].
Using substitutions instead of environments or inlining one of the
standard computational monads (state, continuations, etc. [39]) in
the call-by-value evaluator yields variants of the CEK machine that
have been documented in the literature [16, Chapter 8]. For example
, inlining the state monad in a monadic evaluator yields a
state-passing evaluator. The corresponding abstract machine has
one more component to represent the state. In general, inlining
monads provides a generic recipe to construct arbitrarily many new
abstract machines. It does not seem as straightforward, however, to
construct a "monadic abstract machine" and then to inline a monad;
we are currently studying the issue.
On another note, one can consider an evaluator for strictness-annotated
λ-terms, represented either with names or with indices,
and with or without distinction between values and computations.
One is then led to an abstract machine that generalizes Krivine's
machine and the CEK machine [13].
Finally, it is straightforward to extend Krivine's machine and the
CEK machine to bigger source languages (with literals, primitive
operations, conditional expressions, block structure, recursion,
etc.), by starting from evaluators for these bigger languages. For
example, all the abstract machines in "The essence of compiling
with continuations" [19] are defunctionalized continuation-passing
evaluators, i.e., interpreters.
In the rest of this article, we illustrate further the correspondence
between evaluators and abstract machines.
The CLS abstract machine
The CLS abstract machine is due to Hannan and Miller [23]. In the
following, t denotes terms, v denotes expressible values, c denotes
lists of directives (a term or the special tag
ap
), e denotes environments
, l denotes stacks of environments, and s denotes stacks of
expressible values.
Source syntax:
t ::= n | λt | t0 t1
Expressible values (closures):
v ::= [t, e]
Initial transition, transition rules, and final transition:
t                                ⇒  ⟨t :: nil, nil :: nil, nil⟩
⟨(λt) :: c, e :: l, s⟩           ⇒  ⟨c, l, [t, e] :: s⟩
⟨(t0 t1) :: c, e :: l, s⟩        ⇒  ⟨t0 :: t1 :: ap :: c, e :: e :: l, s⟩
⟨0 :: c, (v :: e) :: l, s⟩       ⇒  ⟨c, l, v :: s⟩
⟨(n + 1) :: c, (v :: e) :: l, s⟩ ⇒  ⟨n :: c, e :: l, s⟩
⟨ap :: c, l, v :: [t, e] :: s⟩   ⇒  ⟨t :: c, (v :: e) :: l, s⟩
⟨nil, nil, v :: s⟩               ⇒  v
Variables n are represented by their de Bruijn index, and the abstract
machine operates on triples consisting of a list of directives, a stack
of environments, and a stack of expressible values.
3.1
The CLS machine
Hannan and Miller's specification is straightforward to program in
ML:
datatype term = IND of int
(* de Bruijn index *)
| ABS of term
| APP of term * term
Programs are closed terms.
structure Eval0
= struct
datatype directive = TERM of term
| AP
datatype env = ENV of expval list
and expval = CLOSURE of term * env
(*
run : directive list * env list * expval list
*)
(*
-> expval
*)
fun run (nil, nil, v :: s)
= v
| run ((TERM (IND 0)) :: c, (ENV (v :: e)) :: l, s)
= run (c, l, v :: s)
| run ((TERM (IND n)) :: c, (ENV (v :: e)) :: l, s)
= run ((TERM (IND (n - 1))) :: c,
(ENV e) :: l,
s)
| run ((TERM (ABS t)) :: c, e :: l, s)
= run (c, l, (CLOSURE (t, e)) :: s)
| run ((TERM (APP (t0, t1))) :: c, e :: l, s)
= run ((TERM t0) :: (TERM t1) :: AP :: c,
e :: e :: l,
s)
| run (AP :: c, l, v :: (CLOSURE (t, ENV e)) :: s)
= run ((TERM t) :: c, (ENV (v :: e)) :: l, s)
(*
main : term -> expval *)
fun main t
= run ((TERM t) :: nil, (ENV nil) :: nil, nil)
end
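As a quick check (our own example, not from the article), running the machine on the self-application of the identity function:
val result = Eval0.main (APP (ABS (IND 0), ABS (IND 0)))
(* result should be CLOSURE (IND 0, ENV nil). *)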
3.2
A disentangled definition of the CLS machine
In the definition of Section 3.1, all the possible transitions are
meshed together in one recursive function,
run
. Instead, let us factor
run
into several mutually recursive functions, each of them with
one induction variable.
In this disentangled definition, run_c interprets the list of control directives, i.e., it specifies which transition to take if the list is empty, starts with a term, or starts with an apply directive. If the list is empty, the computation terminates. If the list starts with a term, run_t is called, caching the term in the first parameter. If the list starts with an apply directive, run_a is called. run_t interprets the top term in the list of control directives. run_a interprets the top value in the current stack.
The disentangled definition reads as follows:
structure Eval1
= struct
datatype directive = TERM of term
| AP
datatype env = ENV of expval list
and expval = CLOSURE of term * env
(*
run_c : directive list * env list * expval list
*)
(*
-> expval
*)
fun run_c (nil, nil, v :: s)
= v
| run_c ((TERM t) :: c, l, s)
= run_t (t, c, l, s)
| run_c (AP :: c, l, s)
= run_a (c, l, s)
and run_t (IND 0, c, (ENV (v :: e)) :: l, s)
= run_c (c, l, v :: s)
| run_t (IND n, c, (ENV (v :: e)) :: l, s)
= run_t (IND (n - 1), c, (ENV e) :: l, s)
| run_t (ABS t, c, e :: l, s)
= run_c (c, l, (CLOSURE (t, e)) :: s)
| run_t (APP (t0, t1), c, e :: l, s)
= run_t (t0,
(TERM t1) :: AP :: c,
e :: e :: l,
s)
and run_a (c, l, v :: (CLOSURE (t, ENV e)) :: s)
= run_t (t, c, (ENV (v :: e)) :: l, s)
(*
main : term -> expval
*)
fun main t
= run_t (t, nil, (ENV nil) :: nil, nil)
end
PROPOSITION 8 (FULL CORRECTNESS). For any ML value p : term denoting a program, Eval0.main p ≅ Eval1.main p.
PROOF. By fold-unfold [5]. The invariants are as follows. For any ML values c : directive list, l : env list, s : expval list, and t : term,
Eval1.run_c (c, l, s)     ≅  Eval0.run (c, l, s)
Eval1.run_t (t, c, l, s)  ≅  Eval0.run ((TERM t) :: c, l, s)
Eval1.run_a (c, l, s)     ≅  Eval0.run (AP :: c, l, s)
3.3
The evaluator corresponding to the CLS
machine
In the disentangled definition of Section 3.2, there are three possible
ways to construct a list of control directives (nil, cons'ing a term,
and cons'ing an apply directive). We could specify these constructions
as a data type rather than as a list. Such a data type, together
with run_c, is in the image of defunctionalization (run_c is the apply function of the data type). The corresponding higher-order evaluator
is in continuation-passing style. Transforming it back to direct
style yields the following evaluator:
structure Eval3
= struct
datatype env = ENV of expval list
and expval = CLOSURE of term * env
(*
run_t : term * env list * expval list
*)
(*
-> env list * expval list
*)
fun run_t (IND 0, (ENV (v :: e)) :: l, s)
= (l, v :: s)
| run_t (IND n, (ENV (v :: e)) :: l, s)
= run_t (IND (n - 1), (ENV e) :: l, s)
| run_t (ABS t, e :: l, s)
= (l, (CLOSURE (t, e)) :: s)
| run_t (APP (t0, t1), e :: l, s)
= let val (l, s) = run_t (t0, e :: e :: l, s)
val (l, s) = run_t (t1, l, s)
in run_a (l, s)
end
and run_a (l, v :: (CLOSURE (t, ENV e)) :: s)
= run_t (t, (ENV (v :: e)) :: l, s)
(*
main : term -> expval *)
fun main t
= let val (nil, v :: s)
= run_t (t, (ENV nil) :: nil, nil)
in v
end
end
The following proposition is a corollary of the correctness of defunctionalization
and of the CPS transformation. (Here observational
equivalence reduces to structural equality over ML values of
type
expval
.)
PROPOSITION 9 (FULL CORRECTNESS). For any ML value p : term denoting a program, Eval1.main p ≅ Eval3.main p.
As in Section 2, this evaluator can be made compositional by refunctionalizing
the closures into higher-order functions and by factoring
the resolution of de Bruijn indices into an auxiliary lookup
function.
We conclude that the evaluation model embodied in the CLS machine
is a call-by-value interpreter threading a stack of environments
and a stack of intermediate results with a caller-save strategy
(witness the duplication of environments on the stack in the meaning
of applications) and with a left-to-right evaluation of sub-terms.
In particular, the meaning of a term is a partial endofunction over a
stack of environments and a stack of intermediate results.
The SECD abstract machine
The SECD abstract machine is due to Landin [26]. In the following
, t denotes terms, v denotes expressible values, c denotes lists of
directives (a term or the special tag
ap
), e denotes environments, s
denotes stacks of expressible values, and d denotes dumps (list of
triples consisting of a stack, an environment and a list of directives).
Source syntax:
t ::= x | λx.t | t0 t1
Expressible values (closures):
v ::= [x, t, e]
Initial transition, transition rules, and final transition:
t                                      ⇒  ⟨nil, mt, t :: nil, nil⟩
⟨s, e, x :: c, d⟩                      ⇒  ⟨e(x) :: s, e, c, d⟩
⟨s, e, (λx.t) :: c, d⟩                 ⇒  ⟨[x, t, e] :: s, e, c, d⟩
⟨s, e, (t0 t1) :: c, d⟩                ⇒  ⟨s, e, t1 :: t0 :: ap :: c, d⟩
⟨[x, t, e'] :: v :: s, e, ap :: c, d⟩  ⇒  ⟨nil, e'[x ↦ v], t :: nil, d'⟩   where d' = (s, e, c) :: d
⟨v :: s', e', nil, (s, e, c) :: d⟩     ⇒  ⟨v :: s, e, c, d⟩
⟨v :: s, e, nil, nil⟩                  ⇒  v
Variables x are represented by their name, and the abstract machine
operates on quadruples consisting of a stack of expressible values,
an environment, a list of directives, and a dump. Environments are
consulted in the first transition rule, and extended in the fourth. The
empty environment is denoted by mt.
4.1
The SECD machine
Landin's specification is straightforward to program in ML. Programs
are closed terms. Environments are as in Section 2.2.
datatype term = VAR of string
(* name *)
| LAM of string * term
| APP of term * term
structure Eval0
= struct
datatype directive = TERM of term
| AP
datatype value
= CLOSURE of string * term * value Env.env
fun run (v :: nil, e', nil, nil)
= v
| run (s, e, (TERM (VAR x)) :: c, d)
= run ((Env.lookup (e, x)) :: s, e, c, d)
| run (s, e, (TERM (LAM (x, t))) :: c, d)
= run ((CLOSURE (x, t, e)) :: s, e, c, d)
| run (s, e, (TERM (APP (t0, t1))) :: c, d)
= run (s,
e,
(TERM t1) :: (TERM t0) :: AP :: c,
d)
| run ((CLOSURE (x, t, e')) :: v :: s,
e,
AP :: c,
d)
= run (nil,
Env.extend (x, v, e'),
(TERM t) :: nil,
(s, e, c) :: d)
| run (v :: nil, e', nil, (s, e, c) :: d)
= run (v :: s, e, c, d)
(*
main : term -> value
*)
fun main t
= run (nil, Env.mt, (TERM t) :: nil, nil)
end
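As a quick check (our own example, not from the article), applying one identity function to another:
val result = Eval0.main (APP (LAM ("x", VAR "x"), LAM ("y", VAR "y")))
(* result should be CLOSURE ("y", VAR "y", Env.mt). *)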
4.2
A disentangled definition of the SECD machine
As in the CLS machine, in the definition of Section 4.1, all the possible
transitions are meshed together in one recursive function,
run
.
Instead, we can factor
run
into several mutually recursive functions,
each of them with one induction variable. These mutually recursive
functions are in defunctionalized form: the one processing the
dump is an apply function for the data type representing the dump
(a list of stacks, environments, and lists of directives), and the one
processing the control is an apply function for the data type representing
the control (a list of directives). The corresponding higher-order
evaluator is in continuation-passing style with two nested continuations
and one control delimiter,
reset
[12, 18]. The delimiter
resets the control continuation when evaluating the body of a λ-abstraction. (More detail is available in a technical report [11].)
4.3
The evaluator corresponding to the SECD
machine
The direct-style version of the evaluator from Section 4.2 reads as
follows:
structure Eval4
= struct
datatype value
= CLOSURE of string * term * value Env.env
(*
eval : term * value list * value Env.env
*)
(*
-> value list * value Env.env
*)
fun eval (VAR x, s, e)
= ((Env.lookup (x, e)) :: s, e)
| eval (LAM (x, t), s, e)
= ((CLOSURE (x, t, e)) :: s, e)
| eval (APP (t0, t1), s, e)
= let val (s, e) = eval (t1, s, e)
val (s, e) = eval (t0, s, e)
in apply (s, e)
end
and apply ((CLOSURE (x, t, e')) :: v :: s, e)
= let val (v :: nil, _)
= reset (fn ()
=> eval (t,
nil,
Env.extend (x,
v,
e')))
in (v :: s, e)
end
(*
main : term -> value
*)
fun main t
= let val (v :: nil, _)
= reset (fn ()
=> eval (t, nil, Env.mt))
in v
end
end
The following proposition is a corollary of the correctness of defunctionalization
and of the CPS transformation. (Here observational
equivalence reduces to structural equality over ML values of
type
value
.)
PROPOSITION 10 (FULL CORRECTNESS). For any ML value p : term denoting a program, Eval0.main p ≅ Eval4.main p.
As in Sections 2 and 3, this evaluator can be made compositional
by refunctionalizing the closures into higher-order functions.
We conclude that the evaluation model embodied in the SECD machine
is a call-by-value interpreter threading a stack of intermediate
results and an environment with a callee-save strategy (witness the
dynamic passage of environments in the meaning of applications),
a right-to-left evaluation of sub-terms, and a control delimiter. In
particular, the meaning of a term is a partial endofunction over a
stack of intermediate results and an environment. Furthermore, this
evaluator evidently implements Hardin, Maranget, and Pagano's L
strategy, i.e., right-to-left call by value, without us having to "guess"
its inference rules [24, Section 4].
The denotational content of the SECD machine puts a new light
on it. For example, its separation between a control register and a
dump register is explained by the control delimiter in the evaluator
(reset in Eval4.eval).² Removing this control delimiter gives rise to an abstract machine with a single stack component for control, not
by a clever change in the machine itself, but by a straightforward
simplification in the corresponding evaluator.
Variants of the CLS machine and of the SECD machine
It is straightforward to construct a variant of the CLS machine for λ-terms with names, by starting from an evaluator for λ-terms with names. Similarly, it is straightforward to construct a variant of the SECD machine for λ-terms with de Bruijn indices, by starting from an evaluator for λ-terms with indices. In the same vein, it is simple
to construct call-by-name versions of the CLS machine and of the
SECD machine, by starting from call-by-name evaluators. It is also
simple to construct a properly tail recursive version of the SECD
machine, and to extend the CLS machine and the SECD machine to
bigger source languages, by extending the corresponding evaluator.
The Categorical Abstract Machine
What is the difference between an abstract machine and a virtual
machine? Elsewhere [1], we propose to distinguish them based on
the notion of instruction set: A virtual machine has an instruction
set whereas an abstract machine does not. Therefore, an abstract
machine directly operates on a λ-term, but a virtual machine operates on a compiled representation of a λ-term, expressed using
an instruction set. (This distinction can be found elsewhere in the
literature [21].)
The Categorical Abstract Machine [6], for example, has an instruction
set--categorical combinators--and therefore (despite its
name) it is a virtual machine, not an abstract machine. In contrast
, Krivine's machine, the CEK machine, the CLS machine, and
the SECD machine are all abstract machines, not virtual machines,
since they directly operate on
λ-terms. In this section, we present
the abstract machine corresponding to the Categorical Abstract Machine
(CAM). We start from the evaluation model embodied in the
CAM [1].
6.1
The evaluator corresponding to the CAM
The evaluation model embodied in the CAM is an interpreter
threading a stack with its top element cached in a register, representing
environments as expressible values (namely nested pairs linked
as lists), with a caller-save strategy (witness the duplication of the
register on the stack in the meaning of applications below), and with
a left-to-right evaluation of sub-terms. In particular, the meaning of
a term is a partial endofunction over the register and the stack. This
evaluator reads as follows:
datatype term = IND of int (* de Bruijn index *)
| ABS of term
| APP of term * term
| NIL
| CONS of term * term
| CAR of term
| CDR of term
Programs are closed terms.
² A rough definition of reset is fun reset t = t (). A more accurate definition, however, falls outside the scope of this article [12, 18].
structure Eval0
= struct
datatype expval
= NULL
| PAIR of expval * expval
| CLOSURE of expval * (expval * expval list
-> expval * expval list)
(*
access : int * expval * expval list
*)
(*
-> expval * expval list
*)
fun access (0, PAIR (v1, v2), s)
= (v2, s)
| access (n, PAIR (v1, v2), s)
= access (n - 1, v1, s)
(*
eval : term * expval * expval list
*)
(*
-> expval * expval list
*)
fun eval (IND n, v, s)
= access (n, v, s)
| eval (ABS t, v, s)
= (CLOSURE (v, fn (v, s) => eval (t, v, s)), s)
| eval (APP (t0, t1), v, s)
= let val (v, v' :: s)
= eval (t0, v, v :: s)
val (v', (CLOSURE (v, f)) :: s)
= eval (t1, v', v :: s)
in f (PAIR (v, v'), s)
end
| eval (NIL, v, s)
= (NULL, s)
| eval (CONS (t1, t2), v, s)
= let val (v, v' :: s) = eval (t1, v, v :: s)
val (v, v' :: s) = eval (t2, v', v :: s)
in (PAIR (v', v), s)
end
| eval (CAR t, v, s)
= let val (PAIR (v1, v2), s) = eval (t, v, s)
in (v1, s)
end
| eval (CDR t, v, s)
= let val (PAIR (v1, v2), s) = eval (t, v, s)
in (v2, s)
end
(*
main : term -> expval
*)
fun main t
= let val (v, nil) = eval (t, NULL, nil)
in v
end
end
This evaluator evidently implements Hardin, Maranget, and
Pagano's X strategy [24, Section 6].
6.2
The abstract machine corresponding to
the CAM
As in Sections 2, 3, and 4, we can closure-convert the evaluator of
Section 6.1 by defunctionalizing its expressible values, transform
it into continuation-passing style, and defunctionalize its continuations
. The resulting abstract machine reads as follows, where t
denotes terms, v denotes expressible values, k denotes evaluation
contexts, and s denotes stacks of expressible values.
Source syntax:
t ::= n | λt | t0 t1 | nil | cons(t1, t2) | car t | cdr t
Expressible values (unit value, pairs, and closures) and evaluation contexts:
v ::= null | (v1, v2) | [v, t]
k ::= CONT0 | CONT1(t, k) | CONT2(k) | CONT3(t, k) | CONT4(k) | CONT5(k) | CONT6(k)
Initial transition, transition rules (two kinds), and final transition:
t                            ⇒_init   ⟨t, null, nil, CONT0⟩
⟨n, v, s, k⟩                 ⇒_eval   ⟨k, π(n, v), s⟩
⟨λt, v, s, k⟩                ⇒_eval   ⟨k, [v, t], s⟩
⟨nil, v, s, k⟩               ⇒_eval   ⟨k, null, s⟩
⟨t0 t1, v, s, k⟩             ⇒_eval   ⟨t0, v, v :: s, CONT1(t1, k)⟩
⟨cons(t1, t2), v, s, k⟩      ⇒_eval   ⟨t1, v, v :: s, CONT3(t2, k)⟩
⟨car t, v, s, k⟩             ⇒_eval   ⟨t, v, s, CONT5(k)⟩
⟨cdr t, v, s, k⟩             ⇒_eval   ⟨t, v, s, CONT6(k)⟩
⟨CONT1(t1, k), v, v' :: s⟩   ⇒_cont   ⟨t1, v', v :: s, CONT2(k)⟩
⟨CONT2(k), v', [v, t] :: s⟩  ⇒_cont   ⟨t, (v, v'), s, k⟩
⟨CONT3(t2, k), v, v' :: s⟩   ⇒_cont   ⟨t2, v', v :: s, CONT4(k)⟩
⟨CONT4(k), v, v' :: s⟩       ⇒_cont   ⟨k, (v', v), s⟩
⟨CONT5(k), (v1, v2), s⟩      ⇒_cont   ⟨k, v1, s⟩
⟨CONT6(k), (v1, v2), s⟩      ⇒_cont   ⟨k, v2, s⟩
⟨CONT0, v, nil⟩              ⇒_final  v
where π(0, (v1, v2)) = v2 and π(n, (v1, v2)) = π(n - 1, v1)
Variables n are represented by their de Bruijn index, and the abstract
machine consists of two mutually recursive transition functions
. The first transition function operates on quadruples consisting
of a term, an expressible value, a stack of expressible values,
and an evaluation context. The second transition function operates
on triples consisting of an evaluation context, an expressible value,
and a stack of expressible values.
This abstract machine embodies the evaluation model of the CAM.
Naturally, more intuitive names could be chosen instead of
CONT0
,
CONT1
, etc.
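For concreteness, here is one possible ML transcription of this abstract machine (our own sketch; the section above gives only the transition rules). The eval transition function operates on a term, the register, the stack, and an evaluation context; the continue transition function operates on an evaluation context, a value, and the stack:
structure Machine
= struct
    datatype expval = NULL
                    | PAIR of expval * expval
                    | CLOSURE of expval * term
    datatype cont = CONT0
                  | CONT1 of term * cont
                  | CONT2 of cont
                  | CONT3 of term * cont
                  | CONT4 of cont
                  | CONT5 of cont
                  | CONT6 of cont
    (* access : int * expval -> expval *)
    fun access (0, PAIR (v1, v2)) = v2
      | access (n, PAIR (v1, v2)) = access (n - 1, v1)
    (* eval : term * expval * expval list * cont -> expval *)
    fun eval (IND n, v, s, k)
        = continue (k, access (n, v), s)
      | eval (ABS t, v, s, k)
        = continue (k, CLOSURE (v, t), s)
      | eval (NIL, v, s, k)
        = continue (k, NULL, s)
      | eval (APP (t0, t1), v, s, k)
        = eval (t0, v, v :: s, CONT1 (t1, k))
      | eval (CONS (t1, t2), v, s, k)
        = eval (t1, v, v :: s, CONT3 (t2, k))
      | eval (CAR t, v, s, k)
        = eval (t, v, s, CONT5 k)
      | eval (CDR t, v, s, k)
        = eval (t, v, s, CONT6 k)
    (* continue : cont * expval * expval list -> expval *)
    and continue (CONT1 (t1, k), v, v' :: s)
        = eval (t1, v', v :: s, CONT2 k)
      | continue (CONT2 k, v', (CLOSURE (v, t)) :: s)
        = eval (t, PAIR (v, v'), s, k)
      | continue (CONT3 (t2, k), v, v' :: s)
        = eval (t2, v', v :: s, CONT4 k)
      | continue (CONT4 k, v, v' :: s)
        = continue (k, PAIR (v', v), s)
      | continue (CONT5 k, PAIR (v1, v2), s)
        = continue (k, v1, s)
      | continue (CONT6 k, PAIR (v1, v2), s)
        = continue (k, v2, s)
      | continue (CONT0, v, nil)
        = v
    (* main : term -> expval *)
    fun main t
        = eval (t, NULL, nil, CONT0)
  end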
Conclusion and issues
We have presented a constructive correspondence between functional
evaluators and abstract machines.
This correspondence
builds on off-the-shelf program transformations: closure conversion
, CPS transformation, defunctionalization, and inlining.³ We
have shown how to reconstruct known machines (Krivine's machine
, the CEK machine, the CLS machine, and the SECD machine
) and how to construct new ones. Conversely, we have revealed
the denotational content of known abstract machines. We
have shown that Krivine's abstract machine and the CEK machine
correspond to canonical evaluators for the
λ-calculus. We have also
shown that they are dual of each other since they correspond to
call-by-name and call-by-value evaluators in the same direct style.
In terms of denotational semantics [27, 34], Krivine's machine and
the CEK machine correspond to a standard semantics, whereas the
CLS machine and the SECD machine correspond to a stack semantics
of the
λ-calculus. Finally, we have exhibited the abstract machine
corresponding to the CAM, which puts the reader in a new
position to answer the recurrent question as to whether the CLS
machine is closer to the CAM or to the SECD machine.
³ Indeed, the push-enter twist of Krivine's machine is obtained by inlining apply_cont in Section 2.1.5.
Since this article was written, we have studied the correspondence
between functional evaluators and abstract machines for call by
need [2] and for Propositional Prolog [4]. In both cases, we derived
sensible machines out of canonical evaluators.
It seems to us that this correspondence between functional evaluators
and abstract machines builds a reliable bridge between denotational
definitions and definitions of abstract machines. On the one
hand, it allows one to identify the denotational content of an abstract
machine in the form of a functional interpreter. On the other
hand, it gives one a precise and generic recipe to construct arbitrarily
many new variants of abstract machines (e.g., with substitutions
or environments, or with stacks) or of arbitrarily many new abstract
machines, starting from an evaluator with any given computational
monad [28].
Acknowledgments:
We are grateful to Malgorzata Biernacka, Julia Lawall, and Henning
Korsholm Rohde for timely comments. Thanks are also due to
the anonymous reviewers.
This work is supported by the ESPRIT Working Group APPSEM II
(
http://www.appsem.org
).
References
[1] Mads Sig Ager, Dariusz Biernacki, Olivier Danvy, and Jan Midtgaard. From interpreter to compiler and virtual machine: a functional derivation. Technical Report BRICS RS-03-14, DAIMI, Department of Computer Science, University of Aarhus, Aarhus, Denmark, March 2003.
[2] Mads Sig Ager, Olivier Danvy, and Jan Midtgaard. A functional correspondence between call-by-need evaluators and lazy abstract machines. Technical Report BRICS RS-03-24, DAIMI, Department of Computer Science, University of Aarhus, Aarhus, Denmark, June 2003.
[3] Anindya Banerjee, Nevin Heintze, and Jon G. Riecke. Design and correctness of program transformations based on control-flow analysis. In Naoki Kobayashi and Benjamin C. Pierce, editors, Theoretical Aspects of Computer Software, 4th International Symposium, TACS 2001, number 2215 in Lecture Notes in Computer Science, Sendai, Japan, October 2001. Springer-Verlag.
[4] Dariusz Biernacki and Olivier Danvy. From interpreter to logic engine: A functional derivation. Technical Report BRICS RS-03-25, DAIMI, Department of Computer Science, University of Aarhus, Aarhus, Denmark, June 2003. Accepted for presentation at LOPSTR 2003.
[5] Rod M. Burstall and John Darlington. A transformational system for developing recursive programs. Journal of the ACM, 24(1):44-67, 1977.
[6] Guy Cousineau, Pierre-Louis Curien, and Michel Mauny. The categorical abstract machine. Science of Computer Programming, 8(2):173-202, 1987.
[7] Pierre Crégut. An abstract machine for lambda-terms normalization. In Mitchell Wand, editor, Proceedings of the 1990 ACM Conference on Lisp and Functional Programming, pages 333-340, Nice, France, June 1990. ACM Press.
[8] Pierre-Louis Curien. Categorical Combinators, Sequential Algorithms and Functional Programming. Progress in Theoretical Computer Science. Birkhäuser, 1993.
[9] Pierre-Louis Curien and Hugo Herbelin. The duality of computation. In Philip Wadler, editor, Proceedings of the 2000 ACM SIGPLAN International Conference on Functional Programming, SIGPLAN Notices, Vol. 35, No. 9, pages 233-243, Montreal, Canada, September 2000. ACM Press.
[10] Olivier Danvy. Back to direct style. Science of Computer Programming, 22(3):183-195, 1994.
[11] Olivier Danvy. A lambda-revelation of the SECD machine. Technical Report BRICS RS-02-53, DAIMI, Department of Computer Science, University of Aarhus, Aarhus, Denmark, December 2002.
[12] Olivier Danvy and Andrzej Filinski. Representing control, a study of the CPS transformation. Mathematical Structures in Computer Science, 2(4):361-391, 1992.
[13] Olivier Danvy and John Hatcliff. CPS transformation after strictness analysis. ACM Letters on Programming Languages and Systems, 1(3):195-212, 1993.
[14] Olivier Danvy and John Hatcliff. On the transformation between direct and continuation semantics. In Stephen Brookes, Michael Main, Austin Melton, Michael Mislove, and David Schmidt, editors, Proceedings of the 9th Conference on Mathematical Foundations of Programming Semantics, number 802 in Lecture Notes in Computer Science, pages 627-648, New Orleans, Louisiana, April 1993. Springer-Verlag.
[15] Olivier Danvy and Lasse R. Nielsen. Defunctionalization at work. In Harald Søndergaard, editor, Proceedings of the Third International ACM SIGPLAN Conference on Principles and Practice of Declarative Programming (PPDP'01), pages 162-174, Firenze, Italy, September 2001. ACM Press. Extended version available as the technical report BRICS RS-01-23.
[16] Matthias Felleisen and Matthew Flatt. Programming languages and lambda calculi. Unpublished lecture notes. http://www.ccs.neu.edu/home/matthias/3810-w02/readings.html, 1989-2003.
[17] Matthias Felleisen and Daniel P. Friedman. Control operators, the SECD machine, and the λ-calculus. In Martin Wirsing, editor, Formal Description of Programming Concepts III, pages 193-217. Elsevier Science Publishers B.V. (North-Holland), Amsterdam, 1986.
[18] Andrzej Filinski. Representing monads. In Hans-J. Boehm, editor, Proceedings of the Twenty-First Annual ACM Symposium on Principles of Programming Languages, pages 446-457, Portland, Oregon, January 1994. ACM Press.
[19] Cormac Flanagan, Amr Sabry, Bruce F. Duba, and Matthias Felleisen. The essence of compiling with continuations. In David W. Wall, editor, Proceedings of the ACM SIGPLAN'93 Conference on Programming Languages Design and Implementation, SIGPLAN Notices, Vol. 28, No. 6, pages 237-247, Albuquerque, New Mexico, June 1993. ACM Press.
[20] Daniel P. Friedman, Mitchell Wand, and Christopher T. Haynes. Essentials of Programming Languages, second edition. The MIT Press, 2001.
[21] Benjamin Grégoire and Xavier Leroy. A compiled implementation of strong reduction. In Simon Peyton Jones, editor, Proceedings of the 2002 ACM SIGPLAN International Conference on Functional Programming, SIGPLAN Notices, Vol. 37, No. 9, pages 235-246, Pittsburgh, Pennsylvania, September 2002. ACM Press.
[22] Chris Hankin. Lambda Calculi, a Guide for Computer Scientists, volume 1 of Graduate Texts in Computer Science. Oxford University Press, 1994.
[23] John Hannan and Dale Miller. From operational semantics to abstract machines. Mathematical Structures in Computer Science, 2(4):415-459, 1992.
[24] Thérèse Hardin, Luc Maranget, and Bruno Pagano. Functional runtime systems within the lambda-sigma calculus. Journal of Functional Programming, 8(2):131-172, 1998.
[25] Jean-Louis Krivine. Un interprète du λ-calcul. Brouillon. Available online at http://www.logique.jussieu.fr/~krivine, 1985.
[26] Peter J. Landin. The mechanical evaluation of expressions. The Computer Journal, 6(4):308-320, 1964.
[27] Robert E. Milne and Christopher Strachey. A Theory of Programming Language Semantics. Chapman and Hall, London, and John Wiley, New York, 1976.
[28] Eugenio Moggi. Notions of computation and monads. Information and Computation, 93:55-92, 1991.
[29] Lasse R. Nielsen. A denotational investigation of defunctionalization. Technical Report BRICS RS-00-47, DAIMI, Department of Computer Science, University of Aarhus, Aarhus, Denmark, December 2000.
[30] Gordon D. Plotkin. Call-by-name, call-by-value and the λ-calculus. Theoretical Computer Science, 1:125-159, 1975.
[31] John C. Reynolds. Definitional interpreters for higher-order programming languages. Higher-Order and Symbolic Computation, 11(4):363-397, 1998. Reprinted from the proceedings of the 25th ACM National Conference (1972).
[32] Kristoffer H. Rose. Explicit substitution tutorial & survey. BRICS Lecture Series LS-96-3, DAIMI, Department of Computer Science, University of Aarhus, Aarhus, Denmark, September 1996.
[33] Amr Sabry and Matthias Felleisen. Reasoning about programs in continuation-passing style. Lisp and Symbolic Computation, 6(3/4):289-360, 1993.
[34] David A. Schmidt. Denotational Semantics: A Methodology for Language Development. Allyn and Bacon, Inc., 1986.
[35] Guy L. Steele Jr. and Gerald J. Sussman. The art of the interpreter or, the modularity complex (parts zero, one, and two). AI Memo 453, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, May 1978.
[36] Joseph Stoy. Some mathematical aspects of functional programming. In John Darlington, Peter Henderson, and David A. Turner, editors, Functional Programming and its Applications. Cambridge University Press, 1982.
[37] Joseph E. Stoy. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. The MIT Press, 1977.
[38] Christopher Strachey. Fundamental concepts in programming languages. Higher-Order and Symbolic Computation, 13(1/2):1-49, 2000.
[39] Philip Wadler. The essence of functional programming (invited talk). In Andrew W. Appel, editor, Proceedings of the Nineteenth Annual ACM Symposium on Principles of Programming Languages, pages 1-14, Albuquerque, New Mexico, January 1992. ACM Press.
[40] Mitchell Wand. A short proof of the lexical addressing algorithm. Information Processing Letters, 35:1-5, 1990.
[41] Glynn Winskel. The Formal Semantics of Programming Languages. Foundation of Computing Series. The MIT Press, 1993.
| call-by-name;Interpreters;closure conversion;evaluator;defunctionalization;call-by-value;transformation into continuation-passing style (CPS);abstract machines;abstract machine |
110 | Indexing Multi-Dimensional Time-Series with Support for Multiple Distance Measures | Although most time-series data mining research has concentrated on providing solutions for a single distance function, in this work we motivate the need for a single index structure that can support multiple distance measures. Our specific area of interest is the efficient retrieval and analysis of trajectory similarities. Trajectory datasets are very common in environmental applications, mobility experiments, video surveillance and are especially important for the discovery of certain biological patterns. Our primary similarity measure is based on the Longest Common Subsequence (LCSS) model, that offers enhanced robustness, particularly for noisy data, which are encountered very often in real world applications . However, our index is able to accommodate other distance measures as well, including the ubiquitous Euclidean distance, and the increasingly popular Dynamic Time Warping (DTW). While other researchers have advocated one or other of these similarity measures, a major contribution of our work is the ability to support all these measures without the need to restructure the index. Our framework guarantees no false dismissals and can also be tailored to provide much faster response time at the expense of slightly reduced precision/recall. The experimental results demonstrate that our index can help speed-up the computation of expensive similarity measures such as the LCSS and the DTW. | INTRODUCTION
In this work we present an efficient and compact, external
memory index for fast detection of similar trajectories. Trajectory
data are prevalent in diverse fields of interest such
as meteorology, GPS tracking, wireless applications, video
tracking [5] and motion capture [18]. Recent advances in
mobile computing, sensor and GPS technology have made it
possible to collect large amounts of spatiotemporal data and
The research of this author was supported by NSF ITR 0220148, NSF
CAREER 9907477, NSF IIS 9984729, and NRDRP
there is increasing interest in performing data analysis tasks
over such data [17]. In mobile computing, users equipped
with mobile devices move in space and register their location
at different time instances to spatiotemporal databases
via wireless links. In environmental information systems,
tracking animals and weather conditions is very common
and large datasets can be created by storing locations of observed
objects over time. Human motion data generated by
tracking simultaneously various body joints are also multidimensional
trajectories. In this field of computer graphics
fundamental operations include the clustering of similar
movements, leading to a multitude of applications such as interactive
generation of motions [2]. Spatiotemporal data are
also produced by migrating particles in biological sciences,
where the focus can be on the discovery of subtle patterns
during cellular mitoses [19]. In general, any dataset that
involves storage of multiple streams (attributes) of data can
be considered and treated as a multidimensional trajectory.
One very common task for such data is the discovery of
objects that follow a certain motion pattern, for purposes
of clustering or classification. The objective here is to efficiently
organize trajectories on disk, so that we can quickly
answer k-Nearest-Neighbors (kNN) queries. A frequent obstacle
in the analysis of spatiotemporal data, is the presence
of noise, which can be introduced due to electromagnetic
anomalies, transceiver problems etc. Another impediment
is that objects may move in a similar way, but at different
speeds. So, we would like our similarity model to be robust
to noise, support elastic and imprecise matches.
Choosing the Euclidean distance as the similarity model
is unrealistic, since its performance degrades rapidly in the
presence of noise and this measure is also sensitive to small
variations in the time axis. We concentrate on two similarity
models: the first is an extension of Dynamic Time
Warping for higher dimensions. We note that DTW has
been used so far for one-dimensional time series. Here we
present a formulation for sequences of arbitrary dimensions.
The second distance measure is a modification of the Longest
Common Subsequence (LCSS), specially adapted for continuous
values. Both measures represent a significant improvement
compared to the Euclidean distance. However, LCSS
is more robust than DTW under noisy conditions [20] as figure
1 shows. Euclidean matching completely disregards the
variations in the time axis, while DTW performs excessive
matchings, therefore distorting the true distance between sequences
. The LCSS produces the most robust and intuitive
correspondence between points.
By incorporating warping in time as a requirement to
[Figure 1 panels: Euclidean Matching, Time Warping, Longest Common Subsequence.]
Figure 1: An example of the matching quality of the LCSS compared to other distance functions. The Euclidean distance performs an inflexible matching, while the DTW gives many superfluous and spurious matchings in the presence of noise.
our model, our algorithms are automatically challenged with
quadratic execution time. Moreover, these flexible functions
are typically non-metric, which makes difficult the design of
indexing structures. To speed up the execution of a similarity
function, one can devise a low cost, upper bounding function
(since the LCSS model captures similarity, which is inversely related to distance). We utilize a fast
prefiltering scheme that will return upper bound estimates
for the LCSS similarity between the query and the indexed
trajectories. In addition to providing similarity measures
that guarantee no false dismissals, we also propose approximate
similarity estimates that significantly reduce the index
response time. Finally, we show that the same index can
support other distance measures as well.
Our technique works by splitting the trajectories in multidimensional
MBRs and storing them in an R-tree. For a
given query, we construct a Minimum Bounding Envelope
(MBE) that covers all the possible matching areas of the
query under warping conditions. This MBE is decomposed
into MBRs and then probed in the R-tree index. Using the
index we can discover which trajectories could potentially
be similar to the query. The index size is compact and its
construction time scales well with the trajectory length and
the database size, therefore our method can be utilized for
massive datamining tasks.
The main contributions of the paper are:
We present the first external memory index for multidimensional
trajectories, that supports multiple distance
functions (such as LCSS, DTW and Euclidean), without the
need to rebuild the index.
We give efficient techniques for upper(lower) bounding
and for approximating the LCSS(DTW) for a set of trajectories
. We incorporate these techniques in the design of an
efficient indexing structure for the LCSS and the DTW.
We provide a flexible method that allows the user to
specify queries of variable warping length, and the technique
can be tuned to optimize the retrieval time or the accuracy
of the solution.
RELATED WORK
There has been a wealth of papers that use an L_p distance family function to perform similarity matching for
1D time-series. Work on multidimensional sequences can
be found in [14, 9]. However, they support only Euclidean
distance, which, as mentioned in the introduction, cannot
capture flexible similarities.
Although the vast majority of database/data mining research
on time series data mining has focused on Euclidean
distance, virtually all real world systems that use time series
matching as a subroutine, use a similarity measure which allows
warping. In retrospect, this is not very surprising, since
most real world processes, particularly biological processes,
can evolve at varying rates. For example, in bioinformatics, it is well understood that functionally related genes will
express themselves in similar ways, but possibly at different
rates. Because of this, DTW is used for gene expression data
mining [1, 3]. Dynamic Time Warping is a ubiquitous tool
in the biometric/surveillance community. It has been used
for tracking time series extracted from video [7], classifying
handwritten text [16] and even fingerprint indexing [13].
While the above examples testify to the utility of a time
warped distance measure, they all echo the same complaint;
DTW has serious scalability issues. Work that attempted to
mitigate the large computational cost has appeared in [12]
and [21], where the authors use lower bounding measures to
speed up the execution of DTW. However, the lower bounds
can be loose approximations of the original distance, when
the data are normalized. In [15] a different approach is used
for indexing Time Warping, by using suffix trees. Nonetheless
, the index requires excessive disk space (about 10 times
the size of the original data).
The flexibility provided by DTW is very important, however
its efficiency deteriorates for noisy data, since by matching
all the points, it also matches the outliers distorting the
true distance between the sequences. An alternative approach
is the use of Longest Common Subsequence (LCSS),
which is a variation of the edit distance. The basic idea is
to match two sequences by allowing them to stretch, without
rearranging the order of the elements but allowing some
elements to be unmatched.
Using the LCSS of two sequences
, one can define the distance using the length of this
subsequence [6]. In [20] an internal memory index for the
LCSS has been proposed. It also demonstrated that while
the LCSS presents similar advantages to DTW, it does not
share its volatile performance in the presence of outliers.
Closest in spirit to our approach is the work of [10], which,
however, only addresses 1D time-series. The author uses
constrained DTW as the distance function, and surrounds
the possible matching regions by a modified version of a
Piecewise Approximation, which is later stored as equi-length
MBRs in an R-tree.
However, by using DTW, such an
approach is susceptible to high bias of outliers. Also, the
fixed MBR size (although it simplifies the index operations)
can lead to degenerate approximations of the original sequence
. Moreover, the embedding of the envelope in the
indexed sequences can slow the index construction time and
limit the user's query capabilities to a predefined warping
length. The use of LCSS as our primary similarity measure,
lends itself to a more natural use of the R-tree, where the
similarity estimates are simply computed by calculating the
MBR intersection areas. Since the index is not constructed
for a specific warping window, the user can pose queries with
variable warping length.
The purpose of this paper is to reconcile the best of both
worlds. We provide a framework that can support in the
same index, the LCSS, DTW and Euclidean distance functions
. The only aspect that changes, is the different representation
of the query for each distance measure.
DISTANCE MEASURES
In this section we present details of how the Dynamic
Time Warping and the LCSS model can be extended to describe
the similarity between trajectories.
3.1 Dynamic Time Warping for 2D trajectories
We describe an extension in 2D of the original DTW function
as described by Berndt and Clifford [4]. Let A and B
be two trajectories of moving objects with size n and m
respectively, where A = ((a_{x,1}, a_{y,1}), . . . , (a_{x,n}, a_{y,n})) and B = ((b_{x,1}, b_{y,1}), . . . , (b_{x,m}, b_{y,m})). For a trajectory A, let Head(A) = ((a_{x,1}, a_{y,1}), . . . , (a_{x,n-1}, a_{y,n-1})).
Definition 1. The Time Warping between 2-dimensional sequences A and B is:
DTW(A, B) = L_p((a_{x,n}, a_{y,n}), (b_{x,m}, b_{y,m})) + min{ DTW(Head(A), Head(B)), DTW(Head(A), B), DTW(A, Head(B)) }    (1)
where L_p is any p-norm. Using dynamic programming and constraining the matching region within δ, the time required to compute the DTW is O((n + m)δ). In order to represent an accurate relationship of distances between sequences with different lengths, the quantity in equation 1 is normalized by the length of the warping path. The extension to n dimensions is similar. In figure 2 we show an example of time warping for two trajectories.
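For concreteness, the following Standard ML sketch (our own, not from the paper) implements Definition 1 with the Euclidean norm as L_p; the naive recursion mirrors the definition, and both the normalization by the warping-path length and the δ-constrained window are omitted for brevity:
(* dtw : (real * real) list * (real * real) list -> real
   Trajectories are lists of (x, y) points. *)
fun dtw (a : (real * real) list, b : (real * real) list) : real =
    let fun dist ((ax, ay), (bx, by)) =
            Math.sqrt ((ax - bx) * (ax - bx) + (ay - by) * (ay - by))
        (* the lists are reversed so that the head is the last point,
           matching the Head(.)-based recursion of Definition 1 *)
        fun go (nil, nil) = 0.0
          | go (nil, _) = Real.posInf
          | go (_, nil) = Real.posInf
          | go (pa :: a', pb :: b') =
            dist (pa, pb)
            + Real.min (go (a', b'),
                        Real.min (go (a', pb :: b'), go (pa :: a', b')))
    in go (rev a, rev b)
    end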
3.2 LCSS model for 2D trajectories
The original LCSS model refers to 1D sequences, we must
therefore extend it to the 2D case. In addition, the LCSS
paradigm matches discrete values, however in our model we
want to allow a matching, when the values are within a
certain range in space and time (note that like this, we also
avoid distant and degenerate matchings).
Definition 2. Given an integer δ and a real number 0 < ε < 1, we define LCSS_{δ,ε}(A, B) as follows:
LCSS_{δ,ε}(A, B) =
  0,  if A or B is empty;
  1 + LCSS_{δ,ε}(Head(A), Head(B)),  if |a_{x,n} - b_{x,m}| < ε and |a_{y,n} - b_{y,m}| < ε and |n - m| ≤ δ;
  max( LCSS_{δ,ε}(Head(A), B), LCSS_{δ,ε}(A, Head(B)) ),  otherwise
[Figure 2 axes: X movement, Time, Y movement.]
Figure 2: The support of flexible matching in spatiotemporal
queries is very important. However, we
can observe that Dynamic Time Warping matches
all points (so the outliers as well), therefore distorting
the true distance. In contrast, the LCSS model
can efficiently ignore the noisy parts.
where sequences A and Head(A) are defined similarly as
before. The constant δ controls the flexibility of matching in time and the constant ε is the matching threshold in space. The aforementioned LCSS model has the same O((n + m)δ) computational complexity as the DTW, when we only allow a matching window δ in time [6].
The value of LCSS is unbounded and depends on the
length of the compared sequences. We need to normalize
it, in order to support sequences of variable length. The
distance derived from the LCSS similarity can be defined as
follows:
Definition 3. The distance D_{δ,ε}, expressed in terms of the LCSS similarity between two trajectories A and B, is given by:
D_{δ,ε}(A, B) = 1 - LCSS_{δ,ε}(A, B) / min(n, m)    (2)
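As a companion to Definitions 2 and 3, here is a Standard ML sketch (ours, not from the paper) of the LCSS_{δ,ε} similarity and of the distance D_{δ,ε}; a dynamic-programming table would replace the naive recursion in practice:
(* lcss : (int * real) -> ((real * real) list * (real * real) list) -> int
   delta is the matching window in time, epsilon the threshold in space. *)
fun lcss (delta : int, epsilon : real)
         (a : (real * real) list, b : (real * real) list) : int =
    let (* lists are reversed so that the head is the last point,
           matching the Head(.)-based recursion of Definition 2 *)
        fun go (nil, _, _, _) = 0
          | go (_, nil, _, _) = 0
          | go (aa as ((ax, ay) :: arest), bb as ((bx, by) :: brest), n, m) =
            if Real.abs (ax - bx) < epsilon
               andalso Real.abs (ay - by) < epsilon
               andalso Int.abs (n - m) <= delta
            then 1 + go (arest, brest, n - 1, m - 1)
            else Int.max (go (arest, bb, n - 1, m), go (aa, brest, n, m - 1))
    in go (rev a, rev b, length a, length b)
    end
(* lcssDistance : the distance of Definition 3 *)
fun lcssDistance (delta, epsilon) (a, b) =
    1.0 - real (lcss (delta, epsilon) (a, b))
          / real (Int.min (length a, length b))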
INDEX CONSTRUCTION
Even though imposing a matching window δ can help speed up the execution, the computation can still be quadratic when δ is a significant portion of the sequence's length.
Therefore, comparing a query to all the trajectories becomes
intractable for large databases. We are seeking ways to avoid
examining the trajectories that are very distant to our query.
This can be accomplished by discovering a close match to
our query, as early as possible. A fast pre-filtering step is
employed that eliminates the majority of distant matches.
Only for some qualified sequences will we execute the costly
(but accurate) quadratic time algorithm. This philosophy
has also been successfully used in [21, 10]. There are certain
preprocessing steps that we follow:
1. The trajectories are segmented into MBRs, which are stored in an R-tree T.
2. Given a query Q, we discover the areas of possible matching by constructing its Minimum Bounding Envelope (MBE_Q).
3. MBE_Q is decomposed into MBRs that are probed in the index T.
4. Based on the MBR intersections, similarity estimates are computed and the exact LCSS (or DTW) is performed only on the qualified trajectories.
The above notions are illustrated in figure 3 and we explain
in detail how they can be applied for the LCSS case in the
sections that follow.
[Figure 3 panels: A. Query Q; B. Query Envelope; C. Envelope Splitting; D. Sequence MBRs; E. LCSS Upper Bound Estimate = L1+L2+L3.]
Figure 3: An example of our approach (in 1D for clarity); a query is extended into a bounding envelope, which in turn is also split into the resulting MBRs. Overlap between the query and the index MBRs suggests areas of possible matching.
4.1 Bounding the Matching Regions
Let us first consider a 1D time-series and let a sequence A be (a_{x,1}, . . . , a_{x,n}). Ignoring for now the parameter ε, we would like to perform a very fast LCSS_δ match between sequence A and some query Q. Suppose that we replicate each point Q_i for δ time instances before and after time i.
The envelope that includes all these points defines the areas
of possible matching. Everything outside this envelope can
never be matched.
[Figure 4: query Q and sequence A; the envelope of Q has width 2δ in time; 40 pts and 6 pts of A fall inside it.]
Figure 4: The Minimum Bounding Envelope (MBE) within δ in time and ε in space of a sequence. Everything that lies outside this envelope can never be matched.
We call this envelope the Minimum Bounding Envelope (MBE) of a sequence. Also, once we incorporate the matching within ε in space, this envelope should extend ε above and below the original envelope (figure 4). The notion of the bounding envelope can be trivially extended to more dimensions, where MBE(δ, ε) for a 2D trajectory Q = ((q_{x,1}, q_{y,1}), . . . , (q_{x,n}, q_{y,n})) covers the area between the following time-series:
EnvLow ≤ MBE(δ, ε) ≤ EnvHigh, where:
EnvHigh[i] = max(Q[j] + ε),  over all j with |i - j| ≤ δ
EnvLow[i]  = min(Q[j] - ε),  over all j with |i - j| ≤ δ
The LCSS similarity between the envelope of Q and a sequence A is defined as:
LCSS(MBE_Q, A) = Σ_{i=1..n} ( 1 if A[i] is within the envelope, 0 otherwise )
For example, in figure 4 the LCSS similarity between MBE_Q and sequence A is 46, as indicated in the figure. This value represents an upper bound for the similarity of Q and A. We can use the MBE_Q to compute a lower bound on the distance between trajectories:
Lemma 1. For any two trajectories Q and A the following holds: D_{δ,ε}(MBE_Q, A) ≤ D_{δ,ε}(Q, A).
Proof (Sketch): D_{δ,ε}(MBE_Q, A) = 1 - LCSS_{δ,ε}(MBE_Q, A) / min(|Q|, |A|), therefore it is sufficient to show that LCSS_{δ,ε}(MBE_Q, A) ≥ LCSS_{δ,ε}(Q, A). This is true since MBE_Q by construction contains all possible matching areas within δ and ε of the query Q. Therefore, no possible matching points will be missed. □
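A 1D sketch in Standard ML (ours, not from the paper) of the envelope construction and of the upper-bound similarity LCSS(MBE_Q, A): point i of A contributes 1 exactly when it lies between EnvLow[i] and EnvHigh[i]:
(* envelope : (int * real) -> real vector -> real vector * real vector *)
fun envelope (delta : int, epsilon : real) (q : real vector) =
    let val n = Vector.length q
        (* fold f over the window of q centered at i with radius delta *)
        fun window (i, f, init) =
            let fun loop (j, acc) =
                    if j > Int.min (n - 1, i + delta) then acc
                    else loop (j + 1, f (Vector.sub (q, j), acc))
            in loop (Int.max (0, i - delta), init)
            end
        val envHigh =
            Vector.tabulate (n, fn i => window (i, Real.max, Real.negInf) + epsilon)
        val envLow =
            Vector.tabulate (n, fn i => window (i, Real.min, Real.posInf) - epsilon)
    in (envLow, envHigh)
    end
(* upperBoundLCSS : counts the points of A that fall inside the envelope *)
fun upperBoundLCSS (envLow : real vector, envHigh : real vector) (a : real vector) =
    let val n = Int.min (Vector.length a, Vector.length envLow)
        fun count (i, acc) =
            if i = n then acc
            else count (i + 1,
                        if Vector.sub (a, i) >= Vector.sub (envLow, i)
                           andalso Vector.sub (a, i) <= Vector.sub (envHigh, i)
                        then acc + 1 else acc)
    in count (0, 0)
    end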
The previous lemma provides us with the power to create
an index that guarantees no false dismissals. However, this
lower bound refers to the raw data. In the sections that follow
, we will 'split' the MBE of a trajectory into a number
of Minimum Bounding Rectangles (MBRs), to accommodate
their storage into a multidimensional R-tree. We will
show that the above inequality still holds between trajectory
MBRs.
The MBR generation procedure is orthogonal to our approach
, since any segmentation methodology can be applied
to our framework. Therefore, the description of the potential
MBR generation methods (and of our implementation
choice) will be delayed until later.
QUICK PRUNING OF DISSIMILAR TRAJECTORIES
Suppose that we have an index with the segmented trajectories
and the user provides a query Q. Our goal is the
discovery of the k closest trajectories to the given query, according
to the LCSS similarity. A prefiltering step will aid
the quick discovery of a close match to the query, helping
us discard the distant trajectories without using the costly
quadratic algorithm. Therefore, in this phase, we compute
upper bound estimates of the similarity between the query
and the indexed sequences using their MBRs.
Below we describe the algorithm to find the closest trajectory
to a given query:
Input: Query Q, Index I with trajectory MBRs, Method
Output: Most similar trajectory to Q.
Box Env = constructMBE_{δ,ε}(Q);
Vector V_Q = CreateMBRs(Env);
    // V_Q contains a number of boxes.
Priority queue PQ = ∅;
    // PQ keeps one entry per trajectory, sorted
    // according to the similarity estimate.
for each box B in V_Q:
    V = I.intersectionQuery(B);
        // V contains all trajectory MBRs that intersect with B.
    if Method == Exact:      // upper bound
        PQ <- computeL-SimilarityEstimates(V, B);
    else:                    // approximate
        PQ <- computeV-SimilarityEstimates(V, B);
BestSoFar = 0; Best = ∅;
while PQ not empty:
    E <- PQ.top;
    if E.estimate < BestSoFar: break;
    else:
        D = computeLCSS_{δ,ε}(Q, E);   // exact
        if D > BestSoFar:
            BestSoFar = D; Best <- E;
Report Best;
The above algorithm can be adjusted to return the kNN sequences, simply by comparing with the k-th bestSoFar match. Next, we examine the possible similarity estimates. Some of them guarantee that we will find the best match (they lower bound the original distance or upper bound the original similarity), while other estimates provide faster but approximate results.
5.1 Similarity Estimates
Here we will show how to compute estimates of the LCSS
similarity, based on the geometric properties of the trajectory
MBRs and their intersection. An upper bound estimate
is provided by the length of the MBR intersection and an
approximate estimate is given as a parameter of the intersecting
volume. To formalize these notions, first we present
several operators. Then we will use these operators to derive
the estimates.
5.1.1 Estimates for the LCSS
Each trajectory T can be decomposed into a number of
MBRs. The i-th 3D MBR of T consists of six numbers: M_{T,i} = {t_l, t_h, x_l, x_h, y_l, y_h}. Now, let us define the operators ∩_t^{(c)}, ∩_t^{(p)} and ∩_V between two 3D MBRs M_{P,i} and M_{R,j}, belonging to objects P and R, respectively:
1. ∩_t^{(c)}(M_{P,i}, M_{R,j}) = ||Intersection||_t, where
M_{R,j}.x_l ≤ M_{P,i}.x_l ≤ M_{R,j}.x_h and M_{R,j}.x_l ≤ M_{P,i}.x_h ≤ M_{R,j}.x_h and
M_{R,j}.y_l ≤ M_{P,i}.y_l ≤ M_{R,j}.y_h and M_{R,j}.y_l ≤ M_{P,i}.y_h ≤ M_{R,j}.y_h,
or similarly by exchanging the roles of M_{R,j} and M_{P,i}.
Therefore, this operator computes the time intersection of two MBRs when one fully contains the other in the x,y dimensions.
2. ∩_t^{(p)}(M_{P,i}, M_{R,j}) = ||Intersection||_t, otherwise.
3. ∩_V(M_{P,i}, M_{R,j}) = ||Intersection||_t · ||Intersection||_x · ||Intersection||_y
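To make these operators concrete, here is a small Python sketch (hypothetical helper names; each MBR is represented as a dict of (lo, hi) ranges per axis) computing the time intersection, the spatial-containment test that separates the complete and partial cases, and the volume intersection. Summing the time intersections over all query/trajectory MBR pairs, routed according to the containment flag, yields the two lists used by the L-similarity estimate below.

    def overlap(a, b):
        # Length of the overlap of two 1D intervals a = (lo, hi), b = (lo, hi).
        return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

    def contains(outer, inner, dims=("x", "y")):
        # True if 'outer' fully contains 'inner' in the given spatial dimensions.
        return all(outer[d][0] <= inner[d][0] and inner[d][1] <= outer[d][1] for d in dims)

    def time_intersection(m1, m2):
        # Returns (||Intersection||_t, complete), where complete is True when one MBR
        # fully contains the other in x,y (the complete/partial distinction).
        t = overlap(m1["t"], m2["t"])
        return t, (contains(m1, m2) or contains(m2, m1))

    def volume_intersection(m1, m2):
        # ||Intersection||_t * ||Intersection||_x * ||Intersection||_y
        return overlap(m1["t"], m2["t"]) * overlap(m1["x"], m2["x"]) * overlap(m1["y"], m2["y"])

    q_box = {"t": (0.0, 10.0), "x": (0.0, 2.0), "y": (0.0, 2.0)}
    p_box = {"t": (5.0, 20.0), "x": (-1.0, 3.0), "y": (-1.0, 3.0)}   # spatially contains q_box
    print(time_intersection(q_box, p_box))    # (5.0, True) -> contributes to L_{t,complete}
    print(volume_intersection(q_box, p_box))  # 5.0 * 2.0 * 2.0 = 20.0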
We can use upper bound or approximate estimates for the
similarity:
[Figure 5 panels: intersection between two MBRs; the intersection of MBRs fully contained within one MBR; common volume intersection.]
Figure 5: Top left: Intersection recorded in list L_{t,partial}. Top right: Intersection recorded in list L_{t,complete}. Bottom left: Percentage of volume intersection kept in L_V.
1. Upper bound estimates (L-similarity estimate).
Such estimates are computed using the following data-structures:
The list L_{t,complete}, an element L(P) of which is defined as:

L(P) = Σ_m Σ_n (M_{Q,m} ∩_t^{(c)} M_{P,n})

where Q is a query and P is a trajectory in the index. So the list stores for each trajectory the total time that its MBRs intersected with the query's MBRs. We record into this list only the intersections where a query MBR is fully contained in all spatial dimensions by a trajectory MBR (or vice versa -- it is equivalent; see figure 5, top right).
The list L_{t,partial}, an element L(P) of which is defined as:

L(P) = Σ_m Σ_n (M_{Q,m} ∩_t^{(p)} M_{P,n})

This list records for each sequence the total intersection in time for those query MBRs that are not fully contained within the x,y dimensions by the trajectory MBRs (or vice versa; see figure 5, top left).
Regarding a query Q, for any trajectory P the sum L_{t,complete}(P) + L_{t,partial}(P) will provide an upper bound on the similarity of P and Q.
The reason for the distinction of the L-similarity estimate
in two separate lists derives from the fact that the estimates
stored in list L_{t,partial} can significantly overestimate the LCSS similarity. If one wishes to relax the accuracy in favor of enhanced performance, it is instructive to give a weight 0 < w_p < 1 to all estimates in list L_{t,partial}. Even though we may now miss the best match to our query, we are going to find a close match in less time. This weighted approach is used when we are seeking approximate, but very good quality answers; however, it will not be explained further due to space limitations.
2. Approximate estimates (V-similarity estimate). This second estimate is based on the intersecting volume of the MBRs. This type of estimate is stored in list L_V: any element L_V(P) of list L_V records a similarity estimate between trajectory P and query Q, based on the total volume intersection between the MBRs of P and Q.
L(P) = (1 / length(P)) Σ_m Σ_n [ (M_{Q,m} ∩_V M_{P,n}) / ||M_{Q,m}||_V ] · ||M_{Q,m}||_t

where ||M||_V denotes the volume of MBR M and ||M||_t its length on the time axis.
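A sketch of one term of this sum follows (each MBR is again a dict of (lo, hi) ranges per axis; names are illustrative): the full L_V(P) entry adds this quantity over all query/trajectory MBR pairs and divides by the trajectory length.

    def side(box, axis):
        lo, hi = box[axis]
        return max(0.0, hi - lo)

    def v_estimate_term(q_mbr, p_mbr):
        # One term of L_V(P): (volume intersection / volume of the query MBR),
        # scaled by the query MBR's length on the time axis.
        inter = 1.0
        for axis in ("t", "x", "y"):
            q_lo, q_hi = q_mbr[axis]
            p_lo, p_hi = p_mbr[axis]
            inter *= max(0.0, min(q_hi, p_hi) - max(q_lo, p_lo))
        q_volume = side(q_mbr, "t") * side(q_mbr, "x") * side(q_mbr, "y")
        return (inter / q_volume) * side(q_mbr, "t") if q_volume > 0 else 0.0

    q_mbr = {"t": (0.0, 10.0), "x": (0.0, 2.0), "y": (0.0, 2.0)}
    p_mbr = {"t": (0.0, 5.0), "x": (0.0, 2.0), "y": (0.0, 2.0)}
    print(v_estimate_term(q_mbr, p_mbr))   # half the volume overlaps -> 0.5 * 10 = 5.0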
The L-similarity overestimates the LCSS_{δ,ε} between two sequences A and B and so it can be deployed for the design of an index structure.

Lemma 2. The use of the L-similarity estimate upper bounds the LCSS_{δ,ε} similarity between two sequences A and B and therefore does not introduce any false dismissals.
The V-similarity estimate can be used for approximate
query answering. Even though it does not guarantee the
absence of false dismissals, the results will be close to the
optimal ones with high probability. Also, because this estimate
provides a tighter approximation to the original distance
, we expect faster response time. Indeed, as we show in
the experimental section, the index performance is boosted,
while the error in similarity is frequently less than 5%.
5.2 Estimates for the DTW
When the distance function used is the Time Warping,
using the index we obtain a lower bound of the actual distance
. In this case we have the inverse situation from the
LCSS; instead of calculating the degree of overlap between
the MBRs of the indexed trajectories and the query, we evaluate
the distance between the MBRs. The overall distance
between the MBRs underestimates the true distance of the
trajectories, and no false dismissals are introduced. Using
the MBRs we can also calculate upper bound estimates on
the distance, which had not been exploited in previous work
[10, 22]. Sequences with lower bound larger than the smallest
upper bound can be pruned. With this additional prefiltering
step we can gain on average an additional 10-15%
speedup in the total execution time.
Due to space limitations only a visual representation of
this approach is provided in figure 6.
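As a rough illustration of the prefiltering idea sketched in figure 6, the following computes lower and upper bounds on the distance between any point of one axis-aligned box and any point of another (hypothetical helper; boxes are dicts of (lo, hi) ranges per spatial dimension). Sequences whose lower bound to the query envelope already exceeds the smallest upper bound seen so far can be pruned before the exact DTW computation; the exact aggregation over all MBR pairs used in the paper is not reproduced here.

    import math

    def box_min_max_dist(a, b):
        # Minimum and maximum possible Euclidean distance between a point in box a
        # and a point in box b, over the spatial dimensions.
        min_sq, max_sq = 0.0, 0.0
        for dim in a:
            a_lo, a_hi = a[dim]
            b_lo, b_hi = b[dim]
            gap = max(0.0, max(a_lo, b_lo) - min(a_hi, b_hi))   # 0 if the boxes overlap in this dim
            far = max(a_hi, b_hi) - min(a_lo, b_lo)             # farthest corners in this dim
            min_sq += gap * gap
            max_sq += far * far
        return math.sqrt(min_sq), math.sqrt(max_sq)

    q_box = {"x": (0.0, 2.0), "y": (0.0, 2.0)}
    t_box = {"x": (5.0, 6.0), "y": (0.0, 1.0)}
    print(box_min_max_dist(q_box, t_box))   # (3.0, ~6.32) -> MINDIST and MAXDIST as in figure 6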
MBR GENERATION
Given a multidimensional time-series (or an MBE) our
objective is to minimize the volume of the sequence using
k MBRs. Clearly, the best approximation of a trajectory
(or an MBE) using a fixed number of MBRs is the set of
MBRs that completely contain the sequence and minimize
the volume consumption. We can show the following lemma:
Lemma 3. Minimizing the volume of the Minimum Bounding
Envelope, minimizes the expected similarity approximation
error.
Three different approaches are considered:
1. k-Optimal. We can discover the k MBRs of a sequence
that take up the least volume, using a dynamic programming
algorithm that requires O(n²k) time [8], where n is the
length of the given sequence. Since this approach is not
reasonable for large databases, we are motivated to consider
approximate and faster solutions.
2. Equi-Split. This technique produces MBRs of fixed
length l. It is a simple approach with cost linear in the
length of a sequence. However, in pathological cases increasing
the number of splits can result in larger space utilization; therefore, the choice of the MBR length becomes a critical parameter (see figure 7 for an example).
[Figure 6 panels: A. Query Q; B. Query Envelope; C. Envelope Splitting; D. Sequence MBRs; E. MINDIST(Q,R); F. MAXDIST(Q,R).]
Figure 6: A visual intuition of the DTW indexing
technique (the one-dimensional case is shown for
clarity).
The original query (A) is enclosed in a
minimum-bounding envelope (B) like the LCSS approach. The MBE is split into its MBRs using equi
or greedy split (fig. (C)). The candidate sequences
in the database have their MBRs stored in the index
(D). Between the query and any sequence in
the index, the minimum and maximum distance can
be quickly determined by examining the distance
between the MBRs and the query's bounding envelope
, as represented by the arrows in (E) and (F).
3. Greedy-Split. The Greedy approach is our implementation
choice in this paper. Initially we assign an MBR to
each of the n sequence points and at each subsequent step
we merge the consecutive MBRs that will introduce the least
volume consumption. The algorithm has a running time of
O(n log n). We can see a sketch of the method in fig. 8. Alternatively, instead of assigning the same number of splits
to all objects, according to our space requirements we can
assign a total of K splits to be distributed among all objects.
This method can provide better results, since we can assign
more splits for the objects that will yield more space gain.
Also, this approach is more appropriate when one is dealing
with sequences of different lengths. The complexity of this
approach is O(K + N log N), for a total of N objects [8].
Input: A spatiotemporal trajectory T and an integer k denoting the number of final MBRs.
For 0 ≤ i < n compute the volume of the MBR produced by merging T_i and T_{i+1}. The results are stored in a priority queue.
While #MBRs > k: Using the priority queue, merge the pair of consecutive MBRs that yield the smallest increase in volume. Delete the two merged MBRs and insert the new one in the priority queue.
Output: A set of MBRs that cover T.
Figure 8: The greedy algorithm for producing k
MBRs that cover the trajectory T .
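A simplified Python sketch of the greedy merge in figure 8 follows, for one-dimensional trajectories (volume reduces to time extent times value range). For clarity it rescans for the cheapest merge at every step, giving O(n·k) time instead of the O(n log n) priority-queue version, but it applies the same merge criterion; names and representation are illustrative.

    def greedy_split(points, k):
        # points: list of (t, x) samples of a 1D trajectory.
        # Returns k covering boxes as (t_lo, t_hi, x_lo, x_hi).
        boxes = [(t, t, x, x) for t, x in points]      # one MBR per point

        def merged(a, b):
            return (min(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), max(a[3], b[3]))

        def volume(box):
            return (box[1] - box[0]) * (box[3] - box[2])

        while len(boxes) > k:
            # pick the pair of consecutive MBRs whose merge adds the least volume
            best_i = min(range(len(boxes) - 1),
                         key=lambda i: volume(merged(boxes[i], boxes[i + 1]))
                                       - volume(boxes[i]) - volume(boxes[i + 1]))
            boxes[best_i:best_i + 2] = [merged(boxes[best_i], boxes[best_i + 1])]
        return boxes

    traj = [(t, t % 4) for t in range(12)]             # a saw-tooth trajectory
    print(greedy_split(traj, k=3))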
After a trajectory is segmented the MBRs can be stored
in a 3D-Rtree. Using the greedy split each additional split
will always lead to smaller (or equal) volume (figure 7). A
similar greedy split algorithm is also used for splitting the
MBE of the query trajectory Q.
Figure 7: (a): 8 MBRs produced using equi-Split. The volume gain over having 1 MBR is 5.992. (b):
Segmenting into 9 MBRs decreases the volume gain to 5.004. So, disk space is wasted without providing
a better approximation of the trajectory. (c): 8 MBRs using greedy-Split. The volume gain over having 1
MBR is 9.157. (d): Every additional split will yield better space utilization. Segmentation into 9 MBRs
increases volume gain to 10.595.
SUPPORTING MULTIPLE MEASURES
The application of the Minimum Bounding Envelope only
on the query suggests that user queries are not confined to
a predefined and rigid matching window δ. The user can
pose queries of variable warping in time. In some datasets,
there is no need to perform warping, since the Euclidean
distance performs acceptably [11]. In other datasets, by
using the Euclidean distance we can find quickly some very
close matches, while using warping we can distinguish more
flexible similarities. So, we can start by using a query with
δ = 0 (no bounding envelope), and increase it progressively
in order to find more flexible matches (figure 9).
Therefore, our framework offers the unique advantage that
multiple distance functions can be supported in a single index
. The index sequences have been segmented without any
envelope applied on them and never have to be adjusted
again. For different measures, the aspects that change are,
the creation of the query envelope and the type of operation
between MBRs. In order to pose queries based on Euclidean
distance we follow the steps:
The query is segmented with no envelope applied on it.
The minDist and maxDist estimators for the Euclidean
distance are derived by calculating the distance between the
query and index MBRs, just like in the DTW case.
Figure 9: By incorporating the bounding envelope
on the query, our approach can support Euclidean
distance, constrained or full warping. This is accomplished
by progressively expanding the MBE.
EXPERIMENTAL EVALUATION
In this section we compare the effectiveness of various
splitting methods and we demonstrate the superiority of our
lower bounding technique (for the DTW) compared to other
proposed lower bounds. We describe the datasets we used
and present comprehensive experiments regarding the index
performance for the two similarity estimates. In addition,
we evaluate the accuracy of the approximate estimates. All
experiments conducted were run on an AMD Athlon 1.4 GHz
with 1GB RAM and 60GB of hard drive.
Figure 10: Datasets used for testing the efficiency of various MBR generation methods: 1. ASL, 2. Buoy Sensor, 3. Video Track 1, 4. Flutter, 5. Marine Mammals, 6. Word Tracking, 7. Random Walk, 8. Video Track 2.
8.1 MBR Generation Comparison
The purpose of our first experiment is to test the space
consumption of the presented MBR generation methods. We
have used eight datasets with diverse characteristics, in order
to provide objective results.
We evaluate the space consumption, by calculating the
"Average Volume Gain" (AvgV olGain), which is defined
as the percentage of volume when using i MBRs, over the
volume when using only 1 MBR, normalized by the maximum
gain provided over all methods (for various number of
splits).
[Figure 11 plot: Average Volume Gain (0-1) per dataset (1-8) for Random, Equi and Greedy splits.]
Figure 11: The greedy-split MBR generation algorithm presents the highest volume gain, by producing MBRs that consume consistently less space, over a number of datasets and for diverse numbers of generated MBRs.
DATASET | LCSS: EQ_{s20,δ5}  GR_{s20,δ5}  EQ_{s40,δ5}  GR_{s40,δ5} | DTW: EQ_{s20,δ5}  GR_{s20,δ5}  EQ_{s40,δ5}  GR_{s40,δ5}  LB-Kim  LB-Yi
ASL     | 0.732  0.799  0.825  0.856 | 0.449  0.632  0.588  0.756  0.1873  0.2530
VT1     | 0.260  0.339  0.453  0.511 | 0.087  0.136  0.230  0.266  0.0838  0.1692
Marine  | 0.719  0.750  0.804  0.814 | 0.226  0.506  0.308  0.608  0.2587  0.4251
Word    | 0.627  0.666  0.761  0.774 | 0.311  0.361  0.466  0.499  0.0316  0.2116
Random  | 0.596  0.652  0.701  0.741 | 0.322  0.384  0.440  0.491  0.1389  0.2067
VT2     | 0.341  0.431  0.498  0.569 | 0.210  0.296  0.363  0.437  0.2100  0.5321

Table 1: Some indicative results of how close our similarity estimates are to the exact value (for 20 and 40 splits, δ = 5%; EQ = equi-split, GR = greedy-split). For all datasets the greedy-split approach provides the closest similarity estimates to the actual similarity.
AvgV olGain is a number between 0 and 1, where higher
numbers indicate increased volume gain (or less space consumption
) against the competitive methods.
In figure 11
we observe the average volume gain for the eight datasets.
The greedy-split algorithm produced MBRs that took at
least half the space, compared to equi-split. The equi-split
offers slightly better results, than producing MBRs at random
positions. The volume gain of greedy-split was smaller only for the buoy sensor, which is a very busy and unstructured signal. This experiment validates that our choice to use the greedy-split method was correct. Since the indexed MBR trajectories will take less space, we also expect tighter similarity estimates, and therefore fewer false positives.
8.2 Tightness of Bounds
In table 1 we show how close our similarity estimates are
(for LCSS and DTW) to the actual similarity between sequences
. Numbers closer to 1, indicate higher similarity
to the value returned by the exact algorithm. To the best of our
knowledge, this paper introduces the first upper bounding
technique for the LCSS. For DTW there have been a few
approaches to provide a lower bound of the distance; we refer
to them as LB-Kim [12] and LB-Yi [21]. These lower
bounds originally referred to 1D time-series; here we extend
them to more dimensions, in order to provide unambiguous
results about the tightness of our estimates. Note that the
previously proposed methods operate on the raw data. Our
approach can still provide tighter estimates, while operating
only on the trajectory MBRs. Using the raw data our
experiments indicate that we are consistently 2-3 times better
than the best alternative approach. However, since our
index operates on the segmented time-series we only report
the results on the MBRs.
The greedy-split method approximates the similarity consistently
tighter than the equi-split. In table 1 only the results for δ = 5% of the query's length are reported, but similar results are observed for increasing values of δ. It is
evident from the table that using our method we can provide
very tight lower bounds of the actual distance.
8.3 Matching Quality
We demonstrate the usefulness of our similarity measures
in a real world dataset. The Library of Congress maintains
thousands of handwritten manuscripts, and there is an increasing
interest to perform automatic transcribing of these
documents. Given the multiple variations of each word and
due to the manuscript degradations, this is a particularly
challenging task and the need for a flexible and robust distance
function is essential.
We have applied the LCSS and DTW measures on word
Figure 12:
Results for a real world application.
3NN reported for each query, using Dynamic Time
Warping to match features extracted from scanned
manuscript words.
images extracted from a 10 page scanned manuscript. 4-dimensional
time-series features have originally been extracted
for each word. Here we maintain the 2 least correlated time-series
features and treat each word as a trajectory. In figure
12 we observe the 3-NN results using DTW for various
word queries. The results are very good, showing high accuracy
even for similarly looking words. Analogous results
have been obtained using the LCSS.
8.4 Index performance
We tested the performance of our index using the upper
bound and the approximate similarity estimates, and compared
it to the sequential scan. Because of limited space,
the majority of the figures record the index performance using
the LCSS as a similarity measure. The performance
measure used is the total computation time required for the
index and the sequential scan to return the nearest neighbor
for the same one hundred queries. For the linear scan,
one can also perform early termination of the LCSS (or the
DTW) computation. Therefore, the LCSS execution can
be stopped at the point where one is sure that the current
sequence will not be more similar to the query than the
bestSoFar. We call this optimistic linear scan. Pessimistic
linear scan is the one that does not reuse the previously computed similarity values and can be an accurate time estimate when the query match resides at the end of the
dataset. We demonstrate the index performance relative to
[Figure 13 plots: time ratio vs. dataset size (1024-32768), relative to optimistic and pessimistic linear scan, for δ = 5%, 10% and 20%.]
Figure 13: Index performance. For small warping windows the index can be up to 5 times faster than sequential scan without compromising accuracy. The gray regions indicate the range of potential speedup.
[Figure 14 plots: time ratio vs. dataset size (1024-32768), relative to optimistic and pessimistic linear scan, for δ = 5%, 10% and 20%, using the approximate estimates.]
Figure 14: Using the approximate similarity estimates the response time can be more than 7 times faster.
both types of linear scan, because this provides a realistic
upper or lower bound on the index speedup.
The dataset we used contained 2^10 . . . 2^16 trajectories. Taking into consideration that the average trajectory size is around 500 points, this resulted in a database with more
than 16 million 2D points. The trajectories have been normalized
by subtracting the average value in each direction of
movement. All data and queries can be obtained by emailing
the first author.
Mixed two-dimensional Time-series (2D-Mixed). This
second dataset consists of time-series of variable length, ranging
from less than 100 points to over 1000 points. The
dataset is comprised of the aggregation of the eight datasets
we used for comparing the MBR generation methods. Since
the total number of these trajectories is less than 200, we
have used them as seeds to generate increasingly larger datasets.
We create multiple copies of the original trajectories by incorporating
the following features:
Addition of small variations in the original trajectory
pattern
Addition of random compression and decompression in
time
The small variations in the pattern were added by interpolating
peaks of Gaussian noise using splines. In this manner
we are able to create the smooth variations that existed in
the original datasets.
8.4.1 Results on the upper bound Estimates
The index performance is influenced by three parameters: the size of the dataset, the warping length δ (as a percentage of the query's length) and the number of splits. For all experiments the parameter ε (matching in space) was set to std/2 of the query, which provided good and intuitive results.
Dataset size: In figure 13 we can observe how the
performance of the index scales with the database size (for
various lengths of matching window). We record the index
response time relative to both optimistic and pessimistic linear
scan. Therefore, the gray region in the figures indicates
the range of possible speedup. It is evident that the early
termination feature of the sequential scan can significantly
assist its performance. The usefulness of an index becomes
obvious for large dataset sizes, where the quadratic computational
cost dominates the I/O cost of the index. For these
cases our approach can be up to 5 times faster than linear
scan. In figure 15 we also demonstrate the pruning power of
the index, as a true indicator (not biased by any implementation
details) about the efficacy of our index. Using the
index we perform 2-5 times fewer LCSS computations than
the linear scan. We observe similar speedup when using the
DTW as the distance function in figure 17.
Parameter δ: The index performance is better for smaller warping lengths (parameter δ). The experiments record the performance for warping from 5% to 20% of the query's length. Increasing δ values signify larger bounding envelopes around the query, therefore a larger search space and less accurate similarity estimates. The graphs suggest that an index cannot be useful under full warping (when the data are normalized).
Number of Splits: Although a greater number of MBRs for each trajectory implies better volume utilization, more MBRs also lead to increased I/O cost. When we refer to x% splits, it means that we have assigned a total of (Σ_{i=1}^{n} ||T_i||)/(100/x) splits over all sequences T_i. In
our figures we provide the 5% splits scenario for the MBRs,
which offers better performance than 10% and 20% splits,
since for the last two cases the I/O cost negates the effect of
the better query approximation. The index space requirements
for 5% splits are less than a quarter of the dataset size.
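As a small worked example of the splits budget (under the reading of the formula above, which is our reconstruction of the garbled original), 5% splits over a handful of trajectories gives:

    def split_budget(trajectory_lengths, x_percent):
        # Total number of MBRs assigned when using x% splits:
        # the sum of all trajectory lengths divided by (100 / x).
        return int(sum(trajectory_lengths) / (100.0 / x_percent))

    print(split_budget([480, 500, 520], 5))   # 1500 points at 5% -> 75 MBRs in total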
8.4.2 Results on the approximate Estimates
Here we present the index performance when the volume
intersections of the MBRs are used as estimates of the similarity; the results are shown in figure 14.
[Figure 15 plots: ratio of LCSS computations vs. dataset size (1024-32768), for 5%, 10% and 20% splits and δ = 5%, 10%, 20%.]
Figure 15: Each gray band indicates (for a certain warping window δ) the percentage of LCSS computations conducted by the index compared to linear scan.

[Figure 16 plot: average similarity error vs. dataset size, 5% splits, for δ = 5%, 10%, 20%.]
Figure 16: Using the V-similarity estimate, we can retrieve answers faster with very high accuracy. The LCSS similarity is very close (2-10%) to the exact answer returned by the sequential scan.

[Figure 17 plot: time ratio vs. dataset size, relative to optimistic and pessimistic linear scan, DTW, δ = 5%.]
Figure 17: Index performance using DTW as the distance measure (δ = 5%). We can observe up to 5 times speedup.
We observe
that using this approximate similarity estimate, our index
performance is boosted. The use of the V-similarity estimate leads to tighter approximations of the original similarity compared to the L-similarity estimate; however, we may now miss finding the best match.
Naturally, the question of the quality of the results arises. We capture this by calculating the absolute difference between the similarity of the best match returned by the index and the best match found by the sequential scan for each query. Then we average the results over a number of queries |q|. Therefore, the Average Similarity Error (ASE) is:

ASE = (1/|q|) Σ_{i=1}^{|q|} |BestMatch_index − BestMatch_exhaustive|
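As a small illustration (with made-up similarity values, not the paper's data), the ASE computation is just a mean absolute difference:

    def average_similarity_error(index_best, exhaustive_best):
        # ASE over a set of queries: mean |BestMatch_index - BestMatch_exhaustive|.
        assert len(index_best) == len(exhaustive_best)
        return sum(abs(a - b) for a, b in zip(index_best, exhaustive_best)) / len(index_best)

    print(average_similarity_error([0.90, 0.78, 0.85], [0.93, 0.80, 0.85]))   # ~0.0167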
The results are shown in figure 16. We can see that the
similarity returned by the V-similarity estimate is approximately
within 5% of the actual similarity (5% splits used).
Therefore, by providing two similarity estimates the user
can decide for the trade-off between the expedited execution
time and the quality of results. Since by using the latter estimator
we can significantly increase the performance of the
index, this is the approach we recommend for mining large
datasets.
CONCLUSIONS AND FUTURE WORK
In this paper we have presented an external memory indexing
method for discovering similar multidimensional time-series
. The unique advantage of our approach is that it
can accommodate multiple distance measures. The method
guarantees no false dismissals and achieves a significant execution speedup for the LCSS and DTW compared to sequential
scan. We have shown the tightness of our similarity
estimates and demonstrated the usefulness of our measures
for challenging real world applications. We hope that our
effort can act as a bridge between metric and non-metric
functions, as well as a tool for understanding better their
strengths and weaknesses. In the future we plan to investigate
the combination of several heuristics, in order to provide
even tighter estimates.
Acknowledgements: We would like to thank Margrit Betke
for providing us the Video Track I and II datasets. We also
feel obliged to T. Rath and R. Manmatha for kindly providing
the manuscript words dataset.
REFERENCES
[1] J. Aach and G. Church. Aligning gene expression time series
with time warping algorithms. In Bioinformatics, Volume 17,
pages 495-508, 2001.
[2] O. Arikan and D. Forsyth. Interactive motion generation from
examples. In Proc. of ACM SIGGRAPH, 2002.
[3] Z. Bar-Joseph, G. Gerber, D. Gifford, T. Jaakkola, and
I. Simon. A new approach to analyzing gene expression time
series data. In Proc. of 6th RECOMB, pages 39-48, 2002.
[4] D. Berndt and J. Clifford. Using Dynamic Time Warping to
Find Patterns in Time Series. In Proc. of KDD Workshop,
1994.
[5] M. Betke, J. Gips, and P. Fleming. The camera mouse: Visual
tracking of body features to provide computer access for people
with severe disabilities. In IEEE Transactions on Neural
Systems and Rehabilitation Engineering, Vol. 10, No. 1, 2002.
[6] G. Das, D. Gunopulos, and H. Mannila. Finding Similar Time
Series. In Proc. of the First PKDD Symp., pages 88-100, 1997.
[7] D. Gavrila and L. Davis. Towards 3-d model-based tracking
and recognition of human movement: a multi-view approach. In
Int. Workshop on Face and Gesture Recognition.
[8] M. Hadjieleftheriou, G. Kollios, V. Tsotras, and D. Gunopulos.
Efficient indexing of spatiotemporal objects. In Proc. of 8th
EDBT, 2002.
[9] T. Kahveci, A. Singh, and A. Gurel. Similarity searching for
multi-attribute sequences. In Proc. of SSDBM, 2002.
[10] E. Keogh. Exact indexing of dynamic time warping. In Proc. of
VLDB, 2002.
[11] E. Keogh and S. Kasetty. On the need for time series data
mining benchmarks: A survey and empirical demonstration. In
Proc. of SIGKDD, 2002.
[12] S. Kim, S. Park, and W. Chu. An index-based approach for
similarity search supporting time warping in large sequence
databases. In In Proc. of 17th ICDE, 2001.
[13] Z. Kovács-Vajna. A fingerprint verification system based on
triangular matching and dynamic time warping. In IEEE
Transactions on PAMI, Vol. 22, No. 11.
[14] S.-L. Lee, S.-J. Chun, D.-H. Kim, J.-H. Lee, and C.-W. Chung.
Similarity Search for Multidimensional Data Sequences. Proc.
of ICDE, pages 599-608, 2000.
[15] S. Park, W. Chu, J. Yoon, and C. Hsu. Efficient Searches for
Similar Subsequences of Different Lengths in Sequence
Databases. In Proc. of ICDE, pages 23-32, 2000.
[16] T. Rath and R. Manmatha. Word image matching using
dynamic time warping. In Tech. Report MM-38. Center for
Intelligent Information Retrieval, University of
Massachusetts Amherst, 2002.
[17] J. F. Roddick and K. Hornsby. Temporal, Spatial and
Spatio-Temporal Data Mining. 2000.
[18] M. Shimada and K. Uehara. Discovery of correlation from
multi-stream of human motion. In Discovery Science 2000.
[19] R. E. Valdes-Perez and C. A. Stone. Systematic detection of
subtle spatio-temporal patterns in time-lapse imaging ii.
particle migrations. In Bioimaging 6(2), pages 71-78, 1998.
[20] M. Vlachos, G. Kollios, and D. Gunopulos. Discovering similar
multidimensional trajectories. In Proc. of ICDE, 2002.
[21] B.-K. Yi, H. V. Jagadish, and C. Faloutsos. Efficient retrieval
of similar time sequences under time warping. In Proc. of
ICDE, pages 201-208, 1998.
[22] Y. Zhu and D. Shasha. Query by humming: a time series
database approach. In Proc. of SIGMOD, 2003.
| Dynamic Time Warping;indexing;trajectory;distance function;Dynamic Time Warping (DTW);similarity;Longest Common Subsequence;Trajectories;Longest Common Subsequence (LCSS);measure |
111 | Information Retrieval for Language Tutoring: An Overview of the REAP Project | INTRODUCTION Typical Web search engines are designed to run short queries against a huge collection of hyperlinked documents quickly and cheaply, and are often tuned for the types of queries people submit most often [2]. Many other types of applications exist for which large, open collections like the Web would be a valuable resource. However, these applications may require much more advanced support from information retrieval technology than is currently available. In particular, an application may have to describe more complex information needs, with a varied set of properties and data models, including aspects of the user's context and goals. In this paper we present an overview of one such application, the REAP project, whose main purpose is to provide reader-specific practice for improved reading comprehension. (REAP stands for REAder-specific Practice.) A key component of REAP is an advanced search model that can find documents satisfying a set of diverse and possibly complex lexical constraints, including a passage's topic, reading level (e.g. 3rd grade), use of syntax (simple vs. complex sentence structures), and vocabulary that is known or unknown to the student. Searching is performed on a database of documents automatically gathered from the Web which have been analyzed and annotated with a rich set of linguistic metadata. The Web is a potentially valuable resource for providing reading material of interest to the student because of its extent, variety, and currency for popular topics. | SYSTEM DESCRIPTION
Here we describe the high-level design of the REAP
information retrieval system, including document database
requirements and construction, annotations, and a brief
description of the retrieval model.
2.1 Database Construction
Our goal is to present passages that are interesting to students,
whether they are on current topics such as pop stars or sports
events, or related to particular classroom projects. To this end,
we use the Web as our source of reading practice materials
because of its extent, variety, and currency of information.
We want coverage of topics in the database to be deeper in
areas that are more likely to be of interest to students.
Coverage of other areas is intended to be broad, but more
shallow. We therefore gather documents for the database
using focused crawling [3]. The current prototype uses a
page's reading difficulty to set priority for all links from that
page equally, based on the distance from the target reading
level range. We plan to explore more refined use of
annotations to direct the crawl on a link-by-link basis. In our
prototype, we collected 5 million pages based on an initial set
of 20,000 seed pages acquired from the Google Kids Directory
[7]. Our goal is to have at least 20 million pages that focus on
material for grades 1 through 8. The document database must
be large enough that the most important lexical constraints are
satisfied by at least a small number of pages. Data annotation
is currently performed off-line at indexing time. The specific
annotations for REAP are described in Section 2.2.
Once the documents are acquired, they are indexed using an
extended version of the Lemur IR Toolkit [9]. We chose
Lemur because of its support for language model-based
retrieval, its extensibility, and its support for incremental
indexing, which is important for efficient updates to the
database. Annotations are currently stored as Lemur
properties, but later versions will take advantage of the
enhancements planned for support of rich document structure,
described in Section 2.3.
2.2 Linguistic Annotations
In addition to the underlying text, the following linguistic
annotations are specified as features to be indexed:
Basic text difficulty within a document section or region.
This is calculated using a new method based on a mixture
of language models [4] that is more reliable for Web
pages and other non-traditional documents than typical
reading difficulty measures.
Grammatical structure. This includes part-of-speech tags
for individual words as well as higher-level parse
structures, up to sentence level.
Document-level attributes such as title, metadata
keywords, and ratings.
Topic category. This would involve broad categories such
as fiction/non-fiction [5] or specific topics, perhaps based
on Open Directory.
Named entity tags. We use BBN's Identifinder [1] for
high-precision tagging of proper names.
We may also look at more advanced attributes such as text
coherence and cohesion [6].
2.3 Query and Retrieval Models
A typical information need for the REAP system might be
described as follows:
Find a Web page about soccer, in American English,
with reading difficulty around the Grade 3 level. The
text should use both passive- and active-voice
sentence constructions and should introduce about
10% new vocabulary relative to the student's
known-vocabulary model. The page's topic is less
important than finding pages that practice the words:
for example, an article on another sport that satisfies
the other constraints would also be acceptable.
Information needs in REAP will be modeled as mixtures of
multiple word histograms, representing different sources of
evidence, as well as document-level or passage-level
constraints on attributes such as reading difficulty. There is
precedent for using word histograms to specify information
needs: indeed, query expansion is one example of this. More
specifically, related work includes language model-based
techniques such as relevance models [8].
No current Web-based search engine is able to make use of
combinations of lexical constraints and language models in
this way, on such a large scale. To support this, we are making
extensions to Lemur that include:
1.
Retrieval models for rich document structure, which
includes nested fields of different datatypes where each
field may be associated with its own language model.
2.
More detailed retrieval models in which we skew
language models towards the appropriate grade level,
topic, or style.
3.
The use of user model descriptions as context for a query.
2.4 User Profiles
In the current prototype, we model a reader's topic interests,
reading level, and vocabulary acquisition goals using simple
language models. For example, we model the curriculum as a
word histogram. Although crude, this captures word-frequency
information associated with general reading
difficulty, as well as capturing topics that are the focus of the
curriculum at each grade level. We plan to add more complex
aspects to user profiles, including more specific lexical
constraints such as grammar constructs and text novelty. The
models can be updated incrementally as the student's interests
evolve and they make progress through the curriculum.
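To make the idea of a word-histogram profile concrete, here is a minimal sketch (an illustration under simple assumptions, not the REAP implementation) of a unigram model over curriculum text that can be updated incrementally as the student progresses:

    from collections import Counter

    class WordHistogramModel:
        # A unigram language model over words, sketching how a curriculum or a
        # student's known-vocabulary profile could be represented and updated.
        def __init__(self):
            self.counts = Counter()
            self.total = 0

        def update(self, text):
            words = text.lower().split()
            self.counts.update(words)
            self.total += len(words)

        def probability(self, word, vocab_size=50000, alpha=1.0):
            # Additive smoothing so unseen words keep a small nonzero probability.
            return (self.counts[word.lower()] + alpha) / (self.total + alpha * vocab_size)

    curriculum = WordHistogramModel()
    curriculum.update("the cat sat on the mat")          # grade-level reading material
    curriculum.update("the dog chased the cat")          # added incrementally later
    print(curriculum.probability("cat"), curriculum.probability("photosynthesis"))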
EVALUATION METHODS
Evaluation of the end-to-end REAP system will be via a series
of three year-long studies with both adults and children. The
adult studies will provide feedback on vocabulary matching
and comprehension, and the child studies will test the
hypothesis that children will read adaptively to texts that vary
in vocabulary demands, where those texts that closely reflect
the reader's interests and comprehension can be used to
support improved comprehension and vocabulary growth.
CONCLUSION
The REAP project is intended to advance the state of the art in
information retrieval, as well as research in reading
comprehension, by bringing together practical user models of
student interests, vocabulary knowledge and growth, and other
aspects of reading, with interesting material from large, open
collections like the World Wide Web. This type of system is a
valuable new research tool for educational psychologists and
learning scientists, because it gives much greater control over
how instructional materials are selected. This in turn allows
testing of instructional hypotheses, such as the effect of 10%
vocabulary stretch, which have been impractical to test in the
past. The work also has direct application to other areas of
language learning, such as English as a Second Language
training. More broadly, however, we believe the REAP project
is an important first step toward enabling richer user and task
models than currently available with ad-hoc search systems.
ACKNOWLEDGMENTS
We thank our collaborators Maxine Eskenazi, Charles Perfetti
and Jonathan Brown; John Cho and Alexandros Ntoulas of
UCLA for their crawler code; and the anonymous reviewers.
This work was supported by U.S. Dept. of Education grant
R305G03123. Any opinions, findings, conclusions, or
recommendations expressed in this material are the authors'
and do not necessarily reflect those of the sponsors.
REFERENCES
[1] Bikel, D. M., Miller, S., Schwartz, R., Weischedel, R. M.,
Nymble: A high-performance learning name-finder. In
Proceedings of the 5th Conference on Applied Natural
Language Processing, 194 - 201, 1997.
[2] Broder, A. A taxonomy of Web search. In SIGIR Forum,
36(2). 3 - 10, 2002.
[3] Chakrabarti, S., van der Berg, M., & Dom, B. Focused
crawling: a new approach to topic-specific web resource
discovery. In Proc. of the 8th International World-Wide Web
Conference (WWW8), 1999.
[4] Collins-Thompson, K., & Callan, J. A language modeling
approach to predicting reading difficulty. Proceedings of
HLT/NAACL 2004, Boston, USA, 2004.
[5] Finn, A., Kushmerick, N. & Smyth, B. Fact or fiction:
Content classification for digital libraries. Joint DELOS-NSF
Workshop on Personalisation and Recommender Systems in
Digital Libraries (Dublin), 2001.
[6] Foltz, P. W., Kintsch, W., Landauer, T. K. Analysis of text
coherence using Latent Semantic Analysis. Discourse
Processes 25(2-3), 285 - 307, 1998.
[7] Google Kids Directory.
http://directory.google.com/Top/Kids_and_Teens/
[8] Lavrenko, V., and Croft, B. Relevance-based language
models. In Proc. of the 24th Annual International ACM
SIGIR Conference, New Orleans, 120 - 127, 2001.
[9] Ogilvie, P. and Callan, J. Experiments using the Lemur
Toolkit. In Proc.of the 10th Text Retrieval Conference, TREC
2001. NIST Special Publication 500-250, 103-108, 2001.
| computer-assisted learning;user model;searching;reading comprehension;Information retrieval;information retrieval |
112 | Categorizing Web Queries According to Geographical Locality | Web pages (and resources, in general) can be characterized according to their geographical locality. For example, a web page with general information about wildflowers could be considered a global page, likely to be of interest to a ge-ographically broad audience. In contrast, a web page with listings on houses for sale in a specific city could be regarded as a local page, likely to be of interest only to an audience in a relatively narrow region. Similarly, some search engine queries (implicitly) target global pages, while other queries are after local pages. For example, the best results for query [wildflowers] are probably global pages about wildflowers such as the one discussed above. However, local pages that are relevant to, say, San Francisco are likely to be good matches for a query [houses for sale] that was issued by a San Francisco resident or by somebody moving to that city. Unfortunately, search engines do not analyze the geographical locality of queries and users, and hence often produce sub-optimal results. Thus query [wildflowers ] might return pages that discuss wildflowers in specific U.S. states (and not general information about wildflowers), while query [houses for sale] might return pages with real estate listings for locations other than that of interest to the person who issued the query. Deciding whether an unseen query should produce mostly local or global pages--without placing this burden on the search engine users--is an important and challenging problem, because queries are often ambiguous or underspecify the information they are after. In this paper, we address this problem by first defining how to categorize queries according to their (often implicit) geographical locality. We then introduce several alternatives for automatically and efficiently categorizing queries in our scheme, using a variety of state-of-the-art machine learning tools. We report a thorough evaluation of our classifiers using a large sample of queries from a real web search engine, and conclude by discussing how our query categorization approach can help improve query result quality. | INTRODUCTION
Web pages (and resources, in general) can be characterized
according to their geographical locality. For example, a
web page with general information about wildflowers could
be considered a global page, likely to be of interest to a ge-ographically
broad audience. In contrast, a web page with
listings on houses for sale in a specific city could be regarded
as a local page, likely to be of interest only to an audience in
a relatively narrow region. Earlier research [9] has addressed
the problem of automatically computing the "geographical
scope" of web resources.
Often search engine queries (implicitly) target global web
pages, while other queries are after local pages. For example,
the best results for query [wildflowers] are probably global
pages about wildflowers discussing what types of climates
wildflowers grow in, where wildflowers can be purchased,
or what types of wildflower species exist. In contrast, local
pages that are relevant to, say, San Francisco are likely to
be good matches for a query [houses for sale] that was issued
by a San Francisco resident, or by somebody moving
to San Francisco, even if "San Francisco" is not mentioned
in the query. The user's intent when submitting a query
may not always be easy to determine, but if underspecified
queries such as [houses for sale] can be detected, they can
be subsequently modified by adding the most likely target
geographical location or by getting further user input to customize
the results.
Unfortunately, search engines do not analyze the geographical
locality of queries and users, and hence often produce
sub-optimal results, even if these results are on-topic
and reasonably "popular" or "authoritative." Thus query
[wildflowers] might return pages that discuss wildflowers in
specific U.S. states (and not general information about wildflowers
). In fact, as of the writing of this paper, the first
10 results that Google provides for this query include 5
pages each of which discusses wildflowers in only one U.S.
state (e.g., "Texas Wildflowers"). Similarly, the top 10 results
that Google returns for query [houses for sale] include
real estate pages for Tuscany, United Kingdom, and New
Zealand.
These pages are likely to be irrelevant to, say,
somebody interested in San Francisco real estate who types
such an underspecified query.
Deciding whether a query posed by a regular search engine
user should produce mostly local or global pages is an important
and challenging problem, because queries are often
ambiguous or underspecify the information they are after,
as in the examples above. By identifying that, say, query
[wildflowers] is likely after "global" information, a search
engine could rank the results for this query so that state-specific
pages do not appear among the top matches. By
identifying that, say, query [houses for sale] is likely after
"local" information, a search engine could filter out pages
whose geographical locality is not appropriate for the user
who issued the query. Note that deciding which location
is of interest to a user who wrote an underspecified query
such as [houses for sale] is an orthogonal, important issue
that we do not address in this paper. Our focus is on identifying
that such a query is after "local" pages in nature,
and should therefore be treated differently by a search engine
than queries that are after "global" pages. By knowing
that a user query is after local information, a search engine
might choose to privilege pages whose geographical locality
coincides with that of the user's or, alternatively, attempt
to obtain further input from the user on what location is of
interest.
In this paper, we first define how to categorize user queries
according to their (often implicit) geographical locality. We
then introduce several alternatives for automatically and efficiently
classifying queries according to their locality, using
a variety of state-of-the-art machine learning tools. We report
a thorough evaluation of our classifiers using a large
sample of queries from a real web search engine query log.
Finally, we discuss how our query categorization approach
can help improve query result quality. The specific contributions
of this paper are as follows:
A discussion on how to categorize user queries according
to their geographical locality, based on a careful
analysis of a large query log from the Excite web site
(Section 3).
A feature representation for queries; we derive the feature
representation of a query from the results produced
for the query by a web search engine such as
Google (Section 4.1).
A variety of automatic query classification strategies
that use our feature representation for queries (Section
4.2).
A large-scale experimental evaluation of our strategies
over real search engine queries (Section 5).
Preliminary query reformulation and page re-ranking
strategies that exploit our query classification techniques
to improve query result quality (Section 6).
RELATED WORK
Traditional information-retrieval research has studied how
to best answer keyword-based queries over collections of text
documents [18].
These collections are typically assumed
to be relatively uniform in terms of, say, their quality and
scope. With the advent of the web, researchers are studying
other "dimensions" to the data that help separate useful resources
from less-useful ones in an extremely heterogeneous
environment like the web. Notably, the Google search engine
[4] and the HITS algorithm [7, 13] estimate the "importance"
of web pages by analyzing the hyperlinks that point
to them, thus capturing an additional dimension to the web
data, namely how important or authoritative the pages are.
Ding et al. [9] extract yet another crucial dimension of the
web data, namely the geographical scope of web pages. For
example, the Stanford Daily newspaper has a geographical
scope that consists of the city of Palo Alto (where Stanford
University is located), while the New York Times newspaper
has a geographical scope that includes the entire U.S.
To compute the geographical scope of a web page, Ding et
al. propose two complementary strategies: a technique based
on the geographical distribution of HTML links to the page,
and a technique based on the distribution of geographical
references in the text of the page. Ding et al. report on
a search-engine prototype that simply filters out from the
results for a user query any pages not in the geographical
scope of the user. This technique does not attempt to determine
whether a query is best answered with "global" or
"local" pages, which is the focus of our paper. Ding et al.
built on the work by Buyukkokten et al. [6], who discussed
how to map a web site (e.g., http://www-db.stanford.edu)
to a geographical location (e.g., Palo Alto) and presented a
tool to display the geographical origin of the HTML links
to a given web page. This tool then helps visualize the geographical
scope of web pages [6].
A few commercial web sites manually classify web resources
by their location, or keep directory information that
lists where each company or web site is located. The NorthernLight search engine (http://www.northernlight.com/) extracts addresses from web pages,
letting users narrow their searches to specific geographical
regions (e.g., to pages "originated" within a five-mile radius
of a given zip code). Users benefit from this information
because they can further filter their query results. McCurley
[14] presented a variety of approaches for recognizing
geographical references on web pages, together with a navigational
tool to browse pages by geographical proximity
and their spatial context. (Please refer to [16] for additional
references.) None of these techniques addresses our focus
problem in this paper: automatically determining the geographical
locality associated with a given, unmodified search
engine query.
DEFINING GEOGRAPHICAL LOCALITY
As discussed above, queries posed to a web search engine
can be regarded as local, if their best matches are likely to
be "local" pages, or as global, if their best matches are likely
to be "global" pages. In an attempt to make this distinction
more concrete, we now discuss several examples of local and
global queries.
Global queries often do not include a location name, as
is the case for query [Perl scripting]. A user issuing this
query is probably after tutorials about the Perl language,
and hence pages on the topic with a restricted geographical scope are less desirable than global pages. Other global
queries do not mention a location explicitly either, but are
topically associated with one particular location. An example
of such a query is [Elgin marbles], which is topically associated
with the city of Athens. We consider these queries
as global, since their best matches are broad, global pages,
not localized pages with a limited geographical scope. Interestingly, global queries sometimes do include a location
name. For example, a query might be just a location name
(e.g., [Galapagos Islands]) or a request for concrete information
about a location (e.g., [Boston area codes]). General
resources about the location (e.g., tourist guides) are
arguably to be preferred for such queries, which are hence
regarded as global. Other global queries include locations
that are strongly associated topic-wise with the rest of the
query. Query [Woody Allen NYC] is an example of such a
query. The location mentioned in this query (i.e., "NYC,"
for "New York City") is not used to restrict query results to
pages of interest to New York residents, but rather expresses
a topic specification. Query [Ansel Adams Yosemite] is another
example: photographer Ansel Adams took a famous
series of photographs in Yosemite.
Local queries often include a location name, as is the case
for query [Wisconsin Christmas tree producers association].
The location mentioned in this query (i.e., "Wisconsin") is
used to "localize" the query results. Query [houses for sale
New York City] is a related example. Other local queries
do not include a location name, but still implicitly request
"localized" results. Query [houses for sale] is an example of
such a query. These queries tend to be underspecified, but
are still asked by (presumably naive) search engine users.
We conducted a thorough examination of a large number
(over 1,200) of real search engine queries. Most queries
that we encountered can be cleanly categorized as being either
global or local. However, other queries are inherently
ambiguous, and their correct category is impossible to determine
without further information on the user intent behind
them. For example, query [New York pizza] could be construed
as a local query if it is, say, after pizza delivery web
sites for the New York area. In contrast, the same query
could be regarded as a global query if the user who issues it
wants to learn about the characteristics of New York-style
pizza.
USING CLASSIFIERS TO DETERMINE LOCALITY
We earlier established that queries are associated with local
or global status, which influences the kind of results that
are desirable. Since current search engines do not directly
take into account geographical information, for certain types
of queries they produce a large number of on-topic but un-wanted
results, as in the [houses for sale] example discussed
earlier. In this section, we discuss automatic methods that
can determine, given a query, whether the query is a local
or global one. To build the two-class classifier, we experimented
with several state-of-the-art classification techniques
, using widely available implementations for each. We
describe below the features used in the classification, how we
extract them from web pages, and the classifiers with which
we experimented.
4.1
Classification Features
Web queries, which we treat in this paper as ordered bags
of words with no other structure, are typically fairly short.
In the collection of 2,477,283 real queries that we used in
our experiments (Section 5.1), 84.9% were five words long
or shorter. Because few words are available per query, basing
the classification directly on the words in the query may
lead to severe sparse data problems. Even more importantly,
some of the characteristics that make a query local or global
are not directly observable in the query itself, but rather in
the results returned. For example, a query that returns results
that contain few references to geographical locations is
likely to be global, while a query that returns results spread
uniformly over many locations without including a significant
percentage of results with no locations is likely to be
local.
For these reasons, we base our classification on a sample
of results actually returned for a given query rather than the
words in the query itself. By observing distributional characteristics
in the unmodified results, the classifier can infer
the type of the query (global or local) so that the results can
be appropriately filtered or re-ordered, or the query modified. In a way the approach is similar in spirit to query
expansion techniques that rely on pseudo-relevance feedback
[5].
In our experiments, we use Google (via the Google API, http://www.google.com/apis) to obtain the top 50 web pages that match the query.
For simplicity, we limited our search to HTML pages, skipping
over non-HTML documents. We chose Google because
it represents state-of-the-art web search technology and offers
a published interface for submitting large numbers of
queries.
We represent the web pages returned by Google as text
documents. This conversion is achieved by using the
lynx
HTML browser with the -dump option. We base our classification
features on measures of frequency and dispersion of
location names in these text files. For this purpose, we have
constructed a database of 1,605 location names by concatenating
lists of all country names (obtained from the United Nations, http://www.un.org/Overview/unmember.html), of the capitals of these countries (obtained from the CIA World Factbook, http://www.capitals.com/), of the fifty U.S. states, and of all cities in the United States with more than 25,000 people (obtained from the U.S. Census Bureau, 2000 census figures, http://www.census.gov/prod/2002pubs/00ccdb/cc00_tabC1.pdf). We then compare
the words in each text document with the database of
location names, and output any matching entries and their
count per document. This matching is case insensitive, because
we found capitalization of location names in web pages
to be erratic. Note that we do not attempt to disambiguate
words that match location names but also have other senses
(e.g., "China"), as this is a hard problem in natural language
analysis; instead, we count such words as locations. An alternative
approach that would detect and disambiguate location
names would be to use a named-entity tagger. We
experimented with a well-known third-party named-entity
tagger, but we encountered a very high error rate because
of the noise often introduced in web pages.
Our classification features combine these location counts
in various ways. For each query, we measure the average
(per returned web page) number of location words in the retrieved
results. We count the average frequency of location
words at different levels of detail (country, state, city), as
well as the average of the aggregate total for all locations.
We obtain these frequencies for both the total count (tokens)
and the unique location words in each page (types), as it is
possible that a few locations would be repeated many times
across the results, indicating a global query, or that many locations
would be repeated few times each, indicating a local
query. We also consider the total number of unique locations
across all the returned documents taken together, divided by
the number of retrieved documents. For the average token
frequencies of city, state, and country locations we also calculate
the minimum and maximum across the set of returned
web pages. To account for the hierarchical nature of location
information, we calculate an alternative frequency for states
where we include in the count for each state the counts for
all cities from that state that were found in that text; this
allows us to group together location information for cities
in the same state. We also include some distributional measures
, namely the fraction of the pages that include at least
one location of any kind, the percentage of words that match
locations across all pages, and the standard deviation of the
total per page location count. Finally, we add to our list
of features the total number of words in all of the returned
documents, to explore any effect the local/global distinction
may have on the size of the returned documents. These calculations
provide for 20 distinct features that are passed on
to the classifier.⁶ The core data needed to produce these
20 query features (i.e., the locations mentioned in each web
page) could be efficiently computed by a search engine such
as Google at page-indexing time. Then, the final feature
computation could be quickly performed at query time using
this core data.
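To make the aggregation concrete, a sketch of how several of these per-query features could be derived from the per-page counts is given below (Python; the page dictionary keys and the subset of features shown are illustrative assumptions rather than the exact 20-feature implementation):

from statistics import mean, pstdev

def query_features(pages):
    # `pages` is a list of dicts, one per retrieved page, e.g.
    # {"city_tokens": 7, "city_types": 3, "total_tokens": 9,
    #  "unique_locations": {"boston", "massachusetts"}, "word_count": 1250}
    # -- a hypothetical intermediate representation of the location counts.
    n = len(pages)
    totals = [p["total_tokens"] for p in pages]
    return {
        "avg_city_tokens": mean(p["city_tokens"] for p in pages),
        "min_city_tokens": min(p["city_tokens"] for p in pages),
        "max_city_tokens": max(p["city_tokens"] for p in pages),
        "avg_total_tokens": mean(totals),
        "std_total_tokens": pstdev(totals),
        "frac_pages_with_location": sum(t > 0 for t in totals) / n,
        "unique_locations_per_doc":
            len(set().union(*(p["unique_locations"] for p in pages))) / n,
        "pct_words_matching_locations":
            sum(totals) / sum(p["word_count"] for p in pages),
        "total_words": sum(p["word_count"] for p in pages),
    }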
4.2
Classification Methods
We initially trained a classifier using
Ripper [8], which
constructs a rule-based classifier in an incremental manner.
The algorithm creates an initial set of very specific rules
based on the data, similar to the way in which decision trees
are generated. The rules are then pruned iteratively to eliminate
the ones that do not seem to be valid for a large enough
subset of the training data, so as to prevent overfitting.
Although
Ripper provides a robust classifier with high
accuracy and transparency (a human can easily examine
the produced rules), it outputs binary "local"/"global" decisions. In many cases, it is preferable to obtain a measure
of confidence in the result or an estimate of the probability
that the assigned class is the correct one. To add this capability
to our classifier, we experimented with logistic regression
[19]. Logistic, or log-linear, regression models a binary
output variable (the class) as a function of a weighted sum
of several input variables (the classification features). Conceptually, a linear predictor η is first fitted over the training data in a manner identical to regular regression analysis, i.e.,

\eta = w_0 + \sum_{i=1}^{k} w_i F_i

where F_i is the i-th feature and w_i is the weight assigned to that feature during training.

⁶ Studying the effect on classification accuracy of a richer feature set (e.g., including as well all words on the result pages) is the subject of interesting future work.

Subsequently, η is transformed to the final response, C, via the logistic transformation

C = \frac{e^{\eta}}{1 + e^{\eta}}
which guarantees that
C is between 0 and 1. Each of the
endpoints of the interval (0
, 1) is associated with one of the
classes, and
C gives the probability that the correct class is
the one associated with "1". In practice, the calculations are
not performed as a separate regression and transformation,
but rather as a series of successive regressions of transformed
variables via the iterative reweighted least squares algorithm
[1].⁷ We used the implementation of log-linear regression provided in the R statistical package.⁸
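A minimal sketch of this two-step view (a linear predictor followed by the logistic transformation) is shown below; the weights are placeholders, since in practice they are estimated by the iteratively reweighted least squares procedure in R:

import math

def linear_predictor(features, weights, intercept):
    # eta = w_0 + sum_i w_i * F_i
    return intercept + sum(w * f for w, f in zip(weights, features))

def local_probability(features, weights, intercept):
    # C = e^eta / (1 + e^eta): a value near 1 favors the class coded
    # as "1" (here assumed to be "local"), a value near 0 the other class.
    eta = linear_predictor(features, weights, intercept)
    return math.exp(eta) / (1.0 + math.exp(eta))

# Example with three hypothetical features and weights:
# print(local_probability([0.8, 2.0, 0.1], [0.5, 1.2, -0.3], -2.0))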
Another desideratum for our classifier is its ability to support
different costs for the two possible kinds of errors (misclassifying
local queries versus misclassifying global queries).
Which kind of error is the most important may vary for
different settings; for our search modification application,
we consider the misclassification of global queries as local
ones a more serious error. This is because during our subsequent
modification of the returned results (Section 6), we
reorder the results for some of the queries that we consider
global, but we modify the original queries for some of the
queries classified as local, returning potentially very different
results. Consequently, the results can change more significantly
for a query classified as local, and the potential for
error is higher when a global query is labeled local than the
other way around.
Both
Ripper and log-linear regression can incorporate different
costs for each type of error. We experimented with
a third classification approach that also supports this feature
, Support Vector Machines (SVMs) [2], which have been
found quite effective for text matching problems [11]. SVM
classifiers conceptually convert the original measurements of
the features in the data to points in a high-dimensional space
that facilitates the separation between the two classes more
than the original representation. While the transformation between the original and the high-dimensional space may be complex, it need not be carried out explicitly. Instead,
it is sufficient to calculate a kernel function that only involves
dot products between the transformed data points,
and can be calculated directly in the original feature space.
We report experiments with two of the most common kernel
functions: a linear kernel,

K(x, y) = x \cdot y + 1

and a Gaussian (radial basis function) kernel,

K(x, y) = e^{-\|x - y\|^2 / 2\sigma^2}

where σ is a parameter (representing the standard deviation of the underlying distribution). This latter kernel has been
recommended for text matching tasks [10]. Regardless of the
choice of kernel, determining the optimal classifier is equivalent
to determining the hyperplane that maximizes the total
distance between itself and representative transformed data
points (the support vectors).
Finding the optimal classifier
therefore becomes a constrained quadratic optimization
problem. In our experiments, we use the SVM-Light implementation⁹ [12].

⁷ This is because the modeled distribution is binomial rather than normal, and hence the variance depends on the mean; see [19] for the technical details.
⁸ http://www.r-project.org/

Table 1: Distribution of global and local queries in our training, development, and test sets.

Set           Original number of queries   Number of appropriate queries   Global        Local
Training      595                          439                             368 (83.8%)   71 (16.2%)
Development   199                          148                             125 (84.5%)   23 (15.5%)
Test          497                          379                             334 (88.1%)   45 (11.9%)
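For reference, the two kernel functions used in our SVM experiments can be written compactly as follows (a minimal NumPy sketch; sigma is the Gaussian kernel parameter discussed above):

import numpy as np

def linear_kernel(x, y):
    # K(x, y) = x . y + 1
    return float(np.dot(x, y)) + 1.0

def gaussian_kernel(x, y, sigma):
    # K(x, y) = exp(-||x - y||^2 / (2 * sigma^2))
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2)))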
In many binary classification tasks, one of the two classes
predominates, and thus trained classifiers tend to favor that
class in the absence of strong evidence to the contrary. This
certainly applies to our task; as we show in Section 5.1,
83-89% of web queries are global. Weiss and Provost [21]
showed that this imbalance can lead to inferior classifier
performance on the test set, and that the problem can be
addressed through oversampling of the rarer class in the
training data. Their method examines different oversampling
rates by constructing artificial training sets where the
smaller class is randomly oversampled to achieve a specific
ratio between samples from the two classes. For each such
sampling ratio, a classifier is trained, which assigns a score
to each object indicating strength of evidence for one of the
classes. By fixing a specific strength threshold, we divide
the classifier output into the two classes. Further, by varying
this threshold
10
we can obtain an error-rate curve for
each class as a function of the threshold. The entire process
results in a Receiver-Operator Characteristic (ROC) curve
[3] for each sampling ratio. Specific points on the curve that
optimize the desired combination of error rates can then be
selected, and the performance of the classification method
across the different thresholds can be measured from the
area between the curve and the x-axis. Weiss and Provost
use the C4.5 classifier [17], a decision tree classifier with
additional pruning of nodes to avoid overfitting. We use a
software package provided by them (and consequently also
the C4.5 algorithm) to explore the effect that different ratios
of global to local queries during training have on classifier
performance.
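The construction of the artificial training sets with a given class ratio can be sketched as follows (Python; the data representation and the rounding are our own assumptions):

import random

def resample_to_proportion(examples, minority_label, target_proportion, seed=0):
    # `examples` is a list of (features, label) pairs. The minority
    # (local) class is randomly duplicated until it makes up
    # `target_proportion` of the returned training set; for target
    # proportions below the natural one it would instead be randomly
    # undersampled (omitted here for brevity).
    rng = random.Random(seed)
    minority = [e for e in examples if e[1] == minority_label]
    majority = [e for e in examples if e[1] != minority_label]
    # Solve m / (m + len(majority)) = target_proportion for m.
    needed = int(round(target_proportion * len(majority) /
                       (1.0 - target_proportion)))
    resampled = [rng.choice(minority) for _ in range(needed)]
    training_set = majority + resampled
    rng.shuffle(training_set)
    return training_set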
EXPERIMENTAL RESULTS
We now describe the data (Section 5.1) and metrics (Section
5.2) that we use for the experimental evaluation of the
query classifiers (Section 5.3).
5.1
Data
For the experiments reported in this paper, we used a sample
of real queries submitted to the Excite search engine.¹¹ We had access to a portion of the December 1999 query log of
Excite, containing 2,477,283 queries. We randomly selected
initial sets of queries for training, development (tuning the
parameters of the classifiers we train), and testing purposes
by selecting each of these queries for inclusion in each set
with a constant (very small) probability. These probabilities
were set to 400/2,477,283, 400/2,477,283, and 500/2,477,283
for the three sets, respectively.

⁹ Available from http://svmlight.joachims.org/.
¹⁰ Setting the threshold to each extreme assigns all or none of the data points to that category.
¹¹ http://www.excite.com/

Subsequently we combined
the training and development set, and reassigned the queries
in the combined set so that three-fourths were placed in the
training set and one-fourth in the development; we kept the
test set separate. This process generated 595, 199, and 497
queries in the initial versions of the training, development,
and test sets. We further eliminated queries that passed any
of the following tests:

- Upon examination, they appeared likely to produce results with explicit sexual content.

- When supplied to Google--and after filtering out any non-HTML results and any broken links--the queries produced fewer than 40 files. This constraint is meant to ensure that we are not including in our experimental data queries that contain misspellings or deal with extremely esoteric subjects, for which not enough material for determining locality would be available.

- They had already been included in an earlier set (we constructed first the training set, then the development set, and finally the test set). Since multiple people may issue the same query, duplicates can be found in the log. Although our algorithms take no special advantage of duplicates, we eliminated them to avoid any bias. Taking into account variations of upper/lower case and spacing between queries (but not word order), this constraint removed 6 queries from the test set.
These filtering steps left us with 439 queries in the training
set, 148 queries in the development set, and 379 queries in
the test set.
We then classified the queries using the criteria laid out
in Section 3. Table 1 shows the size of the three sets before
and after filtering, and the distribution of local and global
queries in each set. We observe that, in general, most queries
(83-89%) tend to be global.
5.2
Evaluation Metrics
We consider a number of evaluation metrics to rate the
performance of the various classifiers and their configurations
. Since a large majority of the queries are global (85.6%
in the training, development, and test sets combined), overall
classification accuracy (i.e., the percentage of correct
classification decisions) may not be the most appropriate
measure. This is because a baseline method that always
suggests the most populous class ("global") will have an accuracy
equal to the proportion of global queries in the evaluated
set. Yet such a classifier will offer no improvement
during search since it provides no new information. The situation
is analogous to applications in information retrieval
or medicine where very few of the samples should be labeled
positive (e.g., in a test for a disease that affects only 0.1%
of patients). While we do not want overall accuracy to decrease
from the baseline (at least not significantly), we will
utilize measures that capture the classifier's improved ability
to detect the rarer class relative to the baseline method.
Two standard such metrics are precision and recall for
the local queries. Precision is the ratio of the number of
items correctly assigned to the class divided by the total
number of items assigned to the class. Recall is the ratio
of the number of items correctly assigned to a class as compared
with the total number of items in the class. Note that
the baseline method achieves precision of 100% but recall of
0%. For a given classifier with adjustable parameters, often
precision can be increased at the expense of recall, and vice
versa; therefore we also compute the F-measure [20] (with
equal weights) to combine precision and recall into a single
number,

\text{F-measure} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
Finally, we argued earlier that one kind of misclassification
errors may be assigned a higher cost. We can then calculate
the average cost [15],
Average cost =
X{
Global
,
Local
}
Cost(
X) Rate(X)
where Cost(
X) is the cost of wrong X classifications and
Rate(
X) is the rate of wrong X classifications. Average cost
is the measure to minimize from a decision theory perspective
. The rate of wrong classifications for a class is equal
to the number of data points that have been misclassified
into that class divided by the total number of classification
decisions, and the costs for each misclassification error are
predetermined parameters. If both costs are set to 1, then
the average cost becomes equal to the total error rate, i.e.,
one minus accuracy. In our experiments, we report the average
cost considering the mislabeling of global queries as
local twice as important as the mislabeling of local queries,
for the reasons explained in the previous section.
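The metrics above can be computed directly from the confusion counts, as in this sketch (Python; the 2:1 weighting is passed in through the two cost parameters, and the empty-prediction convention matches the baseline discussion above):

def evaluate(true_labels, predicted_labels,
             cost_global_as_local=2.0, cost_local_as_global=1.0):
    pairs = list(zip(true_labels, predicted_labels))
    n = len(pairs)
    tp = sum(1 for t, p in pairs if t == "local" and p == "local")
    fp = sum(1 for t, p in pairs if t == "global" and p == "local")
    fn = sum(1 for t, p in pairs if t == "local" and p == "global")
    # Precision/recall for the local class; predicting no local queries
    # is scored as 100% precision, 0% recall, as for the baseline.
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    # Average cost = sum over classes X of Cost(X) * Rate(X), where
    # Rate(X) is the fraction of all decisions wrongly assigned to X.
    average_cost = (cost_global_as_local * fp + cost_local_as_global * fn) / n
    accuracy = sum(1 for t, p in pairs if t == p) / n
    return precision, recall, f_measure, average_cost, accuracy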
5.3
Results
We trained the classifiers of Section 4.2 on the 439 queries
in our training set.
Ripper and the regression model were
trained on that training set without modification. For C4.5
and SVMs, we explored the effect that different proportions
of local queries in the training set have on overall performance
. For that purpose, we used our development set to
evaluate the performance effects of different local query proportions
, and select the optimal classifier within each family.
For the C4.5-based classifier, we used the C4.4 software
provided by Foster Provost and Claudia Perlich to explore
the effect of different proportions of local and global queries.
We created training sets by randomly oversampling or undersampling
the minority (local) class as needed, in increments
of 10%.
For any given proportion of local queries
between 10% and 90%, we started from our training set,
modified it according to the above sampling method to have
the desired proportion of local queries, trained the corresponding
C4.5 classifier, and evaluated its performance on
our development set. The natural proportion of the local
class in the unmodified training data is also included as one
of the proportions used to build and evaluate a classifier. In
this manner, we obtain curves for the various metrics as the
proportion of local queries varies (Figure 1).

Figure 1: Evaluation metrics for C4.5 classifiers trained on different proportions of local queries.

We observe
that the highest value for precision and F-measure, and the
lowest value for the average cost, are obtained when the
classifier is trained with a significantly amplified proportion
of local queries (80%). Further, running C4.5 with 80% local
queries also produced the largest area under the ROC
curve obtained when different precision/recall tradeoffs in
the development set are explored. On the basis of this information
, we selected the proportion of 80% local queries
as the optimal configuration for C4.5. We refer to that configuration
as C4.5(80), and this is the version of C4.5 that
we evaluated on the test set.
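The selection of the best proportion on the development set amounts to a simple sweep, sketched below (Python; resample and train_and_score stand for the resampling step and for whichever learner and evaluation metric are being tuned, and are assumed helpers):

def select_proportion(train, dev, resample, train_and_score,
                      proportions=(0.1, 0.2, 0.3, 0.4, 0.5,
                                   0.6, 0.7, 0.8, 0.9)):
    # `resample(train, p)` builds a training set whose local-query
    # proportion is p (e.g. the resampling sketch above), and
    # `train_and_score(training_set, dev)` trains a classifier and
    # returns its average cost on the development set.
    best_p, best_cost = None, float("inf")
    for p in proportions:
        cost = train_and_score(resample(train, p), dev)
        if cost < best_cost:
            best_p, best_cost = p, cost
    return best_p, best_cost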
Using our own implementation for constructing extended
training sets with a given proportion of local queries, we
performed similar experiments for Support Vector Machines
with linear and Gaussian kernels. For these classifiers, we
also experimented with versions trained with equal error
costs for the two kinds of classification errors, and with versions
where, during training, a false local assignment counts
for twice as much as a false global assignment. We found
that the optimal proportion of local queries is closer to the
natural proportion with SVMs compared to C4.5 classifiers;
the proportions chosen from our development set were 50%
for the linear SVM classifier with equal error costs, 30% for
the linear SVM classifier with unequal error costs, 30% for
the Gaussian SVM classifier with equal error costs, and 20%
for the Gaussian SVM classifier with unequal error costs.
We denote the optimal classifiers from these four families
as SVM-Linear-E(50), SVM-Linear-U(30), SVM-Gaussian-E
(30), and SVM-Gaussian-U(20), respectively.
Figure 2
shows the curve obtained for the SVM-Gaussian-U family
of classifiers.
Having determined the best value for the proportion of
local queries for C4.5 and SVM-based classifiers, we evaluate
these classifiers, as well as the classifiers obtained from
Ripper and log-linear regression, on our test set.¹² Table 2 shows the values of the evaluation metrics obtained on the unseen test set.

¹² We also experimented with variable error costs for the Ripper classifier, using the same 2:1 error cost correspondence, but the resulting classifier was identical to the Ripper classifier obtained with equal error costs.

Table 2: Evaluation metrics on the test set of selected classifiers optimized over the development set.

Classifier                 Recall    Precision   F-Measure   Average Cost   Accuracy
Ripper                     53.33%    47.06%      50.00%      0.1979         87.34%
Log-linear Regression      37.78%    58.62%      45.95%      0.1372         89.45%
C4.5(80)                   40.00%    32.73%      36.00%      0.2665         83.11%
SVM-Linear-E(50)           48.89%    48.89%      48.89%      0.1821         87.86%
SVM-Linear-U(30)           48.89%    53.66%      51.16%      0.1609         88.92%
SVM-Gaussian-E(30)         37.78%    53.13%      44.16%      0.1530         88.65%
SVM-Gaussian-U(20)         37.78%    53.13%      44.16%      0.1530         88.65%
Baseline (always global)   0.00%     100.00%     0.00%       0.1187         88.13%

Figure 2: Evaluation metrics for Support Vector Machines with Gaussian kernel and false local assignments weighted twice as much as false global assignments, trained on different proportions of local queries.

The classifier using a linear kernel SVM with unequal error costs achieves the highest F-measure,
while the log-linear classifier achieves the lowest average
classification cost. As expected, the SVM classifiers that
were trained with unequal error costs achieve the same or
lower average cost (which also utilizes the same unequal error
costs) compared to their counterparts trained with equal
error costs. Overall,
Ripper, log-linear regression, and the
two SVM classifiers with linear kernels achieve the highest
performance, with small differences between them. They
are followed by the two SVM classifiers with a Gaussian
kernel function, while C4.5 trails significantly behind the
other classifiers.
The features used for classification vary considerably from
classifier to classifier.¹³
Ripper achieves one of the best classification
performances using only one simple rule, based
only on the average number of city locations per returned
web page: if that number exceeds a threshold, the query is
classified as local, otherwise as global. On the other hand,
the C4.5 and SVM classifiers utilize all or almost all the
features. The log-linear regression classifier falls in-between
these two extremes, and primarily utilizes the average numbers
of unique city, state, and country names per retrieved
page, as well as the total number of unique locations per
page (4 features).
For concreteness, and to conclude our discussion, Table 3 shows the performance of our classifiers on a few representative examples of local and global queries.

¹³ Most classifiers automatically ignore some of the provided features, to avoid overfitting.
IMPROVING SEARCH RESULTS
The core of this paper is on classifying queries as either
local or global. In this section, we present preliminary ideas
on how to exploit this classification to improve the quality
of the query results. Further exploration of these and other
directions is the subject of interesting future work.
Consider a query that has been classified as local using
the techniques of Section 4.
By definition, this query is
best answered with "localized" pages. We can easily determine
if the query includes any location name by using the
dictionary-based approach of Section 4.1. If no locations are
present in the query (e.g., as in query [houses for sale]), in
the absence of further information we can attempt to "localize"
the query results to the geographical area of the user
issuing the query, for which we can rely on registration information
provided by the user, for example. Consequently,
we can simply expand the query by appending the user's
location to it, to turn, say, the query [houses for sale] into
[houses for sale San Francisco] for a San Francisco resident.
Alternatively, a search engine might attempt to obtain additional
information from the user to further localize the query
as appropriate. For example, the query [houses for sale] can
then be transformed into [houses for sale New York City] for
a San Francisco resident who is moving to New York City. In
either case, the expanded query will tend to produce much
more focused and localized results than the original query
does. As of the writing of this paper, all of the top-10 results
returned by Google for query [houses for sale San Francisco]
are results of relevance to a person interested in Bay Area
real estate. In contrast, most of the results for the original
query, [houses for sale], are irrelevant to such a person, as
discussed in the Introduction. An alternative, more expensive
strategy for handling these queries is to compute and
exploit the geographical scope of web pages as defined in [9].
Then, pages with a geographical scope that includes the location
of the user issuing the query would be preferred over
other pages. In contrast, a local query in which locations are
mentioned is likely to return pages with the right locality,
making any further modification of the query or reranking
of the results unnecessary.
Consider now a query that has been classified as global
using the techniques of Section 4. By definition, this query
is best answered with "broad" pages. Rather than attempting
to modify a global query so that it returns "broad"
pages, we can follow a result reranking strategy to privilege
these pages over more localized ones. One possible reranking
strategy is to reorder the results from, say, Google for
Class    Query                                             Ripper   Regression   C4.5(80)   SVM-LE    SVM-LU    SVM-GE    SVM-GU
Global   [Perl scripting]                                  Global   -0.9381      Global     -1.9163   -1.7882   -1.0627   -1.0590
         [world news]                                      Global   -0.8306      Local      -0.5183   -0.3114   -0.4166   -0.1440
         [wildflowers]                                     Global   -0.5421      Global     -0.7267   -0.8082   -0.8931   -0.8144
         [Elgin marbles]                                   Local    0.4690       Local      0.6426    0.6654    0.0378    0.1016
         [Galapagos Islands]                               Global   -0.7941      Global     -1.2834   -1.1998   -0.9826   -0.8575
         [Boston zip code]                                 Local    -0.0243      Local      0.6874    0.6152    0.0408    0.0797
         [Woody Allen NYC]                                 Global   -0.2226      Global     -0.3253   -0.3541   -0.6182   -0.5272
Local    [houses for sale]                                 Global   -0.6759      Global     -1.0769   -1.0962   -0.9242   -0.8516
         [Volkswagen clubs]                                Local    -0.0933      Global     1.0844    0.7917    0.0562    0.0837
         [Wisconsin Christmas tree producers association]  Global   0.1927       Local      -0.1667   -0.4421   -0.4461   -0.3582
         [New York style pizza delivery]                   Global   -0.0938      Global     -0.5945   -0.6809   -0.5857   -0.4824
Table 3: Classification assignments made by different classifiers on several example queries. SVM-LE, SVM-LU
, SVM-GE, and SVM-GU stand for classifiers SVM-Linear-E(50), SVM-Linear-U(30), SVM-Gaussian-E
(30), and SVM-Gaussian-U(20), respectively. For regression and SVM classifiers, positive numbers indicate
assignment to the local class, and negative numbers indicate assignment to the global class; the absolute
magnitude of the numbers increases as the classifier's confidence in its decision increases.
(We linearly
transformed the regression output from the (0, 1) to the (
-1, 1) range.) The scale of the numbers is
consistent across queries and between all SVM classifiers, but not directly comparable between regression
classifiers (bound between
-1 and 1) and SVM classifiers (unbounded).
the unmodified query based on the geographical scope of the
pages as defined in [9]. Thus pages with a broad geographical
scope (e.g., covering the entire United States) would
prevail over other pages with a narrower scope. A less expensive
alternative is to classify the result pages as local
or global following a procedure similar to that of Section 4
for queries.
Specifically, we implemented this alternative
by training C4.5
Rules, a rule-based version of the C4.5
decision-tree classifier, with a collection of 140 web pages
categorized in the Yahoo! directory. Pages classified under
individual states in the "Regional" portion of the directory
were regarded as local, while pages under general categories
were regarded as global. The feature representation for the
pages was analogous to that for the queries in Section 4.1
but restricted to features that are meaningful over individual
pages (e.g., total number of locations on a page), as opposed
to over a collection of pages (e.g., minimum number of locations
per page in the top-50 result pages for a query). At
query time, we reorder the results so as to privilege global
pages over local ones. This is based on the locality classification
of the pages, which can be precomputed off-line (since it is query-independent) or performed on the fly, as we do in our prototype implementation. This procedure is efficient,
and produced promising initial results for a handful of global
queries (e.g., [wildflowers]) that we tried.
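A sketch of this reranking step is shown below (Python; page_is_global stands for the page-level classifier described above and is an assumed helper):

def rerank_for_global_query(results, page_is_global):
    # Stable partition: keep the search engine's original order within
    # each group, but move pages classified as global ahead of pages
    # classified as local.
    global_pages = [r for r in results if page_is_global(r)]
    local_pages = [r for r in results if not page_is_global(r)]
    return global_pages + local_pages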
Our preliminary approach to query modification is therefore
as follows: Given a query specified by the user, we supply
first the unmodified query to the search engine and collect
the top 50 results.
We extract location names from
these results,¹⁴ and calculate the features of Section 4.1. Using one of the best performing classifiers of Section 4, we determine if the query is global or local. If it is local and contains at least one location name, nothing is done--the results returned from the unmodified query are presented to the user. If the query is local and contains no location, we add the user's location (or, alternatively, request further information from the user, as discussed), reissue the query and present the results. Finally, if the query is global, we calculate the scope of each retrieved web page using part of the location features computed earlier and the C4.5 Rules classifier, and rerank the results so that more global pages are higher in the list shown to the user. We have built a prototype implementation of this algorithm, using the classifier obtained from Ripper (because of the relative simplicity of its rules) for query classification, and Google as the search engine.

¹⁴ As noted earlier, these names could be cached along with each web page at the time of indexing, to increase efficiency.
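Putting the pieces together, the prototype's control flow can be sketched as follows (Python; all of the helper functions passed in are assumptions standing for the components described above, not their actual implementations):

def answer_query(query, user_location, search, classify_query,
                 contains_location, rerank_global):
    # `search` returns a ranked list of result pages for a query,
    # `classify_query` labels the query "local" or "global" from those
    # results, `contains_location` checks the query against the location
    # database, and `rerank_global` privileges broad pages (all assumed).
    results = search(query, max_results=50)
    if classify_query(query, results) == "local":
        if contains_location(query):
            return results                # already localized; keep as is
        # Local query with no location: append the user's location
        # (or ask for one) and reissue the query.
        return search(query + " " + user_location, max_results=50)
    # Global query: reorder so that more global pages rank higher.
    return rerank_global(results)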
CONCLUSION
We have described an attribute of queries, locality, that--to the best of our knowledge--has not been explored before
either in theoretical work or in practical search engines but
can significantly affect the appropriateness of the results
returned to the user. We defined a categorization scheme
for queries based on their geographical locality, and showed
how queries can be represented for purposes of determining
locality by features based on location names found in the
results they produce. Using these features, automatic classifiers
for determining locality can be built. We explored
several state-of-the-art classification approaches, and evaluated
their performance on a large set of actual queries. The
empirical results indicated that for many queries locality can
be determined effectively.
The bulk of the paper discussed methods for classifying
queries according to locality, and empirically established
that this is desirable and feasible for many queries.
We
also presented some first thoughts on possible query reformulation and result reranking strategies that utilize locality
information to actually improve the results the user sees.
Although our strategies for query modification and result
reranking are preliminary, they illustrate a promising family
of approaches that we plan to investigate in the future
so that we can exploit the classification of queries based on
their geographical locality in order to improve search result
quality.
Acknowledgments
This material is based upon work supported in part by the
National Science Foundation under Grants No. IIS-97-33880
and IIS-98-17434. We are grateful to Claudia Perlich and
Foster Provost for providing us with their adaptation of the
C4.5 classifier that we used in our experiments. Also, we
would like to thank Thorsten Joachims for answering our
questions on SVM-Light, and David Parkes for his helpful
comments and insight.
REFERENCES
[1] D. M. Bates and D. G. Watts. Nonlinear Regression
Analysis and its Applications. Wiley, New York, 1988.
[2] B. E. Boser, I. M. Guyon, and V. Vapnik. A training
algorithm for optimal margin classifiers. In
Proceedings of the Fifth Annual Workshop on
Computational Learning Theory, Pittsburgh, 1992.
[3] A. Bradley. The use of the area under the ROC curve
in the evaluation of machine learning algorithms.
Pattern Recognition, 30(7):1145-1159, 1998.
[4] S. Brin and L. Page. The anatomy of a large-scale
hypertextual web search engine. In Proceedings of the
Seventh International World Wide Web Conference
(WWW7), Apr. 1998.
[5] C. Buckley, J. Allan, G. Salton, and A. Singhal.
Automatic query expansion using SMART: TREC 3.
In Proceedings of the Third Text REtrieval Conference
(TREC-3), pages 69-80, April 1995. NIST Special
Publication 500-225.
[6] O. Buyukkokten, J. Cho, H. Garcia-Molina,
L. Gravano, and N. Shivakumar. Exploiting
geographical location information of web pages. In
Proceedings of the ACM SIGMOD Workshop on the
Web and Databases (WebDB'99), June 1999.
[7] S. Chakrabarti, B. Dom, P. Raghavan,
S. Rajagopalan, D. Gibson, and J. Kleinberg.
Automatic resource compilation by analyzing
hyperlink structure and associated text. In
Proceedings of the Seventh International World Wide
Web Conference (WWW7), Apr. 1998.
[8] W. W. Cohen. Learning trees and rules with
set-valued functions. In Proceedings of the Thirteenth
International Joint Conference on Artificial
Intelligence, 1996.
[9] J. Ding, L. Gravano, and N. Shivakumar. Computing
geographical scopes of web resources. In Proceedings of
the Twenty-sixth International Conference on Very
Large Databases (VLDB'00), 2000.
[10] G. W. Flake, E. J. Glover, S. Lawrence, and C. L.
Giles. Extracting query modifications from nonlinear
SVMs. In Proceedings of the Eleventh International
World-Wide Web Conference, Dec. 2002.
[11] M. A. Hearst. Trends and controversies: Support
vector machines. IEEE Intelligent Systems,
13(4):18-28, July 1998.
[12] T. Joachims. Estimating the generalization of
performance of an SVM efficiently. In Proceedings of
the Fourteenth International Conference on Machine
Learning, 2000.
[13] J. Kleinberg. Authoritative sources in a hyperlinked
environment. In Proceedings of the Ninth Annual
ACM-SIAM Symposium on Discrete Algorithms, pages
668-677, Jan. 1998.
[14] K. S. McCurley. Geospatial mapping and navigation
of the web. In Proceedings of the Tenth International
World Wide Web Conference (WWW10), May 2001.
[15] M. Pazzani, C. Merz, P. Murphy, K. Ali, T. Hume,
and C. Brunk. Reducing misclassification costs. In
Proceedings of the Eleventh International Conference
on Machine Learning, Sept. 1997.
[16] R. Purves, A. Ruas, M. Sanderson, M. Sester, M. van
Kreveld, and R. Weibel. Spatial information retrieval
and geographical ontologies: An overview of the
SPIRIT project. In Proceedings of the 25th ACM
International Conference on Research and Development
in Information Retrieval (SIGIR'02), 2002.
[17] R. J. Quinlan. C4.5: Programs for Machine Learning.
Morgan Kaufman, 1993.
[18] G. Salton. Automatic Text Processing: The
transformation, analysis, and retrieval of information
by computer. Addison-Wesley, 1989.
[19] T. J. Santner and D. E. Duffy. The Statistical Analysis
of Discrete Data. Springer-Verlag, New York, 1989.
[20] C. J. van Rijsbergen. Information Retrieval.
Butterworths, London, 2nd edition, 1979.
[21] G. M. Weiss and F. Provost. The effect of class
distribution on classifier learning: An empirical study.
Technical Report ML-TR-44, Computer Science
Department, Rutgers University, Aug. 2001.
333 | geographical locality;categorization scheme;query modification;web search;query categorization / query classification;web queries;search engines;global page;local page;information retrieval;search engine;query classification |
113 | Information Revelation and Privacy in Online Social Networks | Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catered to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users changes the highly permeable privacy preferences. | EVOLUTION OF ONLINE NETWORKING
In recent years online social networking has moved from
niche phenomenon to mass adoption. Although the concept
dates back to the 1960s (with University of Illinois Plato
computer-based education tool, see [16]), viral growth and
commercial interest only arose well after the advent of the
Internet.¹ The rapid increase in participation in very recent
years has been accompanied by a progressive diversification
and sophistication of purposes and usage patterns across a
multitude of different sites. The Social Software Weblog² now groups hundreds of social networking sites in nine categories, including business, common interests, dating, face-to-face facilitation, friends, pets, and photos.
While boundaries are blurred, most online networking
sites share a core of features: through the site an individual
offers a "profile" - a representation of their sel[ves] (and,
often, of their own social networks) - to others to peruse,
with the intention of contacting or being contacted by others
, to meet new friends or dates (Friendster,³ Orkut⁴), find new jobs (LinkedIn⁵), receive or provide recommendations (Tribe⁶), and much more.
It is not unusual for successful social networking sites to
experience periods of viral growth with participation expanding
at rates topping 20% a month. Liu and Maes estimate
in [18] that "well over a million self-descriptive personal
profiles are available across different web-based social
networks" in the United States, and Leonard, already in
2004, reported in [16] that world-wide "[s]even million people
have accounts on Friendster. [...] Two million are registered
to MySpace. A whopping 16 million are supposed to
have registered on Tickle for a chance to take a personality
test."
The success of these sites has attracted the attention of
the media (e.g., [23], [3], [16], [4], [26]) and researchers. The
latter have often built upon the existing literature on social
network theory (e.g., [20], [21], [11], [12], [32]) to discuss
its online incarnations. In particular, [7] discusses issues of
trust and intimacy in online networking; [9] and [8] focus
on participants' strategic representation of their selves to
others; and [18] focus on harvesting online social network
profiles to obtain a distributed recommender system.
In this paper, we focus on patterns of personal information
revelation and privacy implications associated with online
networking. Not only are the participation rates to online
social networking staggering among certain demographics; so, also, are the amount and type of information participants freely reveal.

¹ One of the first networking sites, SixDegrees.com, was launched in 1997 but shut down in 2000 after "struggling to find a purpose for [its] concept" [5].
² Http://www.socialsoftware.weblogsinc.com/
³ Http://www.friendster.com/
⁴ Http://www.orkut.com/
⁵ Http://www.linkedin.com/
⁶ Http://www.tribe.net/

Category-based representations of a person's
broad interests are a recurrent feature across most networking
sites [18]. Such categories may include indications of a
person's literary or entertainment interests, as well as political
and sexual ones. In addition, personally identified
or identifiable data (as well as contact information) are often
provided, together with intimate portraits of a person's
social or inner life.
Such apparent openness to reveal personal information to
vast networks of loosely defined acquaintances and complete
strangers calls for attention. We investigate information revelation
behavior in online networking using actual field data
about the usage and the inferred privacy preferences of more
than 4,000 users of a site catered to college students, the
Facebook.⁷ Our results provide a preliminary but detailed
picture of personal information revelation and privacy concerns
(or lack thereof) in the wild, rather than as discerned
through surveys and laboratory experiments.
The remainder of this paper is organized as follows. We
first elaborate on information revelation issues in online social
networking in Section 2. Next, we present the results
of our data gathering in Section 3. Then, we discuss their
implications in terms of users attitudes and privacy risks in
Section 4. Finally, we summarize our findings and conclude
in Section 5.
INFORMATION REVELATION AND ONLINE SOCIAL NETWORKING
While social networking sites share the basic purpose of
online interaction and communication, specific goals and
patterns of usage vary significantly across different services.
The most common model is based on the presentation of the
participant's profile and the visualization of her network of
relations to others - such is the case of Friendster. This
model can stretch towards different directions. In matchmaking sites, like Match.com⁸ or Nerve⁹ and Salon¹⁰ Personals, the profile is critical and the network of relations is absent. In diary/online journal sites like LiveJournal,¹¹ profiles become secondary, networks may or may not be visible
, while participants' online journal entries take a central
role. Online social networking thus can morph into online
classified in one direction and blogging in another.
Patterns of personal information revelation are, therefore,
quite variable.
First, the pretense of identifiability changes across different
types of sites.
The use of real names to (re)present
an account profile to the rest of the online community may
be encouraged (through technical specifications, registration
requirements, or social norms) in college websites like the
Facebook, that aspire to connect participants' profiles to
their public identities. The use of real names may be tolerated but filtered in dating/connecting sites like Friendster,
that create a thin shield of weak pseudonymity between the
public identity of a person and her online persona by making
only the first name of a participant visible to others,
and not her last name.

⁷ Http://www.facebook.com/
⁸ Http://www.match.com/
⁹ Http://personals.nerve.com/
¹⁰ Http://personals.salon.com/
¹¹ Http://www.livejournal.com/

Or, the use of real names and personal
contact information could be openly discouraged, as in
pseudonymous-based dating websites like Match.com, that
attempt to protect the public identity of a person by making
its linkage to the online persona more difficult. However,
notwithstanding the different approaches to identifiability,
most sites encourage the publication of personal and identifiable
personal photos (such as clear shots of a person's
face).
Second, the type of information revealed or elicited often
orbits around hobbies and interests, but can stride from
there in different directions. These include: semi-public information
such as current and previous schools and employers
(as in Friendster); private information such as drinking
and drug habits and sexual preferences and orientation (as
in Nerve Personals); and open-ended entries (as in LiveJournal
).
Third, visibility of information is highly variable. In certain
sites (especially the ostensibly pseudonymous ones) any
member may view any other member's profile. On weaker-pseudonym
sites, access to personal information may be limited
to participants that are part of the direct or extended
network of the profile owner. Such visibility tuning controls
become even more refined on sites which make no pretense
of pseudonymity, like the Facebook.
And yet, across different sites, anecdotal evidence suggests
that participants are happy to disclose as much information
as possible to as many people as possible. It is not unusual
to find profiles on sites like Friendster or Salon Personals
that list their owners' personal email addresses (or link to
their personal websites), in violation of the recommendation
or requirements of the hosting service itself. In the next subsection, we resort to the theory of social networks to frame
the analysis of such behavior, which we then investigate em-pirically
in Section 3.
2.1
Social Network Theory and Privacy
The relation between privacy and a person's social network
is multi-faceted. In certain occasions we want information
about ourselves to be known only by a small circle
of close friends, and not by strangers. In other instances,
we are willing to reveal personal information to anonymous
strangers, but not to those who know us better.
Social network theorists have discussed the relevance of
relations of different depth and strength in a person's social
network (see [11], [12]) and the importance of so-called
weak ties in the flow of information across different nodes
in a network. Network theory has also been used to explore
how distant nodes can get interconnected through relatively
few random ties (e.g., [20], [21], [32]).
The privacy relevance
of these arguments has recently been highlighted by
Strahilevitz in [27].
Strahilevitz has proposed applying formal social network
theory as a tool for aiding interpretation of privacy in legal
cases. He suggests basing conclusions regarding privacy "on
what the parties should have expected to follow the initial
disclosure of information by someone other than the defendant" (op cit, p. 57). In other words, the consideration
of how information is expected to flow from node to node
in somebody's social network should also inform that person's
expectations for privacy of information revealed in the
network.
However, the application of social network theory to the
study of information revelation (and, implicitly, privacy choices)
in online social networks highlights significant differences between
the offline and the online scenarios.
First, offline social networks are made of ties that can only
be loosely categorized as weak or strong ties, but in reality
are extremely diverse in terms of how close and intimate a
subject perceives a relation to be. Online social networks,
on the other side, often reduce these nuanced connections
to simplistic binary relations: "Friend or not" [8]. Observing
online social networks, Danah Boyd notes that "there
is no way to determine what metric was used or what the
role or weight of the relationship is. While some people are
willing to indicate anyone as Friends, and others stick to a
conservative definition, most users tend to list anyone who
they know and do not actively dislike. This often means
that people are indicated as Friends even though the user
does not particularly know or trust the person" [8] (p. 2).
Second, while the number of strong ties that a person
may maintain on a social networking site may not be significantly
increased by online networking technology, Donath
and Boyd note that "the number of weak ties one can
form and maintain may be able to increase substantially,
because the type of communication that can be done more
cheaply and easily with new technology is well suited for
these ties" [9] (p. 80).
Third, while an offline social network may include up to a dozen intimate or significant ties and 1000 to 1700 "acquaintances" or "interactions" (see [9] and [27]), an online social network can list hundreds of direct "friends" and include
hundreds of thousands of additional friends within just
three degrees of separation from a subject.
This implies that online social networks are both vaster and made up of weaker ties, on average, than offline social networks. In other words, thousands of users may be classified
as friends of friends of an individual and become able to
access her personal information, while, at the same time,
the threshold to qualify as friend on somebody's network
is low. This may make the online social network only an
imaginary (or, to borrow Anderson's terminology, an imagined
) community (see [2]). Hence, trust in and within online
social networks may be assigned differently and have a different
meaning than in their offline counterparts. Online
social networks are also more levelled, in that the same information
is provided to larger amounts of friends connected
to the subject through ties of different strength. And here
lies a paradox. While privacy may be considered conducive
to and necessary for intimacy (for [10], intimacy resides in
selectively revealing private information to certain individuals
, but not to others), trust may decrease within an online
social network. At the same time, a new form of intimacy
becomes widespread: the sharing of personal information
with large and potentially unknown numbers of friends and
strangers altogether. The ability to meaningfully interact
with others is mildly augmented, while the ability of others
to access the person is significantly enlarged. It remains to
be investigated how similar or different are the mental models
people apply to personal information revelation within
a traditional network of friends compared to those that are
applied in an online network.
2.2
Privacy Implications
Privacy implications associated with online social networking
depend on the level of identifiability of the information
provided, its possible recipients, and its possible uses. Even
social networking websites that do not openly expose their
users' identities may provide enough information to identify
the profile's owner. This may happen, for example, through
face re-identification [13]. Liu and Maes estimate in [18] a
15% overlap in 2 of the major social networking sites they
studied. Since users often re-use the same or similar photos
across different sites, an identified face can be used to identify
a pseudonym profile with the same or similar face on
another site. Similar re-identifications are possible through
demographic data, but also through category-based representations
of interests that reveal unique or rare overlaps of
hobbies or tastes. We note that information revelation can
work in two ways: by allowing another party to identify a
pseudonymous profile through previous knowledge of a sub-ject's
characteristics or traits; or by allowing another party
to infer previously unknown characteristics or traits about a
subject identified on a certain site. We present evaluations
of the probabilities of success of these attacks on users of a
specific networking site in Section 4.
To whom may identifiable information be made available?
First of all, of course, the hosting site, that may use and
extend the information (both knowingly and unknowingly
revealed by the participant) in different ways (below we discuss
extracts from the privacy policy of a social networking
site that are relevant to this discussion).
Obviously, the
information is available within the network itself, whose extension
in time (that is, data durability) and space (that is,
membership extension) may not be fully known or knowable
by the participant. Finally, the easiness of joining and
extending one's network, and the lack of basic security measures
(such as SSL logins) at most networking sites make it
easy for third parties (from hackers to government agencies)
to access participants data without the site's direct collaboration
(already in 2003, LiveJournal used to receive at least
five reports of ID hijacking per day, [23]).
How can that information be used? It depends on the
information actually provided - which may, in certain cases,
be very extensive and intimate. Risks range from identity
theft to online and physical stalking; from embarrassment
to price discrimination and blackmailing.
Yet, there are
some who believe that social networking sites can also offer
the solution to online privacy problems. In an interview,
Tribe.net CEO Mark Pincus noted that "[s]ocial networking
has the potential to create an intelligent order in the current
chaos by letting you manage how public you make yourself
and why and who can contact you." [4]. We test this position
in Section 4.
While privacy may be at risk in social networking sites,
information is willingly provided. Different factors are likely
to drive information revelation in online social networks.
The list includes signalling (as discussed in [9]), because the
perceived benefit of selectively revealing data to strangers
may appear larger than the perceived costs of possible privacy
invasions; peer pressure and herding behavior; relaxed
attitudes towards (or lack of interest in) personal privacy;
incomplete information (about the possible privacy implications
of information revelation); faith in the networking
service or trust in its members; myopic evaluation of privacy
risks (see [1]); or also the service's own user interface,
that may drive the unchallenged acceptance of permeable
default privacy settings.
We do not attempt to ascertain the relative impact of
different drivers in this paper. However, in the following
sections we present data on actual behavioral patterns of
information revelation and inferred privacy attitudes in a
college-targeted networking site. This investigation offers
a starting point for subsequent analysis of the motivations
behind observed behaviors.
THE FACEBOOK.COM
Many users of social networking sites are of college age [8],
and recent ventures have started explicitly catering to the
college crowd and, in some cases, to specific colleges (e.g.,
the Facebook.com, but also Universitysingles.ca, quad5.com,
CampusNetwork.com, iVentster.com, and others).
College-oriented social networking sites provide opportunities
to combine online and face-to-face interactions within
an ostensibly bounded domain.
This makes them different
from traditional networking sites: they are communities
based "on a shared real space" [26]. This combination may
explain the explosive growth of some of these services (according
to [26], the Facebook has spread "to 573 campuses
and 2.4 million users. [...] [I]t typically attracts 80 percent
of a school's undergraduate population as well as a smattering
of graduate students, faculty members, and recent
alumni.") Also because of this, college-oriented networks
offer a wealth of personal data of potentially great value
to external observers (as reported by [6], for example, the
Pentagon manages a database of 16-to-25-year-old US youth
data, containing around 30 million records, and continuously
merged with other data for focused marketing).
Since many of these sites require a college's email account
for a participant to be admitted to the online social network
of that college, expectations of validity of certain personal
information provided by others on the network may
increase. Together with the apparent sharing of a physical
environment with other members of the network, that
expectation may increase the sense of trust and intimacy
across the online community. And yet, since these services
can be easily accessed by outsiders (see Section 4) and since
members can hardly control the expansion of their own network
(often, a member's network increases also through the
activity of other members), such communities turn out to
be more imagined than real, and privacy expectations may
not be matched by privacy reality.
The characteristics mentioned above make college-oriented
networking sites intriguing candidates for our study of information
revelation and privacy preferences. In the rest of
this paper we analyze data gathered from the network of
Carnegie Mellon University (CMU) students enlisted on one
of such sites, the Facebook.
The Facebook has gained huge adoption within the CMU
student community but is present with similar success at
many other colleges nationwide. It validates CMU-specific
network accounts by requiring the use of CMU email addresses
for registration and login. Its interface grants participants
very granular control on the searchability and visibility
of their personal information (by friend or location,
by type of user, and by type of data). The default settings,
however, are set to make the participant's profile searchable
by anybody else in any school in the Facebook network, and
make its actual content visible to any other user at the same
college or at another college in the same physical location.
12
12
At the time of writing, the geography feature which gen-17
18 19 20 21 22 23 24 25 26 27 28 29 30 31
0
2
4
6
8
10
12
14
16
18
20
Age
Percentage of Profiles
Male
Female
Figure 1:
Age distribution of Facebook profiles at CMU.
The majority of users (95.6%) falls into the 18-24 age
bracket.
The Facebook is straightforward about the usage it plans
for the participants' personal information: at the time of
this writing, its privacy policy [30] reports that the site will
collect additional information about its users (for instance,
from instant messaging), not originated from the use of the
service itself. The policy also reports that participants' information
may include information that the participant has
not knowingly provided (for example, her IP address), and
that personal data may be shared with third parties.
3.1
Access Tools
In June 2005, we separately searched for all "female" and
all "male" profiles for CMU Facebook members using the
website's advanced search feature and extracted their profile
IDs. Using these IDs we then downloaded a total of 4540
profiles - virtually the entire CMU Facebook population at
the time of the study.
3.2
Demographics
The majority of users of the Facebook at CMU are undergraduate
students (3345 or 73.7% of all profiles; see Table
1). This corresponds to 62.1% of the total undergraduate
population at CMU [31]. Graduate students, staff and faculty
are represented to a much lesser extent (6.3%, 1.3%,
and 1.5% of the CMU population, respectively). The majority
of users is male (60.4% vs. 39.2%). Table 2 shows the
gender distribution for the different user categories. The
strong dominance of undergraduate users is also reflected in
the user age distribution shown in Figure 1. The vast majority
of users (95.6%) falls in the 18-24 age bracket. Overall
the average age is 21.04 years.
Table 1: Distribution of CMU Facebook profiles for different user categories.
The majority of users are
undergraduate students. The table lists the percentage of the CMU population (for each category) that are
users of the Facebook (if available).
Category                 # Profiles   % of Facebook Profiles   % of CMU Population
Undergraduate Students   3345         74.6                     62.1
Alumni                   853          18.8                     -
Graduate Students        270          5.9                      6.3
Staff                    35           0.8                      1.3
Faculty                  17           0.4                      1.5
Table 2: Gender distribution for different user categories.
Category                 Gender   # Profiles   % of Category   % of CMU Population
Overall                  Male     2742         60.4            -
Overall                  Female   1781         39.2            -
Undergraduate Students   Male     2025         60.5            62.0
Undergraduate Students   Female   1320         39.5            62.3
Alumni                   Male     484          56.7            -
Alumni                   Female   369          43.3            -
Graduate Students        Male     191          70.7            6.3
Graduate Students        Female   79           29.3            6.3
Staff                    Male     23           65.7            -
Staff                    Female   12           34.3            -
Faculty                  Male     17           100.0           3.4
Faculty                  Female   0            0.0             0.0
3.3 Types and Amount of Information Disclosed
The Facebook offers users the ability to disclose a large
and varied amount of personal information. We evaluated
to which extent users at CMU provide personal information.
Figure 2 shows the percentages of CMU profiles that disclose
different categories of information.
In general, CMU users of the Facebook provide an astonishing
amount of information: 90.8% of profiles contain an
image, 87.8% of users reveal their birth date, 39.9% list a
phone number (including 28.8% of profiles that contain a
cellphone number), and 50.8% list their current residence.
The majority of users also disclose their dating preferences
(male or female), current relationship status (single, married
, or in a relationship), political views (from "very liberal"
to "very conservative"), and various interests (including music
, books, and movies). A large percentage of users (62.9%)
that list a relationship status other than single even identify
their partner by name and/or link to their Facebook profile.
Note that, as further discussed below in Section 3.4, Facebook
profiles tend to be fully identified with each participant's
real first and last names, both of which are used as
the profile's name. In other words, whoever views a profile
is also able to connect the real first and last name of a person
to the personal information provided - that may include
birthday or current residence.
Across most categories, the amount of information revealed
by female and male users is very similar. A notable
exception is the phone number, disclosed by substantially
more male than female users (47.1% vs. 28.9%). Single male
users tend to report their phone numbers in even higher frequencies
, thereby possibly signalling their elevated interest
in making a maximum amount of contact information easily available.
Figure 2: Percentages of CMU profiles revealing various types of personal information (summer job, favorite movies, favorite books, favorite music, interests, political preference, relationship partner, relationship status, dating interests, high school, AIM screenname, phone, address, home town, birthday, and profile image).
Additional types of information disclosed by Facebook
users (such as the membership of one's own network of
friends at the home college or elsewhere, last login information
, class schedule, and others) are discussed in the rest
of this paper.
3.4
Data Validity and Data Identifiability
The terms of service of the site encourage users to only
publish profiles that directly relate to them and not to other
entities, people or fictional characters. In addition, in order
to sign up with the Facebook a valid email address of one
of the more than 500 academic institutions that the site
covers has to be provided. This requirement, along with the
site's mission of organizing the real life social networks of
their members, provides incentives for users to only publish
accurate information.
We tested how valid the published data appears to be. In
addition, we studied how identifiable or granular the provided
data is.
In general, determining the accuracy of the information
provided by users on the Facebook (or any other social networking
website) is nontrivial for all but selected individual
cases. We therefore restrict our validity evaluation to the
measurement of the manually determined perceived accuracy
of information on a randomly selected subset of 100 profiles.
3.4.1
Profile Names
We manually categorized the names given on Facebook
profiles as being one of the following:
1. Real Name
Name appears to be real.
2. Partial Name
Only a first name is given.
3. Fake Name
Obviously fake name.
Table 3 shows the results of the evaluation. We found 89%
of all names to be realistic and likely the true names for the
users (for example, can be matched to the visible CMU email
address provided as login), with only 8% of names obviously
fake. The percentage of people that choose to only disclose
their first name was very small: 3%.
Table 3: Categorization of name quality of a random
subset of 100 profile names from the Facebook. The
vast majority of names appear to be real names with
only a very small percentage of partial or obviously
fake names.
Category       Percentage of Facebook Profiles
Real Name      89%
Partial Name   3%
Fake Name      8%
In other words, the vast majority of Facebook users seem
to provide their fully identifiable names, although they are
not forced to do so by the site itself.
As comparison, 98.5% of the profiles that include a birthday
actually report the fully identified birth date (day, month,
and year), although, again, users are not forced to provide
the complete information (the remaining 1.5% of users reported
only the month or the month and day but not the
year of birth). Assessing the validity of birth dates is not
trivial. However, in certain instances we observed friends
posting birthday wishes in the comments section of the profile
of a user on the day that had been reported by the user
as her birthday. In addition, the incentives to provide a
fake birth date (rather than not providing one at all, which
is permitted by the system) would be unclear.
3.4.2
Identifiability of Images on Profile
The vast majority of profiles contain an image (90.8%,
see Section 3.3). While there is no explicit requirement to
provide a facial image, the majority of users do so. In order
to assess the quality of the images provided we manually
labelled them into one of four categories:
1. Identifiable
Image quality is good enough to enable person recognition
.
2. Semi-Identifiable
The profile image shows a person, but due to the image
composition or face pose the person is not directly
recognizable. Other aspects however (e.g. hair color,
body shape, etc.) are visible.
3. Group Image
The image contains more than one face and no other
profile information (e.g. gender) can be used to identify
the user in the image.
4. Joke Image
Images clearly not related to a person (e.g. cartoon or
celebrity image).
Table 4 shows the results of labelling the profile images into
the four categories. In the majority of profiles the images
are suitable for direct identification (61%). Overall, 80% of
images contain at least some information useful for identification
. Only a small subset of 12% of all images are clearly
not related to the profile user. We repeated the same evaluation
using 100 randomly chosen images from Friendster,
where the profile name is only the first name of the member
(which makes Friendster profiles not as identifiable as
Facebook ones). Here the percentage of "joke images" is
much higher (23%) and the percentage of images suitable
for direct identification lower (55%).
13
3.4.3
Friends Networks
The Facebook helps in organizing a real-life social network
online. Since Facebook users interact with many of the other
users directly in real-life, often on a daily basis, the network
of friends may function as profile fact checker, potentially
triggering questions about obviously erroneous information.
Facebook users typically maintain a very large network of
friends. On average, CMU Facebook users list 78.2 friends at
CMU and 54.9 friends at other schools. 76.6% of users have
25 or more CMU friends, whereas 68.6% of profiles show
25 or more non-CMU friends. See Figure 3 for histogram
plots of the distribution of sizes of the networks for friends
at CMU and elsewhere. This represents some effort, since
adding a friend requires explicit confirmation.
3.5
Data Visibility and Privacy Preferences
For any user of the Facebook, other users fall into four different
categories: friends, friends of friends, non-friend users
13
We note that Friendster's profiles used to be populated
by numerous fake and/or humorous profiles, also called
"Fakesters" (see [8]). Friendster management tried to eliminate
fake profiles and succeeded in significantly reducing
their number, but not completely extirpating them from the
network. Based on our manual calculations, the share of
fake Friendster profiles is currently comparable to the share
of fake Facebook profiles reported above.
Table 4: Categorization of user identifiability based on manual evaluation of a randomly selected subset of
100 images from both Facebook and Friendster profiles. Images provided on Facebook profiles are in the
majority of cases suitable for direct identification (61%). The percentage of images obviously unrelated to
a person ("joke image") is much lower for Facebook images in comparison to images on Friendster profiles
(12% vs. 23%).
Category            Percentage of Facebook Profiles   Percentage of Friendster Profiles
Identifiable        61%                               55%
Semi-Identifiable   19%                               15%
Group Image         8%                                6%
Joke Image          12%                               23%
Figure 3: Histogram of the size of networks for both CMU friends (a) and non-CMU friends (b). Users maintain large networks of friends, with the average user having 78.2 friends at CMU and 54.9 friends elsewhere.
at the same institution and non-friend users at a different
institution.
14
By default, everyone on the Facebook appears
in searches of everyone else, independent of the searcher's institutional
affiliation. In search results the users' full names
(partial searches for e.g. first names are possible) appear
along with the profile image, the academic institution that
the user is attending, and the users' status there. The Facebook
reinforces this default setting by labelling it "recommended"
on the privacy preference page. Also by default
the full profile (including contact information) is visible to
everyone else at the same institution.
Prior research in HCI has shown that users tend to not
change default settings [19]. This makes the choice of default
settings by website operators very important. On the other
hand, the site provides users a very granular and relatively
sophisticated interface to control the searchability and visibility
of their profiles. Undergrad users, for example, can
make their profiles searchable only to other undergrad users,
or only users who are friends, or users who are friends of
friends, or users at the same institution - or combinations
of the above constraints. In addition, visibility of the entire
profile can be similarly controlled. Granular control on
contact information is also provided.
Sociological theories of privacy have noted how an individual
may selectively disclose personal information to others
in order to establish different degrees of trust and intimacy
with them (see [10]). In light of these theories, we tested
14
The Facebook recently introduced a new relationship category
based on user location, e.g. Pittsburgh, which we did
not consider in this study.
how much CMU Facebook users take advantage of the ability
the site provides to manage their presentation of sel[ves].
By creating accounts at different institutions, and by using
accounts with varying degree of interconnectedness with the
rest of the CMU network, we were able to infer how individual
users within the CMU network were selecting their own
privacy preference.
3.5.1
Profile Searchability
We first measured the percentage of users that changed
the search default setting away from being searchable to
everyone on the Facebook to only being searchable to CMU
users. We generated a list of profile IDs currently in use at
CMU and compared it with a list of profile IDs visible from
a different academic institution. We found that only 1.2% of
users (18 female, 45 male) made use of this privacy setting.
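The comparison itself reduces to a set difference; a brief sketch under the assumption that the two ID lists have been collected as described (our illustration, not the authors' code):

    def searchability_opt_outs(cmu_ids, ids_visible_elsewhere):
        # Profile IDs in use at CMU that are not returned in searches run
        # from another school, i.e. users who changed the default setting.
        restricted = set(cmu_ids) - set(ids_visible_elsewhere)
        return restricted, len(restricted) / len(set(cmu_ids))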
3.5.2
Profile Visibility
We then evaluated the number of CMU users that changed
profile visibility by restricting access to CMU users.
We
used the list of profile IDs currently in use at CMU and
evaluated which percentage of profiles were fully accessible
to an unconnected user (not friend or friend of friend of any
profile). Only 3 profiles (0.06%) in total did not fall into
this category.
3.5.3
Facebook Data Access
We can conclude that only a vanishingly small number
of users change the (permissive) default privacy preferences.
In general, fully identifiable information such as personal
image and first and last name is available to anybody registered
at any Facebook member network. Since the Facebook
boasts an 80% average participation rate among undergraduate
students at the hundreds of US institutions it covers, and
since around 61% of our CMU subset provides identifiable
face images, it is relatively easy for anybody to gain access
to these data, and cheap to store a nation-wide database of
fully identified students and their IDs. In other words, information
suitable for creating a brief digital dossier consisting
of name, college affiliation, status and a profile image can be
accessed for the vast majority of Facebook users by anyone
on the website. (To demonstrate this we downloaded and
identified the same information for a total of 9673 users at
Harvard University.)
Additional personal data - such as political and sexual orientation
, residence address, telephone number, class schedule
, etc. - are made available by the majority of users to
anybody else at the same institution, leaving such data accessible
to any subject able to obtain even temporary control
of an institution's single email address.
PRIVACY IMPLICATIONS
It would appear that the population of Facebook users we
have studied is, by and large, quite oblivious, unconcerned, or
just pragmatic about their personal privacy. Personal data
is generously provided and limiting privacy preferences are
sparingly used. Due to the variety and richness of personal
information disclosed in Facebook profiles, their visibility,
their public linkages to the members' real identities, and
the scope of the network, users may put themselves at risk
for a variety of attacks on their physical and online persona.
Some of these risks are common also in other online social
networks, while some are specific to the Facebook. In this
section we outline a number of different attacks and quantify
the number of users susceptible based on the data we
extracted. See Table 5 for an overview.
4.1
Stalking
Using the information available on profiles on the Facebook
a potential adversary (with an account at the same
academic institution) can determine the likely physical location
of the user for large portions of the day. Facebook
profiles include information about residence location, class
schedule, and location of last login. A student's life during
college is mostly dominated by class attendance. Therefore,
knowledge of both the residence and a few classes that the
student is currently attending would help a potential stalker
to determine the user's whereabouts. In the CMU population
860 profiles fall into our definition of this category (280
female, 580 male), in that they disclose both their current
residence and at least 2 classes they are attending. Since our
study was conducted outside of the semester (when many
students might have deleted class information from their
profiles) we speculate this number to be even higher during
the semester.
A much larger percentage of users is susceptible to a form
of cyber-stalking using the AOL instant messenger (AIM).
Unlike other messengers, AIM allows users to add "buddies"
to their list without knowledge of or confirmation from the
buddy being added. Once on the buddy list the adversary
can track when the user is online. In the CMU population
77.7% of all profiles list an AIM screen name for a total of
more than 3400 users.
4.2
Re-identification
Data re-identification typically deals with the linkage of
datasets without explicit identifiers such as name and address
to datasets with explicit identifiers through common
attributes [25].
Examples include the linkage of hospital
discharge data to voter registration lists, which allows one to re-identify
sensitive medical information [28].
4.2.1
Demographics re-identification
It has been shown previously that a large portion of the
US population can be re-identified using a combination of
5-digit ZIP code, gender, and date of birth [29]. The vast
majority of CMU users disclose both their full birthdate
(day and year) and gender on their profiles (88.8%). For
44.3% of users (total of 1676) the combination of birthdate
and gender is unique within CMU. In addition, 50.8% list
their current residence, for which ZIP codes can be easily
obtained. Overall, 45.8% of users list birthday, gender, and
current residence. An adversary with access to the CMU
section of the Facebook could therefore link a comparatively
large number of users to outside, de-identified data sources
such as e.g. hospital discharge data.
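A simple way to reproduce this kind of uniqueness count, assuming a hypothetical list of (birth date, gender) pairs parsed from the downloaded profiles; this is our illustration, not the authors' code.

    from collections import Counter

    def unique_fraction(records):
        # Share of records whose (birth date, gender) combination occurs
        # exactly once, i.e. is unique within the studied population.
        counts = Counter(records)
        return sum(1 for r in records if counts[r] == 1) / len(records)

    # Hypothetical records: (YYYY-MM-DD, gender) tuples.
    share = unique_fraction([("1984-03-02", "M"), ("1984-03-02", "M"),
                             ("1985-07-19", "F")])   # -> 1/3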
4.2.2
Face Re-Identification
In a related study we were able to correctly link facial
images from Friendster profiles without explicit identifiers
with images obtained from fully identified CMU web pages
using a commercial face recognizer [13]. The field of automatic
face recognition has advanced tremendously over
the last decade and is now offering a number of commercial
solutions which have been shown to perform well across a
wide range of imaging conditions [14, 17, 24]. As shown in
Section 3.4 a large number of profiles contain high quality
images. At CMU more than 2500 profiles fall in this category
15
. Potential de-identified data sources include other
social networking sites (e.g. Friendster) or dating sites (e.g.
Match.com) that typically host anonymous profiles.
4.2.3
Social Security Numbers and Identity Theft
An additional re-identification risk lies in making birthdate
, hometown, current residence, and current phone number
publicly available at the same time. This information
can be used to estimate a person's social security number
and exposes her to identity theft.
The first three digits of a social security number reveal
where that number was created (specifically, the digits are
determined by the ZIP code of the mailing address shown
on the application for a social security number). The next
two digits are group identifiers, which are assigned according
to a peculiar but predictable temporal order. The last four
digits are progressive serial numbers.
16
When a person's hometown is known, the window of the
first three digits of her SSN can be identified with probability
decreasing with the home state's populousness. When
that person's birthday is also known, and an attacker has
access to SSNs of other people with the same birthdate in
the same state as the target (for example obtained from the
SSN death index or from stolen SSNs), it is possible to pin
down a window of values in which the two middle digits
15
In fact, 90.8% of profiles have images, out of which 61%
are estimated to be of sufficient quality for re-identification.
16 See http://www.ssa.gov/foia/stateweb.html and http://policy.ssa.gov/poms.nsf/lnx/0100201030.
Table 5: Overview of the privacy risks and number of CMU profiles susceptible to it.
Risk                             # CMU Facebook Profiles   % CMU Facebook Profiles
Real-World Stalking (Female)     280                       15.7
Real-World Stalking (Male)       580                       21.2
Online Stalking                  3528                      77.7
Demographics Re-Identification   1676                      44.3
Face Re-Identification           2515 (estimated)          55.4
are likely to fall. The last four digits (often used in unprotected
logins and as passwords) can be retrieved through
social engineering. Since the vast majority of the Facebook
profiles we studied not only include birthday and hometown
information, but also current phone number and residence
(often used for verification purposes by financial institutions
and other credit agencies), users are exposing themselves to
substantial risks of identity theft.
4.3
Building a Digital Dossier
The privacy implications of revealing personal and sensitive
information (such as sexual orientation and political
views) may extend beyond their immediate impact, which
can be limited. Given the low and decreasing costs of storing
digital information, it is possible to continuously monitor
the evolution of the network and its users' profiles, thereby
building a digital dossier for its participants. College students
, even if currently not concerned about the visibility
of their personal information, may become so as they enter
sensitive and delicate jobs a few years from now - when the
data currently mined could still be available.
4.4
Fragile Privacy Protection
One might speculate that the perceived privacy protection
of making personal information available only to members
of a campus community may increase Facebook users' willingness
to reveal personal information. However, the mechanisms
protecting this social network can be circumvented.
Adding to this the recognition that users have little control
on the composition of their own networks (because often a
member's friend can introduce strangers into that member's
network), one may conclude that the personal information
users are revealing even on sites with access control and
managed search capabilities effectively becomes public data.
4.4.1
Fake Email Address
The Facebook verifies users as legitimate members of a
campus community by sending a confirmation email containing
a link with a seemingly randomly generated nine
digit code to the (campus) email address provided during
registration. Since the process of signing up and receiving
the confirmation email only takes minutes, an adversary simply
needs to gain access to the campus network for a very
short period of time. This can be achieved in a number of
well-known ways, e.g. by attempting to remotely access a
hacked or virus-infected machine on the network or physically
accessing a networked machine in e.g. the library, etc.
4.4.2
Manipulating Users
Social engineering is a well-known practice in computer
security to obtain confidential information by manipulating
legitimate users [22]. Implementation of this practice on the
Facebook is very simple: just ask to be added as someone's
friend. The surprisingly high success rate of this practice
was recently demonstrated by a Facebook user who, using
an automatic script, contacted 250,000 users of the Facebook
across the country and asked to be added as their
friend. According to [15], 75,000 users accepted: thirty percent
of Facebook users are willing to make all of their profile
information available to a random stranger and his network
of friends.
4.4.3
Advanced Search Features
While not directly linked to from the site, the Facebook
makes the advanced search page of any college available to
anyone in the network. Using this page various profile information
can be searched for, e.g. relationship status, phone
number, sexual preferences, political views and (college) residence
. By keeping track of the profile IDs returned in the
different searches a significant portion of the previously inaccessible
information can be reconstructed.
CONCLUSIONS
Online social networks are both vaster and looser than
their offline counterparts. It is possible for somebody's profile
to be connected to hundreds of peers directly, and thousands
of others through the network's ties. Many individuals
in a person's online extended network would hardly be defined
as actual friends by that person; in fact many may be
complete strangers. And yet, personal and often sensitive
information is freely and publicly provided.
In our study of more than 4,000 CMU users of the Facebook
we have quantified individuals' willingness to provide
large amounts of personal information in an online social
network, and we have shown how unconcerned its users appear
to privacy risks: while personal data is generously provided
, limiting privacy preferences are hardly used; only a
small number of members change the default privacy preferences
, which are set to maximize the visibility of users' profiles
. Based on the information they provide online, users
expose themselves to various physical and cyber risks, and
make it extremely easy for third parties to create digital
dossiers of their behavior.
These risks are not unique to the Facebook. However, the
Facebook's public linkages between an individual profile and
the real identity of its owner, and the Facebook's perceived
connection to a physical and ostensibly bounded community
(the campus), make Facebook users a particularly interesting
population for our research.
Our study quantifies patterns of information revelation
and infers usage of privacy settings from actual field data,
rather than from surveys or laboratory experiments. Still,
the relative importance of the different drivers influencing
Facebook users' information revelation behavior has to be
quantified. Our evidence is compatible with a number of
different hypotheses. In fact, many simultaneous factors are
likely to play a role. Some evidence is compatible with a
signalling hypothesis (see Section 3.3): users may be pragmatically
publishing personal information because the benefits
they expect from public disclosure surpass its perceived
costs. Yet, our evidence is also compatible with an interface
design explanation, such as the acceptance (and possibly ignorance
) of the default, permeable settings (see Section 3.5).
Peer pressure and herding behavior may also be influencing
factors, and so also myopic privacy attitudes (see [1]) and
the sense of protection offered by the (perceived) bounds of
a campus community. Clarifying the role of these different
factors is part of our continuing research agenda.
Acknowledgements
We would like to thank Anne Zimmerman and Bradley Malin
for first bringing the Facebook to our attention.
We
would also like to thank danah boyd, Lorrie Cranor, Julie
Downs, Steven Frank, Julia Gideon, Charis Kaskiris, Bart
Nabbe, Mike Shamos, Irina Shklovski, and four anonymous
referees for comments. This work was supported in part by
the Data Privacy Lab in the School of Computer Science
and by the Berkman Fund at Carnegie Mellon University.
REFERENCES
[1] A. Acquisti. Privacy in electronic commerce and the
economics of immediate gratification. In Proceedings
of the ACM Conference on Electronic Commerce (EC
'04), pages 21-29, 2004.
[2] B. Anderson. Imagined Communities: Reflections on
the Origin and Spread of Nationalism. Verso, London
and New York, revised edition, 1991.
[3] S. Arrison. Is Friendster the new TIA?
TechCentralStation, January 7, 2004.
[4] J. Black. The perils and promise of online schmoozing.
BusinessWeek Online, February 20, 2004.
[5] J. Brown. Six degrees to nowhere. Salon.com,
September 21, 1998.
[6] D. Cave. 16 to 25? Pentagon has your number, and
more. The New York Times, June 24, 2005.
[7] d. boyd. Reflections on friendster, trust and intimacy.
In Intimate (Ubiquitous) Computing Workshop Ubicomp
2003, October 12-15, Seattle, Washington,
USA, 2003.
[8] d. boyd. Friendster and publicly articulated social
networking. In Conference on Human Factors and
Computing Systems (CHI 2004), April 24-29, Vienna,
Austria, 2004.
[9] J. Donath and d. boyd. Public displays of connection.
BT Technology Journal, 22:71-82, 2004.
[10] S. Gerstein. Intimacy and privacy. In F. D. Schoeman,
editor, Philosophical Dimensions of Privacy: An
Anthology. Cambridge University Press, Cambridge,
UK, 1984.
[11] M. Granovetter. The strength of weak ties. American
Journal of Sociology, 78:1360-1380, 1973.
[12] M. Granovetter. The strength of weak ties: A network
theory revisited. Sociological Theory, 1:201-233, 1983.
[13] R. Gross. Re-identifying facial images. Technical
report, Carnegie Mellon University, Institute for
Software Research International, 2005. In preparation.
[14] R. Gross, J. Shi, and J. Cohn. Quo vadis face
recognition? In Third Workshop on Empirical
Evaluation Methods in Computer Vision, 2001.
[15] K. Jump. A new kind of fame. The Columbian
Missourian, September 1, 2005.
[16] A. Leonard. You are who you know. Salon.com, June
15, 2004.
[17] S. Li and A. Jain, editors. Handbook of Face
Recognition. Springer Verlag, 2005.
[18] H. Liu and P. Maes. Interestmap: Harvesting social
network profiles for recommendations. In Beyond
Personalization - IUI 2005, January 9, San Diego,
California, USA, 2005.
[19] W. Mackay. Triggers and barriers to customizing
software. In Proceedings of CHI'91, pages 153-160.
ACM Press, 1991.
[20] S. Milgram. The small world problem. Psychology
Today, 6:62-67, 1967.
[21] S. Milgram. The familiar stranger: An aspect of urban
anonymity. In S. Milgram, J. Sabini, and M. Silver,
editors, The Individual in a Social World: Essays and
Experiments. Addison-Wesley, Reading, MA, 1977.
[22] K. Mitnick, W. Simon, and S. Wozniak. The art of
deception: controlling the human element of security.
John Wiley & Sons, 2002.
[23] A. Newitz. Defenses lacking at social network sites.
SecurityFocus, December 31, 2003.
[24] P. Phillips, P. Flynn, T. Scruggs, K. Bowyer,
J. Chang, K. Hoffman, J. Marques, J. Min, and
J. Worek. Overview of the face recognition grand
challenge. In IEEE Conference on Computer Vision
and Pattern Recognition, June 20-25, San Diego,
California, USA, 2005.
[25] P. Samarati and L. Sweeney. Protecting privacy when
disclosing information: k-anonymity and its
enforcement through generalization and cell
suppression. Technical report, SRI International, 1998.
[26] I. Sege. Where everybody knows your name.
Boston.com, April 27, 2005.
[27] L. J. Strahilevitz. A social networks theory of privacy.
The Law School, University of Chicago, John M. Olin
Law & Economics Working Paper No. 230 (2D Series),
December 2004.
[28] L. Sweeney. k-Anonymity: a model for protecting
privacy. International Journal on Uncertainty,
Fuzziness and Knowledge-based Systems,
10(5):557-570, 2002.
[29] L. Sweeney. Uniqueness of simple demographics in the
U.S. population. Technical report, Carnegie Mellon
University, Laboratory for International Data Privacy,
2004.
[30] The Facebook. Privacy policy.
http://facebook.com/policy.php, August 2005.
[31] University Planning. Carnegie Mellon Factbook 2005.
Carnegie Mellon University, February 2005.
[32] D. Watts. Six Degrees: The Science of a Connected
Age. W.W.Norton & Company, 2003.
| privacy;social networking sites;information revelation;privacy risk;Online privacy;online social networking;online behavior;college;social network theory;facebook;stalking;re-identification;data visibility;privacy preference |
114 | Integrating the Document Object Model with Hyperlinks for Enhanced Topic Distillation and Information Extraction | Topic distillation is the process of finding authoritative Web pages and comprehensive "hubs" which reciprocally endorse each other and are relevant to a given query. Hyperlink-based topic distillation has been traditionally applied to a macroscopic Web model where documents are nodes in a directed graph and hyperlinks are edges. Macroscopic models miss valuable clues such as banners, navigation panels , and template-based inclusions, which are embedded in HTML pages using markup tags. Consequently, results of macroscopic distillation algorithms have been deteriorating in quality as Web pages are becoming more complex. We propose a uniform fine-grained model for the Web in which pages are represented by their tag trees (also called their Document Object Models or DOMs) and these DOM trees are interconnected by ordinary hyperlinks. Surprisingly, macroscopic distillation algorithms do not work in the fine-grained scenario. We present a new algorithm suitable for the fine-grained model. It can dis-aggregate hubs into coherent regions by segmenting their DOMtrees. Mutual endorsement between hubs and authorities involve these regions , rather than single nodes representing complete hubs. Anecdotes and measurements using a 28-query, 366000-document benchmark suite, used in earlier topic distillation research, reveal two benefits from the new algorithm: distillation quality improves, and a by-product of distillation is the ability to extract relevant snippets from hubs which are only partially relevant to the query. | Introduction
Kleinberg's Hyperlink Induced Topic Search (HITS) [14]
and the PageRank algorithm [3] underlying Google have
revolutionized ranking technology for Web search engines.
PageRank evaluates the "prestige score" of a page as roughly
proportional to the sum of prestige scores of pages citing it
(Note:
To view the HTML version using Netscape, add the
following line to your ~/.Xdefaults or ~/.Xresources file:
Netscape*documentFonts.charset*adobe-fontspecific: iso-8859-1
For printing use the PDF version, as browsers may not print the
mathematics properly.)
Copyright is held by author/owner.
WWW10, May 15, 2001, Hong Kong.
ACM1-58113-348-0/01/0005.
using hyperlinks. HITS also identifies collections of resource
links or "hubs" densely coupled to authoritative pages on a
topic. The model of the Web underlying these and related
systems is a directed graph with pages (HTML files) as
nodes and hyperlinks as edges.
Since those papers were published, the Web has been
evolving in fascinating ways, apart from just getting larger.
Web pages are changing from static files to dynamic views
generated from complex templates and backing semi-structured
databases. A variety of hypertext-specific idioms such
as navigation panels, advertisement banners, link exchanges,
and Web-rings, have been emerging.
There is also a migration of Web content from syntactic
HTML markups towards richly tagged, semi-structured
XML documents (http://www.w3.org/XML/) interconnected
at the XML element level by semantically rich links (see,
e.g., the XLink proposal at http://www.w3.org/TR/xlink/).
These refinements are welcome steps to implementing what
Berners-Lee and others call the semantic Web (http://www.
w3.org/1999/04/13-tbl.html), but result in document, file,
and site boundaries losing their traditional significance.
Continual experiments performed by several researchers
[2, 15] reveal a steady deterioration of distillation quality
through the last few years. In our experience, poor results
are frequently traced to the following causes:
Links have become more frequent and "noisy" from
the perspective of the query, such as in banners, navigation
panels, and advertisements. Noisy links do not
carry human editorial endorsement, a basic assumption
in topic distillation.
Hubs may be "mixed", meaning only a portion of
the hub may be relevant to the query. Macroscopic
distillation algorithms treat whole pages as atomic, indivisible
nodes with no internal structure. This leads
to false reinforcements and resulting contamination of
the query responses.
Thanks in part to the visibility of Google, content creators
are well aware of hyperlink-based ranking technology.
One reaction has been the proliferation of nepotistic "clique
attacks"--a collection of sites linking to each other without
semantic reason, e.g. http://www.411fun.com, http://
www.411fashion.com and http://www.411-loans.com. (Figures
8 and 9 provide some examples.) Some examples look
suspiciously like a conscious attempt to spam search engines
that use link analysis. Interestingly, in most cases, the visual
presentation clearly marks noisy links which surfers rarely
follow, but macroscopic algorithms are unable to exploit it.
<html>
<head>
<title>Portals</title>
</head>
<body>
<ul>
<li>
<a href="...">Yahoo</a>
</li>
<li>
<a href="...">Lycos</a>
</li>
</ul>
</body>
</html>
Figure 1: In the fine-grained model, DOMs for individual pages are trees interconnected by ordinary hyperlinks. Each triangle is the DOM tree corresponding to one HTML page. Green boxes represent text.
Many had hoped that HITS-like algorithms would put
an end to spamming, but clearly the situation is more like
an ongoing arms-race. Google combines link-based ranking
with page text and anchor text in undisclosed ways, and
keeps tweaking the combination, but suffers an occasional
embarrassment.¹
Distillation has always been observed to work well for
"broad" topics (for which there exist well-connected relevant
Web subgraphs and "pure" hubs) and not too well for
"narrow" topics, because w.r.t. narrow topics most hubs are
mixed and have too many irrelevant links. Mixed hubs and
the arbitrariness of page boundaries have been known to
produce glitches in the Clever system [6]: there has been
no reliable way to classify hubs as mixed or pure.
If a
fine-grained model can suitably dis-aggregate mixed hubs,
distillation should become applicable to narrow queries too.
Yet another motivation for the fine-grained model comes
from the proliferation of mobile clients such as cell-phones
and PDAs with small or no screens. Even on a conventional
Web browser, scrolling through search results for promising
responses, then scrolling through those responses to satisfy
a specific information need are tedious steps. The tedium is
worse on mobile clients. Search engines that need to serve
mobile clients must be able to pinpoint narrow sections of
pages and sites that address a specific information need, and
limit the amount of extra matter sent back to the client [4].
1.1
Our contributions
We initiate a study of topic distillation with a fine-grained
model of the Web, built using the Document Object Model
(DOM) of HTML pages. The DOM can model reasonably
clean HTML, support XML documents that adhere to rigid
schema definitions, and embed free text in a natural way.
In our model, HTML pages are represented by their DOMs
and these DOM trees are interconnected by ordinary hyperlinks
(figure 1). The sometimes artificial distinction between
Web-level, site-level, page-level, and intra-page structures
is thereby blurred.
Surprisingly, macroscopic distillation
algorithms perform poorly in the fine-grained setting; we
demonstrate this using analysis and anecdotes. Our main
technical contribution is a new fine-grained distillation
1 http://searchenginewatch.com/sereport/99/11-google.html
(local copy GoogleDrEvil.html) and http://searchenginewatch.com/
sereport/01/02-bush.html (local copy GoogleBush.html) provide some
samples.
Figure 2: This work in the context of HITS and related research.
algorithm which can identify mixed hubs and segment their corresponding DOM trees into maximal subtrees which are
"coherent" w.r.t. the query, i.e., each is almost completely
relevant or completely irrelevant. The segmentation algorithm
uses the Minimum Description Length (MDL) principle
[16] from Information Theory [9]. Rather than collapse
these diverse hub subtrees into one node, the new algorithm
allocates a node for each subtree. This intermediate
level of detail, between the macroscopic and the fine-grained
model, is essential to the success of our algorithm.
We
report on experiments with 28 queries involving over 366000
Web pages.
This benchmark has been used in previous
research on resource compilation and topic distillation [5,
2, 6]. Our experience is that the fine-grained model and
algorithm significantly improve the quality of distillation,
and are capable of extracting DOM subtrees from mixed
hubs that are relevant to the query.
We note that in this study we have carefully and deliberately
isolated the model from possible influences of text
analysis. By controlling our experimental environment to
not use text, we push HITS-like ideas to the limit, evaluating
exactly the value added by information present in DOM
structures. In ongoing work, we have added textual support
to our framework and obtained even better results [7].
1.2
Benefits and applications
Apart from offering a more faithful model of Web content,
our approach enables solutions to the following problems.
Better topic distillation: We show less tendency for topic
drift and contamination when the fine-grained model is used.
Web search using devices with small or no screen:
The ability to identify page snippets relevant to a query is
attractive to search services suitable for mobile clients.
Focused crawling: Identification of relevant DOM subtrees
can be used to better guide a focused crawler's link
expansion [8].
Annotation extraction: Experiments with a previous macroscopic
distillation algorithm (Clever [6]) revealed that volunteers
preferred Clever to Yahoo! only when Yahoo!'s manual
site annotations were removed in a blind test. Our work may
improve on current techniques for automatic annotation extraction
[1] by first collecting candidate hub page fragments
and then subjecting the text therein to further segmentation
techniques.
Data preparation for linguistic analysis: Information
extraction is a natural next step after resource discovery. It
is easier to build extractors based on statistical and linguistic
models if the domain or subject matter of the input documents
is suitably segmented [12], as is effected by our hub
subtree extraction technique, which is a natural successor to
resource discovery, and a precursor to linguistic analysis.
1.3 Outline of the paper
In Section 2.1 we review HITS and related algorithms. This section can be skipped by a reader who is familiar with HITS-related literature. In Section 2.2 we illustrate some recent and growing threats to the continued success of macroscopic distillation algorithms. We show why the fine-grained model does not work with traditional HITS-like approaches in Section 3, and then propose our framework in Section 4. We report on experimental results in Section 5 and conclude in Section 6 with some comments on ongoing and future work.
Preliminaries
We review the HITS family of algorithms and discuss how
they were continually enhanced to address evolving Web
content.
2.1
Review of HITS and related systems
The HITS algorithm [14] started with a query q which was sent to a text search engine. The returned set of pages Rq was fetched from the Web, together with any pages having a link to any page in Rq, as well as any page cited in some page of Rq using a hyperlink. Links that connected pages on the same Web server (based on canonical host name match) were dropped from consideration because they were often seen to serve only a navigational purpose, or were "nepotistic" in nature.
Suppose the resulting graph is Gq = (Vq, Eq). We will drop the subscript q where clear from context. Each node v in V is assigned two scores: the hub score h(v) and the authority score a(v), initialized to any positive number. Next the HITS algorithm alternately updates a and h as follows: a(v) = Σ_{(u,v)∈E} h(u) and h(u) = Σ_{(u,v)∈E} a(v), making sure after each iteration to scale a and h so that Σ_v h(v) = Σ_v a(v) = 1, until the ranking of nodes by a and h stabilize (see figure 3).
If E is represented in the adjacency matrix format (i.e., E[i, j] = 1 if there is an edge (i, j) and 0 otherwise) then the above operation can be written simply as a = E^T h and h = Ea, interspersed with scaling to set |h|_1 = |a|_1 = 1. The HITS algorithm effectively uses power iterations [11] to find a, the principal eigenvector of E^T E; and h, the principal eigenvector of E E^T. Pages with large a are popular or authoritative sources of information; pages with large h are good collections of links.
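As a concrete illustration (ours, not from the original HITS paper), the following Python sketch runs these power iterations on a hypothetical edge list; numpy is assumed to be available, and the L1 scaling matches the description above.

    import numpy as np

    def hits(edges, n, iters=50):
        # edges: list of (u, v) pairs meaning hub u endorses authority v.
        # Returns (authority, hub) score vectors, each scaled to sum to 1.
        E = np.zeros((n, n))
        for u, v in edges:
            E[u, v] = 1.0
        a = np.ones(n)
        h = np.ones(n)
        for _ in range(iters):
            a = E.T @ h          # a(v) = sum of h(u) over edges (u, v)
            h = E @ a            # h(u) = sum of a(v) over edges (u, v)
            a /= a.sum()         # L1 scaling, as in the text
            h /= h.sum()
        return a, h

    # Toy example: node 0 is a hub pointing to authorities 1 and 2.
    authority, hub = hits([(0, 1), (0, 2)], n=3)

On this toy graph the two authorities end up with equal scores, which is exactly the sibling reinforcement discussed next.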
A key feature of HITS is how endorsement or popularity diffuses to siblings. If (u, v) and (u, w) are edges and somehow a(v) becomes large, then in the next iteration h(u) will increase, and in the following iteration, a(w) will increase. We will describe this as "v's authority diffuses to w through the hub u." This is how sibling nodes reinforce each other's authority scores. We will revisit this property later in Section 3.
Google has no notion of hubs. Roughly speaking, each page v has a single "prestige" score p(v) called its PageRank [3], which is defined as proportional to Σ_{(u,v)∈E} p(u), the sum of prestige scores of pages u that cite v. Some conjecture that the prestige model is adequate for the living Web, because good hubs readily acquire high prestige as well. Our work establishes the value of a bipartite model like HITS, and indeed, the value of an asymmetric model where hubs are analyzed quite differently from authorities. Therefore we will not discuss prestige-based models any further.
Figure 3: (a) HITS, a macroscopic topic distillation algorithm with uniform edge weights; (b) The B&H algorithm, apart from using non-uniform edge weights, discards pages in the expanded set which are too dissimilar to the rootset pages to prevent topic drift. Documents are represented as vectors with each component representing one token or word [17].
2.2 The impact of the evolving Web on hyperlink analysis
Elegant as the HITS model is, it does not adequately capture
various idioms of Web content. We discuss here a slew of
follow-up work that sought to address these issues.
Kleinberg dropped links within the same Web-site from
consideration because these were often found to be navigational
, "nepotistic" and noisy. Shortly after HITS was published
, Bharat and Henzinger (B&H [2]) found that nepotism
was not limited to same-site links. In many trials with
HITS, they found two distinct sites s1 and s2, where s1 hosted a number of pages u linking to a page v on s2, driving up a(v) beyond what may be considered fair. B&H proposed a simple and effective fix for such "site-pair" nepotism: if k pages on s1 point to v, let the weight of each of these links be 1/k, so that they add up to one, assuming a site (not a page) is worth one unit of voting power.
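A minimal sketch of this weighting rule (our illustration, with hypothetical URL pairs), using Python's standard urllib to recover the source host name:

    from collections import defaultdict
    from urllib.parse import urlparse

    def site_pair_weights(edges):
        # B&H-style weights: if k pages on one site point to the same target
        # page, each of those links gets weight 1/k (one vote per site).
        # edges: list of (source_url, target_url) pairs.
        count = defaultdict(int)
        for src, dst in edges:
            count[(urlparse(src).netloc, dst)] += 1
        return {(src, dst): 1.0 / count[(urlparse(src).netloc, dst)]
                for src, dst in edges}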
Later work in the Clever system [6] used a small edge
weight for same-site links and a larger weight for other links,
but these weights were tuned empirically by evaluating the
results on specific queries.
Another issue with HITS were "mixed hubs" or pages
u that included a collection of links of which only a subset
was relevant to a query. Because HITS modeled
u as a single
node with a single
h score, high authority scores could diffuse
from relevant links to less relevant links. E.g., responses to
the query movie awards sometimes drifted into the neighboring
, more densely linked domain of movie companies.
Later versions of Clever tried to address the issue in
two ways. First, links within a fixed number of tokens of
query terms were assigned a large edge weight (the width
of the "activation window" was tuned by trial-and-error).
Second, hubs which were "too long" were segmented at a few
prominent boundaries (such as <UL> or <HR>) into "pagelets"
with their own scores. The boundaries were chosen using a
static set of rules depending on the markup tags on those
pages alone.
To avoid drift, B&H also computed a vector space representation
[17] of documents in the response set (shown in
Figure 3) and then dropped pages that were judged to be
"outliers" using a suitable threshold of (cosine) similarity to
the vector space centroid. B&H is effective for improving
precision, but may reduce recall if mixed hubs are pruned
because of small similarity to the root set centroid. This
Figure 4: Clever uses a slightly more detailed page model than HITS. Hyperlinks near query terms are given heavier weights. Such links are shown as thicker lines.
may in turn distort hub and authority scores and hence the
desired ranking. Losing a few hubs may not be a problem
for broad queries but could be serious for narrower queries.
As resource discovery and topic distillation become more
commonplace, we believe the quest will be for every additional
resource than can possibly be harvested, not merely
the ones that "leap out at the surfer." Our goal should
therefore be to extract relevant links and annotations even
from pages which are partially or largely irrelevant.
Generalizing hyperlinks to interconnected DOMs
HTML documents have always embedded many sources of
information (other that text) which have been largely ignored
in previous distillation research. Markups are one
such source. From a well-formed HTML document, it ought
to be possible to extract a tree structure called the Document
Object Model (DOM). In real life HTML is rarely
well formed, but using a few simple patches, it is possible
to generate reasonably accurate DOMs. For XML sources
adhering to a published DTD, a DOM is precise and well defined.
For simplicity, we shall work with a greatly pared-down version of the DOM for HTML pages. We will discard all text, and only retain those paths in the DOM tree that lead from the root to a leaf which is an <A...> element with an HREF leading to another page.
Hyperlinks always originate from leaf DOM elements, typically deep in the DOM tree of the source document. If same-site links are ignored, very few macro-level hyperlinks target an internal node in a DOM tree (using the "#" modifier in the URL). To simplify our model (and experiments) we will assume that the target of a hyperlink is always the root node of a DOM tree. In our experiments we found very
few URLs to be otherwise.
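To make the pared-down DOM concrete, here is a small Python sketch of our own that keeps only root-to-leaf paths ending in an <A HREF=...> element, using the standard html.parser module; real-life pages would need more robust tag-soup repair than the crude fix shown.

    from html.parser import HTMLParser

    class HrefPaths(HTMLParser):
        # Collects root-to-leaf tag paths that end in <a href=...>,
        # discarding all text, as in the pared-down DOM described above.
        def __init__(self):
            super().__init__()
            self.stack, self.paths = [], []

        def handle_starttag(self, tag, attrs):
            self.stack.append(tag)
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.paths.append((tuple(self.stack), href))

        def handle_endtag(self, tag):
            if tag in self.stack:   # crude repair for unclosed tags
                idx = len(self.stack) - 1 - self.stack[::-1].index(tag)
                del self.stack[idx:]

    p = HrefPaths()
    p.feed('<html><body><ul><li><a href="http://x.example">X</a></li></ul></body></html>')
    # p.paths == [(('html', 'body', 'ul', 'li', 'a'), 'http://x.example')]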
A first-cut approach (which one may call MicroHITS )
would be to use the fine-grained graph directly in the HITS
algorithm. One may even generalize "same-site" to "same-DOM"
and use B&H-like edge-weights. This approach turns
out to work rather poorly.
To appreciate why, consider two simple example graphs shown in Figure 5 and their associated eigenvectors. The first graph is for the macro setting. Expanding out a ← E^T E a we get

    a(2) ← a(2) + a(3)   and   a(3) ← a(2) + a(3),

which demonstrates the mutual reinforcement. In the second example nodes numbered 3 and 4 are part of one DOM tree. This time, we get

    a(2) ← 2a(2) + a(4)   and   a(4) ← a(2) + a(4),

but there is no coupling between a(2) and a(5), which we would expect at the macroscopic level. Node 4 (marked red) effectively blocks the authority from diffusing between nodes 2 and 5.
Figure 5: A straight-forward application of HITS-like algorithms to a DOM graph may result in some internal DOM nodes blocking the diffusion of authority across siblings.
One may hope that bigger DOM trees and multiple paths
to authorities might alleviate the problem, but the above
example really depicts a basic problem. The success of HITS
depends critically on reinforcement among bipartite cores
(see figure 5) which may be destroyed by the introduction
of fine-grained nodes.
Proposed model and algorithm
At this point the dilemma is clear: by collapsing hubs into
one node, macroscopic distillation algorithms lose valuable
detail, but the more faithful fine-grained model prevents
bipartite reinforcement.
In this section we present our new model and distillation
algorithm that resolves the dilemma. Informally, our model
of hub generation enables our algorithm to find a cut or
frontier across each hub's DOM tree. Subtrees attached
to these cuts are made individual nodes in the distillation
graph. Thus the hub score of the entire page is dis-aggregated
at this intermediate level. The frontiers are not computed
one time as a function of the page alone, neither do they
remain unchanged during the HITS iterations. The frontiers
are determined by the current estimates of the hub scores of
the leaf HREF nodes.
We will first describe the hub segmentation technique
and then use it in a modified iterative distillation algorithm.
4.1 Scoring internal micro-hub nodes
Macroscopic distillation algorithms rank and report complete hub pages, even if they are only partially relevant. In this section we address the problem of estimating the hub score of each DOM node in the fine-grained graph, given an estimate of authority scores. Because inter-page hyperlinks originate in leaf DOM nodes and target root nodes of DOM trees, we will also assume that only those DOM nodes that are document roots can have an authority score.
At the end of the h ← Ea substep of MicroHITS, leaf DOM nodes get a hub score. Because leaf nodes point to
exactly one page via an HREF, the hub score is exactly the
authority score of the target page. Macroscopic distillation
algorithms in effect aggregate all the leaf hub scores for a
page into one hub score for the entire page. Reporting leaf
hub scores in descending order would be useless, because
they would simply follow the authority ranking and fail to
identify good hub aggregates.
Instead of the total hub score, one may consider the
density of hub scores in a subtree, which may be defined as
the total hub score in the subtree divided by the number of
HREF leaves. The maximum density will be achieved by the
leaf node that links to the best authority. In our experience
small subtrees with small number of leaves dominate the
top ranks, again under-aggregating hub scores and pitting
ancestor scores against descendant scores.
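The tension can be seen directly with a small sketch (ours, over a hypothetical dict-based DOM tree) that computes both the aggregate hub score and the density for every subtree:

    def subtree_hub_stats(children, leaf_hub, root):
        # children: dict node -> list of child nodes (the DOM tree).
        # leaf_hub: dict HREF leaf -> hub score (authority of its target page).
        # Returns dict node -> (total hub score, number of HREF leaves, density).
        stats = {}
        def walk(v):
            if v in leaf_hub:
                total, count = leaf_hub[v], 1
            else:
                total = count = 0
                for c in children.get(v, []):
                    t, n, _ = walk(c)
                    total, count = total + t, count + n
            stats[v] = (total, count, total / count if count else 0.0)
            return stats[v]
        walk(root)
        return stats

    # Toy tree: a <body> whose <ul> holds two links of unequal authority.
    stats = subtree_hub_stats({"body": ["ul"], "ul": ["a1", "a2"]},
                              {"a1": 0.7, "a2": 0.1}, "body")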
4.1.1
A generative model for hubs
To help us find suitable frontiers along which we can aggregate
hub scores, we propose the following generative model
for hubs.
Imagine that the Web has stopped changing and with
respect to a fixed query, all Web pages have been manually
rated for their worth as hubs. From these hub scores, one
may estimate that the hub scores have been generated from a distribution Θ0. (E.g., Θ0 may represent an exponential distribution with mean 0.005.) If the author of a hub page
sampled URLs at random to link to, the distribution of hub
scores at the leaves of the page would approach the global
distribution provided enough samples were taken.
However, authors differ in their choice of URLs. Hub
authors are not aware of all URLs relevant to a given query
or their relative authority; otherwise all hubs authored on
a topic would be complete and identical, and therefore all
but one would be pointless to author. (Here we momentarily
ignore the value added by annotations and commentaries on
hub pages.)
Therefore, the distribution of hub scores for pages composed by a specific author will be different from Θ0. (E.g., the author's personal average of hub scores may be 0.002,
distributed exponentially.) Moreover, the authors of mixed
hubs deliberately choose to dedicate not the entire page, but
only a fragment or subtree of it, to URLs that are relevant
to the given query. (As an extreme case a subtree could be
a single HREF.)
We can regard the hub generation process as a progressive
specialization of the hub score distribution starting from
the global distribution. For simplicity, assume all document
roots are attached to a "super-root" which corresponds to the global distribution Θ0. As the author works down the DOM tree, "corrections" are applied to the score distribution
at nodes on the path.
At some suitable depth, the author fixes the score distribution
and generates links to pages so that hub scores
follow that distribution. This does not mean that there are
no interesting DOM subtrees below this depth. The model merely posits that up to some depth, DOM structure is indicative
of systematic choices of score distributions, whereas
beyond that depth variation is statistical.
Figure 6: Our fine-grained model of Web linkage which unifies
hyperlinks and DOM structure.
4.1.2 Discovering DOM frontiers from generated hubs
During topic distillation we observe pages which are the
outcome of the generative process described above, and our
goal is to discover the "best" frontier at which the score
distributions were likely to have been fixed.
A balancing act is involved here: one may choose a large and redundant frontier near the leaves and model the many small, homogeneous subtrees (each with a different distribution $\Theta_w$) attached to that frontier accurately, or one may choose a short frontier near the root with a few subtrees which are harder to model because they contain diverse hub scores. The balancing act requires a common currency to compare the cost of the frontier with the cost of modeling hub score data beneath the frontier.
This is a standard problem in segmentation, clustering, and model estimation. A particularly successful approach to optimizing the trade-off is to use the Minimum Description Length (MDL) principle [16]. MDL provides a recipe for bringing the cost of model corrections into the same units as the cost of representing data w.r.t. a model, and postulates that "learning" is equivalent to minimizing the sum total of model and data encoding costs.
Data encoding cost: First we consider the cost of encoding all the h-values at the leaves of a subtree rooted at node w. Specifically, let the distribution associated with w be $\Theta_w$. The set of HREF leaf nodes in the subtree rooted at node w is denoted $L_w$, and the set of hub scores at these leaves is denoted $H_w$. As part of the solution we will need to evaluate the number of bits needed to encode the h-values in $H_w$ using the model $\Theta_w$. There are efficient codes which can achieve a data encoding length close to Shannon's entropy-based lower bound [9] of

$\sum_{h \in H_w} -\log \Pr_{\Theta_w}(h)$ bits,   (1)

where $\Pr_{\Theta_w}(h)$ is the probability of hub score h w.r.t. a distribution represented by $\Theta_w$. (E.g., $\Theta_w$ may include the mean and variance of a normal distribution.) We will use this lower bound as an approximation to our data encoding cost. (This would work if the h-values followed a discrete probability distribution, which is not the case with hub scores. We will come back to this issue in Section 4.2.)
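As a concrete illustration of equation (1) (a minimal sketch; the function name and the choice of an exponential density are assumptions, not taken from the paper), the lower bound is just the negative log-likelihood of the leaf hub scores, measured in bits:

    import math

    def data_encoding_cost_bits(leaf_scores, mean):
        # Eq. (1): sum of -log2 Pr(h), here under an assumed exponential
        # model with density f(h) = (1/mean) * exp(-h/mean).
        cost = 0.0
        for h in leaf_scores:
            density = (1.0 / mean) * math.exp(-h / mean)
            cost += -math.log2(density)
        return cost

    # Leaves of a hypothetical subtree whose scores hover around 0.005:
    print(data_encoding_cost_bits([0.004, 0.006, 0.005], mean=0.005))

Because a continuous density can exceed 1, the "bit count" above can even come out negative; this is precisely the continuity issue deferred to Section 4.2, where the scores are discretized first.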
Model encoding cost: Next we consider the model encoding cost. Consider node v in the DOM tree. We will assume that $\Theta_0$ is known to all, and use the path from the global root to v to inductively encode each node w.r.t. its parent. Suppose we want to specialize the distribution $\Theta_v$ of some v away from $\Theta_u$, the distribution of its parent u. The cost for specifying this change is given by the well-known Kullback-Leibler (KL) distance [9] $KL(\Theta_u; \Theta_v)$, expressed as

$KL(\Theta_u; \Theta_v) = \sum_x \Pr_{\Theta_u}(x) \log \frac{\Pr_{\Theta_u}(x)}{\Pr_{\Theta_v}(x)}$.   (2)

Intuitively, this is the cost of encoding the distribution $\Theta_v$ w.r.t. a reference distribution $\Theta_u$. E.g., if X is a binary random variable and its probabilities of being zero and one are (.2, .8) under $\Theta_1$ and (.4, .6) under $\Theta_2$, then $KL(\Theta_2; \Theta_1) = .4 \log(.4/.2) + .6 \log(.6/.8)$. Unlike in the case of entropy, the sum can be taken to an integral in the limit for a continuous variable x. Clearly for $\Theta_u = \Theta_v$ the KL distance is zero; it can also be shown that this is a necessary condition, and that the KL distance is asymmetric in general but always non-negative.
If $\Theta_u$ is specialized to $\Theta_v$, and $\Theta_v$ is specialized to $\Theta_w$, the cost is additive, i.e., $KL(\Theta_u; \Theta_v) + KL(\Theta_v; \Theta_w)$. We will denote the cost of such a path as $KL(\Theta_u; \Theta_v; \Theta_w)$. Moreover, the model encoding cost of v starting from the global root model will be denoted $KL(\Theta_0; \ldots; \Theta_v)$.
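For instance (a minimal sketch with assumed function and variable names), the discrete KL distance of equation (2) and the binary example above can be computed as:

    import math

    def kl_divergence(p, q):
        # KL(p; q) = sum_x p(x) * log2(p(x) / q(x)), for distributions given
        # as dicts over the same outcomes, with q(x) > 0 wherever p(x) > 0.
        return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

    theta1 = {0: 0.2, 1: 0.8}   # reference distribution
    theta2 = {0: 0.4, 1: 0.6}   # distribution being encoded
    print(kl_divergence(theta2, theta1))  # .4*log(.4/.2) + .6*log(.6/.8), about 0.15 bits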
Combined optimization problem: Given the model $\Theta_u$ at the parent node u and the observed data $H_v$, we should choose $\Theta_v$ so as to minimize the sum of the KL distance and the data encoding cost:

$KL(\Theta_v; \Theta_u) + \sum_{h \in H_v} -\log \Pr_{\Theta_v}(h)$.   (3)

If $\Theta_v$ is expressed parametrically, this will involve an optimization over those parameters.
With the above set-up, we are looking for a cut or frontier F across the tree, and for each $v \in F$ a $\Theta_v$, such that

$\sum_{v \in F} \Big( KL(\Theta_0; \ldots; \Theta_v) + \sum_{h \in H_v} -\log \Pr_{\Theta_v}(h) \Big)$   (4)

is minimized. The first part expresses the total model encoding cost of all nodes v on the frontier F, starting from the global root distribution. The second part corresponds to the data encoding cost for the set of hub scores $H_v$ at the leaves of the subtrees rooted at the nodes v. Figure 6 illustrates the two costs.
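In code, the objective of equation (4) is simply a sum over the frontier (a minimal sketch; model_path_cost and data_cost are assumed helpers supplied by the caller, computing $KL(\Theta_0;\ldots;\Theta_v)$ and the bound of equation (1) respectively):

    def frontier_cost(frontier, model_path_cost, data_cost):
        # Total description length of Eq. (4): model corrections along the
        # root-to-node path plus data encoding at the leaves, summed over
        # all frontier nodes.
        return sum(model_path_cost(v) + data_cost(v) for v in frontier)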
4.2 Practical considerations
The formulation above is impractical for a number of reasons. There is a reduction from the knapsack problem to the frontier-finding problem. Dynamic programming can be used to give close approximations [13, 18], but with tens of thousands of macro-level pages, each with hundreds of DOM nodes, something even simpler is needed. We describe the simplifications we had to make to control the complexity of our algorithm.
We use the obvious greedy expansion strategy. We initialize our frontier with the global root and keep picking a node u from the frontier to see if expanding it to its immediate children {v} will result in a reduction in code length; if so, we replace u by its children, and continue until no further improvement is possible. We compare two costs locally at each u:
The cost of encoding all the data in $H_u$ with respect to the model $\Theta_u$.
The cost of expanding u to its children, i.e., $\sum_v KL(\Theta_u; \Theta_v)$, plus the cost of encoding the subtrees $H_v$ with respect to $\Theta_v$.
If the latter cost is less, we expand u; otherwise, we prune it, meaning that u becomes a frontier node.
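A minimal sketch of this greedy test (the data structures and helper names are assumptions; the paper does not prescribe an implementation) could look like the following, where each DOM node exposes a children list and data_cost / model_cost compute the two quantities above:

    def grow_frontier(root, data_cost, model_cost):
        # data_cost(u): bits to encode all leaf hub scores under u with u's model.
        # model_cost(u, v): KL(Theta_u; Theta_v) for specializing child v from parent u.
        frontier, result = [root], []
        while frontier:
            u = frontier.pop()
            cost_prune = data_cost(u)
            cost_expand = sum(model_cost(u, v) + data_cost(v) for v in u.children)
            if u.children and cost_expand < cost_prune:
                frontier.extend(u.children)   # expand u: push the frontier down
            else:
                result.append(u)              # prune: u becomes a frontier node
        return result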
Another issue is with optimizing the model $\Theta_v$. Usually, closed-form solutions are rare and numerical optimization must be resorted to; again impractical in our setting. In practice, if $H_v$ is moderately large, the data encoding cost tends to be larger than the model cost. In such cases, a simple approximation which works quite well is to first minimize the data encoding cost for $H_v$ by picking parameter values for $\Theta_v$ that maximize the probability of the observed data (the "maximum likelihood" or ML parameters), thus fixing $\Theta_v$, and then to evaluate $KL(\Theta_u; \Theta_v)$.
(As an example, if a coin tossed n times turns up heads k times, the ML parameter for the bias is simply k/n, but if a uniform $\Theta_u = U(0, 1)$ is chosen, the mean of $\Theta_v$ shifts slightly to (k + 1)/(n + 2), which is a negligible change for moderately large k and n.)
Non-parametric evaluation of the KL distance is complicated, and often entails density estimates. We experimented with two parametric distributions: the Gaussian and exponential distributions, for which the KL distance has closed-form expressions. We finally picked the exponential distribution because it fit the observed hub score distribution more closely.
If $\Theta$ represents an exponential distribution with mean $\mu$ and probability density $f(x) = (1/\mu)\exp(-x/\mu)$, then

$KL(\Theta_1; \Theta_2) = \log\frac{\mu_2}{\mu_1} + \frac{\mu_1}{\mu_2} - 1$,   (5)

where $\mu_i$ corresponds to $\Theta_i$ ($i = 1, 2$).
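Under the exponential model the ML parameter is just the sample mean of the leaf scores, so fitting $\Theta_v$ and charging the model correction reduces to a few lines (a sketch with assumed function names, following the ML-then-KL approximation described above):

    import math

    def kl_exponential(mean1, mean2):
        # Closed-form KL distance of Eq. (5) between exponential densities
        # with means mean1 and mean2, i.e. KL(Theta_1; Theta_2).
        return math.log(mean2 / mean1) + mean1 / mean2 - 1.0

    def fit_and_model_cost(parent_mean, leaf_scores):
        # Fix Theta_v at its ML parameter (the sample mean of the leaf hub
        # scores), then evaluate the correction cost KL(Theta_u; Theta_v).
        ml_mean = sum(leaf_scores) / len(leaf_scores)
        return ml_mean, kl_exponential(parent_mean, ml_mean)

    print(fit_and_model_cost(0.005, [0.001, 0.002, 0.003]))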
The next issue is how to measure data encoding cost for continuous variables. There is a notion of the relative entropy of a continuous distribution which generalizes discrete entropy, but the relative entropy can be negative and is useful primarily for comparing the information content in two signal sources. Therefore we need to discretize the hub scores.
A common approach to discretizing real values is to scale the smallest value to one, in effect allocating $\log(h_{\max}/h_{\min})$ bits per value. This poses a problem in our case. Consider the larger graph in figure 5. If h is initialized to $(1, 1, 1, 1, 1)^T$, then after the first few multiplications by $EE^T$, which represents the linear transformation

$(h(1), \ldots, h(5))^T \mapsto (h(1) + h(3),\ 0,\ h(1) + 2h(3),\ h(4),\ 0)^T$,

we get $(2, 0, 3, 1, 0)^T$, $(5, 0, 8, 1, 0)^T$, $(13, 0, 21, 1, 0)^T$, and $(34, 0, 55, 1, 0)^T$. Even if we disregard the zeroes, the ratio of the largest to the smallest positive component of h grows without bound. As scaling is employed to prevent overflow, h(4) decays towards zero. This makes the $\log(h_{\max}/h_{\min})$ strategy useless.
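The growth is easy to reproduce numerically (a small sketch of the linear map above, not code from the paper):

    import numpy as np

    # The map h -> (h1+h3, 0, h1+2*h3, h4, 0) written as a matrix, standing in
    # for E E^T of the example graph in figure 5.
    M = np.array([[1, 0, 1, 0, 0],
                  [0, 0, 0, 0, 0],
                  [1, 0, 2, 0, 0],
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0]], dtype=float)

    h = np.ones(5)
    for _ in range(4):
        h = M @ h
        positive = h[h > 0]
        print(h, positive.max() / positive.min())  # ratio grows: 3, 8, 21, 55, ...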
A reasonable compromise is possible by noting that the user is not interested in the precision of all the hub scores. E.g., reporting the top fraction of positive hub scores to within a small multiplicative error is quite enough; we used a fraction of 0.8 and an error of 0.05 in our experiments.
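One possible realization of this compromise (an assumed reading of the above; the function name, the log-scale bucketing, and the treatment of the remaining scores are all assumptions) is to keep only the top fraction of positive scores and quantize them on a multiplicative grid:

    import math

    def discretize_hub_scores(scores, top_fraction=0.8, rel_error=0.05):
        # Keep the top `top_fraction` of positive scores and map each to a
        # bucket index on a (1 + rel_error) geometric grid below the maximum;
        # everything else would be reported as zero.
        positive = sorted((s for s in scores if s > 0), reverse=True)
        keep = positive[: max(1, int(top_fraction * len(positive)))]
        h_max = keep[0]
        step = math.log(1.0 + rel_error)
        return [int(math.floor(math.log(h_max / s) / step)) for s in keep]

    print(discretize_hub_scores([0.9, 0.5, 0.25, 0.1, 0.0, 0.05]))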
4.3 Distillation using segmented hubs
In this section we will embed the segmentation algorithm discussed in the previous section into the edge-weighted B&H algorithm. (Unlike the full B&H algorithm, we do no text analysis at this stage. We continue to call the edge-weighted version of HITS "B&H" for simplicity.)
The main modification will be the insertion of a call to the segmentation algorithm after the $h \leftarrow Ea$ step and before the complementary step $a \leftarrow E^T h$. It is also a reasonable assumption that the best frontier will segment each hub non-trivially, i.e., below its DOM root. Therefore we can invoke the segmentation routine separately on each page. Let the segmentation algorithm described previously be invoked as $F \leftarrow \mathrm{segment}(u)$, where u is the DOM tree root of a page and F is the returned frontier for that page. Here is the pseudo-code for one iteration:
h ← E a
for each document DOM root u:
    F ← segment(u)
    for each frontier node v ∈ F:
        h(v) ← Σ_{w ∈ L_v} h(w)
        for each w ∈ L_v: h(w) ← h(v)
        reset h(v) ← 0
a ← E^T h
normalize a so that Σ_u a(u) = 1.
For convenience we can skip the hub normalization and only
normalize authorities every complete cycle; this does not
affect ranking.
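A runnable rendering of this iteration might look as follows (a minimal sketch with assumed data structures: edges maps each HREF leaf to the authorities it points to, pages lists DOM roots, segment is the frontier routine above, and leaves_under(v) returns the HREF leaves below a frontier node):

    def smoothed_iteration(pages, edges, a, segment, leaves_under):
        # h <- E a: each HREF leaf w collects the scores of the authorities it links to.
        h = {w: sum(a.get(t, 0.0) for t in targets) for w, targets in edges.items()}
        # Segment each page, aggregate leaf hub scores at each frontier node,
        # and redistribute the aggregate back to the leaves under it.
        for root in pages:
            for v in segment(root):
                leaves = leaves_under(v)
                total = sum(h.get(w, 0.0) for w in leaves)
                for w in leaves:
                    h[w] = total
        # a <- E^T h, then normalize the authority scores.
        a_new = {}
        for w, targets in edges.items():
            for t in targets:
                a_new[t] = a_new.get(t, 0.0) + h.get(w, 0.0)
        norm = sum(a_new.values()) or 1.0
        return {t: x / norm for t, x in a_new.items()}, h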
The reader will observe that this is not a linear relaxation as was the case with HITS, Clever, or B&H, because segment may lead us to aggregate and redistribute different sets of hub scores in different iterations, based on the current leaf hub scores. (Also note that if F were fixed for each page for all time, the system would still be linear and therefore guaranteed to converge.) Although convergence results for non-linear dynamical systems are rare [10], in our experiments we never found convergence to be a problem (see Section 5).
However, we do have to take care with the initial values of a and h, unlike in the linear relaxation situation where any positive value will do. Assume that the first iteration step transfers weights from authorities to hubs, and consider how we can initialize the authority scores. In contrast to HITS, we cannot start with all a(v) = 1. Why not? Because both good and bad authorities will get this score, resulting in many hub DOM subtrees looking more uniformly promising than they should. This will lead the segment algorithm to prune the frontier too eagerly, resulting in potentially excessive authority diffusion, as in HITS.
We propose a more conservative initialization policy. Similar to B&H, we assume that the textual content of the root-set documents returned by the text search engine is more reliably relevant than the radius-1 neighbors included for distillation. Therefore we start our algorithm by setting the authority scores of only the root-set pages to one. Of course, once the iterations start, this does not prevent authority from diffusing over to siblings, but the diffusion is controlled by hub segmentation.
There is one other way in which we bias our algorithm to be conservative w.r.t. authority diffusion. If a DOM node has only one child with a positive hub score, or if there is a tie in the cost of expanding vs. pruning, we expand the node, thereby pushing the frontier down and preventing the leaf hub score from spreading out to possibly irrelevant outlinks.
Taken together, these two policies may be a little too conservative, sometimes preventing desirable authority diffusion and bringing our algorithm closer to MicroHITS than we would like. For example, the graph being distilled may be such that page u has one DOM subtree clearly (to a human reading the text) dedicated to motorcycles, but only one link target v is in the expanded set. In ongoing work we are integrating text analysis into our fine-grained model to avoid such pitfalls [7].
Experiments and results
We used the 28 queries used in the Clever studies [5, 6] and by B&H [2] (shown in Figure 7). For each, RagingSearch returned at most 500 responses in the root set. These 500 × 28 pages were fetched and all their outlinks included in our database as well. RagingSearch and HotBot were used to get as many inlinks to the root set as possible; these were also included in our database. This resulted in about 488000 raw URLs.
After normalizing URLs and eliminating duplicates, approximately 366000 page fetches succeeded. We used the w3c command-line page fetching tool from http://www.w3c.org for its reliable timeout mechanism. We then scanned all these pages and filled a global (macro-)link table with 2105271 non-local links, i.e., links between pages not on the same hostname (as a lowercase string without port number).
We then proceeded to parse the documents into their DOMs in order to populate a different set of tables that represented the DOM nodes and the micro-links between them. We used the javax.swing.text.html.parser package and built a custom pared-down DOM generator on top of the SAX scanner provided. The total number of micro-links was 9838653, and the total number of micro-nodes likewise increased.
Out of the two million non-local links, less than 1% had targets that were not the root of the DOM tree of a page. Thus our introduction of the asymmetry in handling hubs and authorities seems to be not a great distortion of reality.
Even though our experiments were performed on a 700 MHz Pentium Xeon processor with 512 MB RAM and 60 GB of disk, handling this scale of operation required some care and compromise. In particular, to cut down the micro-graph to only about 10 million edges, we deleted all DOM paths that did not lead to an <A...>...</A> element. Otherwise, we estimated that the number of micro-links would be at least two orders of magnitude larger. (In our ongoing work we are having to address this issue as we are also analyzing text.)
#   Query                      Drift   Mixed
1   ``affirmative action''     large
2   alcoholism
3   ``amusement park*''        small
4   architecture
5   bicycling
6   blues
7   ``classical guitar''       small
8   cheese
9   cruises
10  ``computer vision''
11  ``field hockey''
12  gardening
13  ``graphic design''         large
14  ``Gulf war''               large
15  HIV
16  ``lyme disease''           small
17  ``mutual fund*''           small
18  ``parallel architecture''
19  ``rock climbing''          large
20  +recycling +can*
21  +stamp +collecting
22  Shakespeare
23  sushi                      small
24  telecommuting              large
25  +Thailand +tourism         large
26  ``table tennis''           small
27  ``vintage cars''           small
28  +Zen +buddhism             large
Figure 7: The set of 28 broad queries used for comparing B&H (without text analysis) and our system. The second column shows the extent of drift in the B&H response. The third column shows if mixed hubs were found within the top 50 hubs reported.
Figure 7 shows the 28 queries used by the Clever study and by B&H. As indicated before, our baseline was B&H with edge-weighting but without text-based outlier elimination, which we will simply call "B&H". We did not have any arbitrary cut-off for the number of in-links used, as we did not know which to discard and which to keep. As B&H noted, edge-weighting improved results significantly, but without text analysis it is not adequate to prevent drift. Of the 28 queries, half show drift to some extent. We discuss a few cases.
"Affirmative action" is understandably dominated by
lists of US universities because they publicize their support
for the same. Less intuitive was the drift into the world
of software, until we found http://206.243.171.145/7927.
html in the root set which presents a dialup networking software
called Affirmative Action, and links to many popular
freeware sites (figure 8).
By itself, this page would not
survive the link-based ranking, but the clique of software
sites leads B&H astray.
Another example was "amusement parks" where B&H
fell prey to multi-host nepotism in spite of edge-weighting. A
densely connected conglomerate including the relevant starting
point http://www.411fun.com/THEMEPARKS/ (figure 9)
formed a multi-site nepotistic cluster and misled macroscopic
algorithms.
In both these cases there were ample clues in the DOM structure alone (let alone text) that authority diffusion should be suppressed. We obtained several cases of reduced drift using our technique. (In ongoing work we are getting the improvement evaluated by volunteers.) One striking example was for the query "amusement parks" where our algorithm prevented http://www.411... from taking over the show (see figure 10; complete results are in AP-macro.html and AP-micro.html).
Figure 8: The part of this HTML page that contains the query affirmative action is not very popular, but adjoining DOM subtrees (upper right corner) create a dense network of software sites and mislead macroscopic distillation algorithms. Dotted red lines are drawn by hand.
Figure 9: The 411 "clique attack" comprises a set of sibling sites with different hostnames and a wide variety of topics linking to each other. A human can easily avoid paying attention to the sibling sites but macroscopic distillation will get misled. Dotted red lines are drawn by hand.
Figure 7 also shows that for almost half the queries, we found excellent examples of mixed hubs within the top 50 hubs reported. Given the abundance of hubs on these topics, we had anticipated that the best hubs would be "pure". While this was to some extent true, we found quite a few mixed hubs too. Our system automatically highlighted the most relevant DOM subtree; we present some examples in figure 11 and urge the reader to sample the annotated hubs packaged with the HTML version of this paper.
Macroscopic                        Fine-grained
http://www.411boating.com          http://www.kennywood.com
http://www.411jobs.com             http://www.beachboardwalk.com
http://www.411insure.com           http://www.sixflags.com
http://www.411hitech.com           http://www.cedarpoint.com
http://www.411freestuff.com        http://www.pgathrills.com
http://www.411commerce.com         http://www.pki.com
http://www.411-realestate.com      http://www.valleyfair.com
http://www.411worldtravel.com      http://www.silverwood4fun.com
http://www.411worldsports.com      http://www.knotts.com
http://www.411photography.com      http://www.thegreatescape.com
                                   http://www.dutchwonderland.com
Figure 10: The fine-grained algorithm is less susceptible to clique attacks. The query here is amusement parks.
Figure 11: Two samples of mixed hub annotations: amusement
parks amidst roller-coaster manufacturers and sushi amidst
international cuisine.
Query               Annotated file
alcoholism          AL1.html
Amusement parks     AP1.html
Architecture        AR1.html
Classical guitar    CG1.html
HIV                 HI1.html
Shakespeare         SH1.html
Sushi               SU1.html
We verified that our smoothing algorithm was performing
non-trivial work: it was not merely locating top-scoring
authorities and highlighting them. Within the highlighted
regions, we typically found as many unvisited links as links
already rated as authorities. In ongoing work we are using
these new links for enhanced focused crawling.
A key concern for us was whether the smoothing iterations will converge or not. Because the sites of hub aggregation are data-dependent, the transform was non-linear, and we could not give a proof of convergence. In practice we faced no problems with convergence; figure 12 is typical of all queries.
Figure 12: In spite of the non-linear nature of our relaxation algorithm, convergence is quick in practice. A typical chart of average change to authority scores is shown against successive iterations.
This raised another concern: was the smoothing subroutine doing anything dynamic and useful, or was convergence due to its picking the same sites for hub aggregation every time? In figures 13 and 14 we plot relative numbers of nodes pruned vs. expanded against the number of iterations. Queries which do not have a tendency to drift look like figure 13. Initially, both numbers are small. As the system bootstraps into controlled authority diffusion, more candidate hubs are pruned, i.e., accepted in their entirety. Diffused authority scores in turn lead to fewer nodes getting expanded. For queries with a strong tendency to drift (figure 14), the number of nodes expanded does not drop as low as in low-drift situations. For all the 28 queries, the respective counts stabilize within 10-20 iterations.
Figure 13: Our micro-hub smoothing technique is highly adaptive: the number of nodes pruned vs. expanded changes dramatically across iterations, but stabilizes within 10-20 iterations. There is also a controlled induction of new nodes into the response set owing to authority diffusion via relevant DOM subtrees (query: bicycling).
Figure 14: For some queries for which B&H showed high drift, our algorithm continues to expand a relatively larger number of nodes in an attempt to suppress drift (query: affirmative action).
Finally, we checked how close we were to the B&H ranking. We expected our ranking to be correlated with theirs, but verified that there are meaningful exceptions. Figure 15 shows a scatter plot of authority scores. It illustrates that we systematically under-rate authorities compared to B&H (the axes have incomparable scales; the leading straight line should be interpreted as y = x). This is a natural outcome of eliminating pseudo-authorities that gain prominence in B&H via mixed hubs.
Figure 15: Our ranking is correlated to B&H, but not identical; we tend to systematically under-rate authorities compared to B&H. (The scatter plot shows the B&H authority score against our authority score.)
Conclusion and future work
We have presented a fine-grained approach to topic distillation
that integrates document substructure (in the form
of the Document Object Model) with regular hyperlinks.
Plugging in the fine-grained graph in place of the usual
coarse-grained graph does not work because the fine-grained
graph may not have the bipartite cores so vital to the success
of macroscopic distillation algorithms. We propose a new technique for aggregating and propagating micro-hub scores at a level determined by the Minimum Description Length principle applied to the DOM tree with hub scores at the leaves. We show that the resulting procedure still converges in practice, reduces drift, and is moreover capable of identifying and extracting regions (DOM subtrees) relevant to the query out of a broader hub or a hub with additional less-relevant contents and links.
In ongoing work, apart from completing a detailed user
study (as in the Clever project), we are exploring three more
ideas. First, our algorithm depends on DOM branch points to be able to separate relevant hub fragments from irrelevant ones. We have seen some pages with a long sequence of URLs without any helpful DOM structure, such as <LI>, providing natural segment candidates.
bring back some of the text analysis techniques that have
improved HITS and integrate them with our model. Third,
we are measuring if the link localization done by our system
can help in faster resource discovery.
Acknowledgment: Thanks to Vivek Tawde and Hrishikesh
Gupta for helpful discussions, to S. Sudarshan for stimulating
discussions and generous support from the Informatics
Lab, IIT Bombay, and the anonymous reviewers for helping
to improve the presentation.
References
[1] E. Amitay and C. Paris. Automatically summarising web sites: Is there a way around it? In 9th International Conference on Information and Knowledge Management (CIKM 2000), Washington, DC, USA, 2000. ACM. Online at http://www.mri.mq.edu.au/~einat/publications/cikm2000.pdf.
[2] K. Bharat and M. Henzinger. Improved algorithms for topic distillation in a hyperlinked environment. In 21st International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 104-111, Aug. 1998. Online at ftp://ftp.digital.com/pub/DEC/SRC/publications/monika/sigir98.pdf.
[3] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the 7th World-Wide Web Conference (WWW7), 1998. Online at http://decweb.ethz.ch/WWW7/1921/com1921.htm.
[4] O. Buyukkokten, H. Garcia-Molina, and A. Paepcke. Focused web searching with PDAs. In World Wide Web Conference, Amsterdam, May 2000. Online at http://www9.org/w9cdrom/195/195.html.
[5] S. Chakrabarti, B. Dom, D. Gibson, J. Kleinberg, P. Raghavan, and S. Rajagopalan. Automatic resource compilation by analyzing hyperlink structure and associated text. In 7th World-wide web conference (WWW7), 1998. Online at http://www7.scu.edu.au/programme/fullpapers/1898/com1898.html.
[6] S. Chakrabarti, B. E. Dom, S. Ravi Kumar, P. Raghavan, S. Rajagopalan, A. Tomkins, D. Gibson, and J. Kleinberg. Mining the Web's link structure. IEEE Computer, 32(8):60-67, Aug. 1999.
[7] S. Chakrabarti, M. Joshi, and V. Tawde. Enhanced topic distillation using text, markup tags, and hyperlinks. Submitted for publication, Jan. 2001.
[8] S. Chakrabarti, M. van den Berg, and B. Dom. Focused crawling: a new approach to topic-specific web resource discovery. Computer Networks, 31:1623-1640, 1999. First appeared in the 8th International World Wide Web Conference, Toronto, May 1999. Available online at http://www8.org/w8-papers/5a-search-query/crawling/index.html.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley and Sons, Inc., 1991.
[10] D. A. Gibson, J. M. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamical systems. In VLDB, volume 24, pages 311-322, New York, Aug. 1998.
[11] G. H. Golub and C. F. van Loan. Matrix Computations. Johns Hopkins University Press, London, 1989.
[12] M. Hearst. Multi-paragraph segmentation of expository text. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, Las Cruces, NM, June 1994. Online at http://www.sims.berkeley.edu/~hearst/publications.shtml.
[13] D. S. Johnson and K. A. Niemi. On knapsacks, partitions, and a new dynamic programming technique for trees. Mathematics of Operations Research, 8(1):1-14, 1983.
[14] J. Kleinberg. Authoritative sources in a hyperlinked environment. In ACM-SIAM Symposium on Discrete Algorithms, 1998. Online at http://www.cs.cornell.edu/home/kleinber/auth.ps.
[15] R. Lempel and S. Moran. The stochastic approach for link-structure analysis (SALSA) and the TKC effect. In WWW9, pages 387-401, Amsterdam, May 2000. Online at http://www9.org/w9cdrom/175/175.html.
[16] J. Rissanen. Stochastic complexity in statistical inquiry. In World Scientific Series in Computer Science, volume 15. World Scientific, Singapore, 1989.
[17] G. Salton and M. J. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
[18] S. Sarawagi. Explaining differences in multidimensional aggregates. In International Conference on Very Large Databases (VLDB), volume 25, 1999. Online at http://www.it.iitb.ernet.in/~sunita/papers/vldb99.pdf.
| PageRank algorithm;segmentation;HITS;link localization;Topic distillation;DOM;Document Object Model;XML;microscopic distillation;text analysis;Minimum Description Length principle;Google;hub fragmentation;hyperlink;topic distillation |
115 | Integration of Information Assurance and Security into the IT2005 Model Curriculum | In this paper we present the context of the work of the Curriculum Committee on IT2005, the IT curriculum volume described in the Overview Draft document of the Joint Task Force for Computing Curriculum 2004. We also provide a brief introduction to the history and work of the Information Assurance Education community. These two perspectives provide the foundation for the main thrust of the paper, which is a description of the Information Assurance and Security (IAS) component of the IT2005 document. Finally, we end the paper with an example of how IAS is being implemented at BYU as a "pervasive theme" that is woven throughout the curriculum and conclude with some observations about the first year's experience. | INTRODUCTION
In December 2001 a meeting (CITC-1) of interested parties from
fifteen four-year IT programs from the US along with
representatives from IEEE, ACM, and ABET began work on the
formalization of Information Technology as an accredited
academic discipline. The effort has evolved into SIGITE, the
ACM SIG for Information Technology Education. During this
evolution three main efforts have proceeded in parallel: 1)
Definition of accreditation standards for IT programs, 2) Creation
of a model curriculum for four-year IT programs, and 3)
Description of the characteristics that distinguish IT programs
from the sister disciplines in computing.
One of the biggest challenges during the creation of the model
curriculum was understanding and presenting the knowledge area
that was originally called "security". Some of us were
uncomfortable with the term because it was not broad enough to
cover the range of concepts that we felt needed to be covered. We
became aware of a community that had resolved many of the
issues associated with the broader context we were seeking,
Information Assurance. Information assurance has been defined
as "a set of measures intended to protect and defend information
and information systems by ensuring their availability, integrity,
authentication, confidentiality, and non-repudiation. This includes
providing for restoration of information systems by incorporating
protection, detection, and reaction capabilities." The IA
community and work done by IA educators became useful in
defining requisite security knowledge for information technology
education programs.
We believe that the Information Technology and the Information
Assurance Education communities have much to share. At the 9th Colloquium for Information System Security Education in Atlanta we introduced CC2005 and IT2005 to the IA Education community [1]. In the current paper we introduce the history and
current state of IA education to the SIGITE community. In
addition, we demonstrate how significant concepts from the
Information Assurance community have been integrated into
IT2005.
1.1 CC2005 and IT2005
In the first week of December of 2001 representatives from 15
undergraduate information technology (IT) programs from across
the country gathered together near Provo, Utah, to develop a
community and begin to establish academic standards for this
rapidly growing discipline. This first Conference on Information
Technology Curriculum (CITC-1) was also attended by
representatives from two professional societies, the Association
for Computing Machinery (ACM) and the Institute of Electrical
and Electronics Engineers, Inc. (IEEE), and also the Accreditation
Board for Engineering and Technology, Inc. (ABET). This
invitational conference was the culmination of an effort begun
several months earlier by five of these universities who had
formed a steering committee to organize a response from existing
IT programs to several initiatives to define the academic discipline
of IT. The steering committee wanted to ensure that the input of
existing programs played a significant role in the definition of the
field.
A formal society and three main committees were formed by the
attendees of CITC-1. The society was the Society for Information
Technology Education (SITE); one of the committees formed was
the executive board for SITE, composed of a president, vice-president
, secretary, treasurer, regional representatives, and an
activities chairperson. The other two committees formed were the
IT Curriculum Committee, including subcommittees for 4-year
and 2-year programs, and the IT Accreditation Committee, also
including subcommittees for 4-year and 2-year programs.
The development of IT as an academic discipline is similar to the
process that computer science (CS) went through in the 70's and
80's. In fact, looking at the placement of CS programs in academic
institutions around the U.S. illustrates the debate that swirled
around the discipline as its core was being defined. Some CS
programs are in departments of mathematics, others are in
engineering schools, and many others have become mainstay
programs within newly emerging colleges of computing.
Information technology, as it is practiced at this moment in its
evolution, reflects similar growing pains. IT programs exist in
colleges of computing, in CS departments, in schools of
technology, and in business schools. Professors of information
technology possess degrees in information systems, electronics,
communications, graphics arts, economics, mathematics,
computer science, and other disciplines. Few to none of them
have a degree in information technology.
It should be acknowledged here that IT has two substantially
different interpretations, and that these should be clarified.
Information Technology (IT) in its broadest sense encompasses all
aspects of computing technology. IT, as an academic discipline,
focuses on meeting the needs of users within an organizational
and societal context through the selection, creation, application,
integration and administration of computing technologies. A
more detailed history of SIGITE is available in [2].
SIGITE is directly involved with the Joint Task Force for Computing Curriculum 2004 and has 2 representatives on the task force. This task force is a continuation of the effort that created CC2001 [3], the current computer science curriculum standard. CC2001 has been relabeled CS2001, and the current draft of the CC2004 Overview document [4] presents the structure being used to describe computing and its sub-disciplines (see Figure 1). The SIGITE Curriculum Committee is responsible for IT2005, the Information Technology Curriculum Volume. IT2005 was made available for comment in mid 2005.
Figure 1: The structure used by the CC2004 Overview document to describe computing and its sub-disciplines.
1.2 Information Assurance Education
Information assurance has been defined as "a set of measures
intended to protect and defend information and information
systems by ensuring their availability, integrity, authentication,
confidentiality, and non-repudiation. This includes providing for
restoration of information systems by incorporating protection,
detection, and reaction capabilities." (National Security Agency,
http://www.nsa.gov/ia/iaFAQ.cfm?MenuID=10#1).[5]
Information assurance education, then, includes all efforts to
prepare a workforce with the needed knowledge, skills, and
abilities to assure our information systems, especially critical
national security systems. Information assurance education has
been growing in importance and activity for the past two decades.
A brief look at the involved entities and history will shed light on
the growth.
The National Information Assurance Education and Training
Partnership (NIETP) program is a partnership among government,
academia and industry focused on advancing information
assurance education, training, and awareness. The NIETP was
initiated in 1990 under National Security Directive 42 and has
since been reauthorized several times. The NIETP serves in the
capacity of national manager for information assurance education
and training related to national security systems and coordinates
this effort with the Committee on National Security Systems
(CNSS). "The CNSS provides a forum for the discussion of
policy issues, sets national policy, and promulgates direction,
operational procedures, and guidance for the security of national
security systems. National security systems are information
systems operated by the U.S. Government, its contractors or
agents that contain classified information or that:
1. involve intelligence activities;
2. involve cryptographic activities related to national security;
3. involve command and control of military forces;
4. involve equipment that is an integral part of a weapon or
weapons system(s); or
5. are critical to the direct fulfillment of military or intelligence
missions (not including routine administrative and business
applications)." http://www.cnss.gov/history.html[6]
CNSS is responsible for the development of principles, policies, guidelines, and standards that concern systems holding or related to national security information. Education and training standards are among the many standards and guidelines that CNSS issues. The training/education standards issued to date include: a) NSTISSI 4011 - The National Training Standard for Information Systems Security Professionals, b) CNSSI 4012 - The National Information Assurance Training Standard for Senior Systems Managers, c) CNSSI 4013 - The National Information Assurance Training Standard for System Administrators, d) CNSSI 4014 - Information Assurance Training Standard for Information Systems Security Officers, and e) NSTISSI 4015 - The National Training Standard for Systems Certifiers. CNSSI 4016 - The National Training Standard for Information Security Risk Analysts will be released soon.
(Footnote: Under Executive Order (E.O.) 13231 of October 16, 2001, Critical Infrastructure Protection in the Information Age, the President redesignated the National Security Telecommunications and Information Systems Security Committee (NSTISSC) as the Committee on National Security Systems (CNSS).)
The NSTISSI-CNSSI standards referenced above have been used
to develop in-service training and education opportunities for
enlisted and civilian employees in an effort to assure quality
preparation of professionals entrusted with securing our critical
information. In addition to providing a basis for in-service
education and training, the NSTISSI-CNSSI standards have also
been deployed to colleges and universities in an effort to also
prepare qualified individuals preservice. The most significant
effort to involve colleges and universities has been through the
National Centers of Academic Excellence in Information
Assurance Education (CAEIAE) Program. The CAEIAE program
was started in 1998 by the National Security Agency (NSA) and is
now jointly sponsored by the NSA and the Department of
Homeland Security (DHS) in support of the President's National
Strategy to Secure Cyberspace, February 2003. The purpose of the
program is to recognize colleges and universities for their efforts
in information assurance education and also to encourage more
colleges and universities to develop courses and programs of
study in information assurance. In order to be eligible to apply
for CAEIAE certification, an institution must first demonstrate
that it teaches the content covered in NSTISSI 4011 - The
National Training Standard for Information Systems Security
Professionals. Once an institution has been 4011 certified, it is
eligible to apply for CAEIAE status. Criteria for becoming a
CAEIAE include the following: a) evidence of partnerships in IA
education, b) IA must be treated as a multidisciplinary science, c)
evidence that the university encourages the practice of
information assurance in its operations, d) demonstration of
information assurance research, e) demonstration that the IA
curriculum reaches beyond physical geographic borders, f)
evidence of faculty productivity in information assurance research
and scholarship, g) demonstration of state of the art information
assurance resources, h) a declared concentration(s) in information
assurance, i) a university recognized center in information
assurance, and j) dedicated information assurance faculty (http://www.nsa.gov/ia/academia/caeCriteria.cfm?MenuID=10.1.1.2) [7].
In 1999, there were seven institutions designated as the inaugural
CAEIAE schools. The certification is good for three years at
which time institutions can reapply. Annually, an additional 6-10
institutions are awarded the certification; today, there are more
than 60 CAEIAE institutions. The types of institutions and
programs that are applying and being certified are growing not
just in number, but also in diversity. In the first round of
certification, the institutions were largely research institutions and
their respective programs were at the graduate level in computer
science. Today, institutions are certifying courses at the
undergraduate level in computer science, management
information systems, and information technology. The work
being done by SIGITE is important to the further expansion of
information assurance education as information assurance
expands beyond the development of information systems to
include the entire system life cycle, including deployment, operation, maintenance, and retirement of such systems.
INFORMATION ASSURANCE IN IT2005
The IT2005 volume is modeled on CS2001. It consists of 12
chapters and 2 appendices. The current draft resides at http://sigite.acm.org/activities/curriculum/ [8].
Chapter 1. Introduction
Chapter 2. Lessons from Past Reports
Chapter 3. Changes in the Information Technology
Discipline
Chapter 4. Principles
Chapter 5. Overview of the IT Body of Knowledge
Chapter 6. Overview of the Curricular Models
Chapter 7. The Core of the Curriculum
Chapter 8. Completing the Curriculum
Chapter 9. Professional Practice
Chapter 10. Characteristics of IT Graduates
Chapter 11. Computing across the Curriculum
Chapter 12. Institutional Challenges
Acknowledgements
Bibliography
Appendix A. The IT Body of Knowledge
Appendix B. IT Course Descriptions
Chapters 5 and 7 are of particular interest for this discussion.
Chapter 5 is an overview of the IT body of knowledge. A
summary is included as Appendix A. Chapter 7 discusses the
relationship of the core topics described in the body of knowledge
to IT curriculum. IAS is explicitly mentioned in three contexts:
Section 7.2 as part of the IT Fundamentals Knowledge
Area (KA)
Section 7.2 as a "pervasive theme"
Section 7.4 as a KA that integrates the IAS concepts for
students ready to graduate.
IAS is the only area that is an IT Fundamental, a "pervasive
theme" and also a complete KA with a recommended senior level
course for integrating all of the concepts. Clearly, IT2005
presents Information Assurance and Security as a core
competency required by every graduate of an IT program.
During the early analysis of IT as an academic discipline, Delphi
studies were performed that ranked "Security" as a central area for
IT. [1]
As we studied the issues several members of the
committees involved were uncomfortable with "security" as the
name for the knowledge area. The name seemed too restrictive.
At the annual SIGITE conference in 2003 two of the authors were
introduced to the other author and the Center for Research and
Education in Information Assurance and Security (Cerias) at
Purdue. The BYU faculty was dissatisfied with the security
component in the IT curriculum and the SIGITE curriculum
committee was struggling with the Security KA for IT2005.
Through flyers at the conference we became aware of the
Information Assurance Education Graduate Certificate (IAEGC)
program funded by the NSA. With encouragement from
colleagues and the administration of the School of Technology,
the primary author attended the 2004 program. The experience
has had a significant impact on IT2005 and the BYU curriculum.
We discovered that NSA had begun to use the umbrella term
Information Assurance [9] to cover what we were calling security.
Even though this term is defined to cover exactly what the IT
community meant by security, the use of the terminology elicited
a lot of blank stares. We found that explicitly adding security to
the name of the knowledge area eliminated much of the confusion.
We are indebted to the Center for Education and Research for
Information Assurance and Security (CERIAS)[10] at Purdue
whose name provided the inspiration to use IAS as a name for the
knowledge area.
Once the naming issue was resolved, the SIGITE curriculum committee struggled to find a model for IAS that could: a) be understood by freshman IT students, b) provide a framework to integrate IAS concepts that are integrated into nearly all of the other KAs, and c) be rich enough to support a senior level course that ties everything together.
When A Model for Information Assurance: An Integrative
Approach [11] was discovered the writing committee achieved
consensus on a model. The cube (see Figure 2) provides a simple
visual representation that a freshman can understand, yet the 3
dimensional structure facilitates the detailed analysis required for
use in technology specific contexts, and is comprehensive enough
to encompass a capstone learning experience.
Figure 2: The information assurance model of Maconachy et al. [11], depicted as a cube.
IT2005 uses this model to structure IAS concepts throughout the
document.
RECOMMENDATIONS FOR "PERVASIVE THEMES" IN IT2005
During the deliberations of the SIGITE Curriculum Committee,
several topics emerged that were considered essential, but that did
not seem to belong in a single specific knowledge area or unit.
These topics, referred to as pervasive themes, are:
1. user advocacy
2. information assurance and security
3. ethics and professional responsibility
4. the ability to manage complexity through: abstraction & modeling, best practices, patterns, standards, and the use of appropriate tools
5. a deep understanding of information and communication technologies and their associated tools
6. adaptability
7. life-long learning and professional development
8. interpersonal skills
The committee states "that these topics are best addressed
multiple times in multiple classes, beginning in the IT
fundamentals class and woven like threads throughout the tapestry
of the IT curriculum"[12].
These themes need to be made explicit in the minds of the
students and the faculty. The themes touch many of the topics
throughout the curriculum. Every time a new technology is
announced in the media, an instructor has the opportunity to drive
home the importance of "life-long learning". Every time there is a
cyber-crime in the media we have the opportunity to discuss the
ethical and professional ramifications. It is recommended that an
IT Fundamentals course be taught early in the curriculum where
all of these themes are introduced and discussed as concepts that
touch everything an IT professional does.
Each of these topics deserves a full treatment; however, for the
purposes of this paper we will focus on IAS, possibly the most
pervasive theme. We will address one approach to achieve
addressing IAS "multiple times in multiple classes" in section 6
below.
THE INFORMATION ASSURANCE AND SECURITY KNOWLEDGE AREA
In early 2003, the SIGITE curriculum committee divided into
working groups around the knowledge areas defined by [3] to
make an initial cut at the list of topics for each KA. A significant
revision was accomplished and reviewed by the participants at the
2004 IAEGC program at Purdue in August 2004. The list of areas
for the IAS KA was finalized in late 2004 at a full IT Curriculum
Committee meeting. The draft of the completed IAS KA was
completed in early Feb 2005 by the IAS working group, edited by
the writing committee in late Feb 2005 and was presented to the
full committee in April 2005.
Figure 3 is a list of the IAS KA and its areas. The basic structure and vocabulary is derived directly from work done in the IA community, specifically Maconachy et al. [11]. The number in parentheses is the number of lecture hours the committee thought would be required to give an IT student minimum exposure to the unit. It should be noted that the ordering of units in all of the KAs is first "Fundamentals", if there is one, and then the units are sorted in order of the number of core hours. This ordering should not be considered as any indication of the order the units would be covered pedagogically in an implemented curriculum.
Figure 3:
IAS. Information Assurance and Security (23 core hours)
IAS1. Fundamental Aspects (3)
IAS2. Security Mechanisms (countermeasures) (5)
IAS3. Operational Issues (3)
IAS4. Policy (3)
IAS5. Attacks (2)
IAS6. Security Domains (2)
IAS7. Forensics (1)
IAS8. Information States (1)
IAS9. Security Services (1)
IAS10. Threat Analysis Model (1)
IAS11. Vulnerabilities (1)
A summary of the IAS KA is in Appendix A, and a complete
treatment is found in IT2005 [4], including topics, core learning
outcomes, and example elective learning outcomes.
In reviewing this model curriculum for IAS in Information
Technology, it should be remembered that the core topics and
associated lecture hours are the minimum coverage that every IT
student in every program should receive. We would expect that
most institutions would provide additional instruction in
Information Assurance and Security according to the
strengths/areas of specialization in their programs of study.
IT AT BRIGHAM YOUNG UNIVERSITY
The Information Technology program at BYU began officially in
Fall 2001 with a faculty consisting of:
1. Two electronics engineering technology (EET) professors who
were instrumental in the evolution of the existing EET program at
BYU into an IT program,
2. One electrical engineering, Ph.D. newly arrived from the
aerospace industry.
3. One computer science instructor who had done part time
teaching and had been part of the department for 1 year with
several years in system development in health care.
4. One computer science Ph.D. with recent executive management
responsibilities in network hardware and service provider
businesses.
5. The former department chair of the technology education
program for secondary schools joined in 2002.
6. One computer science Ph.D. with extensive industry
experience in data privacy and IT management joined in 2004.
This is obviously a diverse group of people, each of whom joined
the department because they thought that the existing computing
programs at BYU did not offer students preparation for the
practical aspects of system delivery to customers. We are evenly
divided between long-term academics and recent `retreads' from
industry. However, the academics have also each had significant
industrial experience, which provided the motivation for them to
accept positions in the new IT program. The BYU curriculum
began as a traditional "stovepipe" approach of courses oriented
around topics like networking, databases, and operating systems
borrowed from CS, EET, CE and IS, and evolved to a more
integrated approach starting at the introductory levels so that
advanced topic oriented courses are more easily sequenced. We
have also discovered that the integrative nature of IT forces a
focus on the seams between technologies rather than
implementation of components. This fundamental difference in
focus is one of the primary differences that distinguishes IT from
other computing disciplines that focus on the design and
implementation of components[12] [13]. Over the last 4 years,
BYU faculty has participated actively in SIGITE and attempted to
share what has been learned with the emerging IT community.
[14] [15] [16]
The BYU curriculum has evolved into what IT2005 calls a
"core/integration first" approach [17]. Significant portions of the
introductory material in operating systems, databases, web
systems, networking had been moved to lower division courses by
early 2004. Much of the shift occurred when the introduction to
web systems was moved from the junior to the sophomore year
and introductory material sufficient to understand web systems
was included for networking, databases, operating system
administration and OS process models. The improvements in flow
and reduced redundancy have been noticeable in the upper
division core courses. Appendix B graphs the current BYU course
structure. In late 2004 and early 2005 we began implementing
the "pervasive theme" of IAS in earnest.
INTEGRATING IAS INTO THE EXISTING BYU CURRICULUM
A senior level IAS class had been introduced into the curriculum
in early 2004 and was made a requirement in 2005. However, we
recognized that simply adding a required course at the end of a
student's college experience would not be adequate. SIGITE
discussions had placed security in the pervasive theme category at
the very beginning, though the name of the KA wasn't chosen
until 2004. We were faced with the challenge of integrating the
IAS fundamentals into the introductory courses, morphing the
security modules in the existing classes to use the MSRW [11]
framework and bringing all of the students in the program up to
speed on the new framework simultaneously.
Our approach has been to prepare one-hour modules on the
MSRW framework that can be used in an existing course to bring
students up to speed or taught in seminars as needed. We are in
the process of integrating the IAS Fundamentals into our
introductory courses. We successfully integrated the IAS modules
into the sophomore introduction to web-based systems course,
which was already introducing all of the major IT areas. The
course was modified to replace a 3 week team project experience
with a 2 week team oriented lab and then using the time for IAS
topics. Much remains to be done, but the initial experience is
positive. The faculty seems unified in their desire to implement
IAS as a pervasive theme. For example, 2 lecture and 2 lab hours
are now included in the computer communications course. 3
lecture hours and 3 lab hours were added to the web systems
course. The IAS component of the database course was
rearranged and strengthened with 1 lecture hour added. Similar
adjustments have been made throughout the curriculum.
In addition to improving the IAS component of the BYU
curriculum, we have done an analysis of our coverage of the
proposed IT2005 core. We have several adjustments in other
parts of our curriculum. Since we evolved from an EET program,
the hardware coverage was extremely strong. We are weak in the
coverage of systems and database administration. We will
continue to adjust our curriculum as IT matures as an academic
discipline.
SUMMARY
Information Technology is maturing rapidly as an academic
discipline. A public draft of the IT volume described in the
Computing Curriculum 2004 Overview is ready for review. The
SIGITE Curriculum Committee is soliciting feedback on the
document. This paper presents a brief history of SIGITE, the
ACM SIG for Information Technology Education, and a brief
introduction to the Information Assurance Education community.
The authors believe that collaboration between these communities
can be of benefit to all of the participants and the industry at large.
SIGITE and the CC 2005 Joint Task Force solicit feedback on the
documents at http://www.acm.org/education/ .
ACKNOWLEDGMENTS
The authors would like to thank the ACM Education committee
for their support of the IT2005 effort, especially Russ
Shackleford, without whose financial support and encouragement
the document would be years away from completion. We would
also like to express appreciation to the NSA for funding the
IAEGC[18] program. Corey Schou's IAEGC lecture on helping
students understand IAS in an hour was the genesis of the IAS
approach in IT2005. The BYU authors would like to express
appreciation to our colleagues and the administration of the
School of Technology at Brigham Young University, who covered
our classes and found the funding for the time and travel our
participation in the SIGITE curriculum committee required.
REFERENCES
[1] Ekstrom, Joseph J., Lunt, Barry M., Integration of Information
Assurance and Security into IT2005, 9
th
Colloquium for
Information Systems Security Education, June 6-9, 2005,
Atlanta, Georgia.
[2] Lunt, Barry M.; Ekstrom, Joseph J.; Lawson, Edith A.;
Kamali, Reza; Miller, Jacob; Gorka, Sandra; Reichgelt, Han;
"Defining the IT Curriculum: The Results of the Last 2
Years"; World Engineer's Convention 2004, Shanghai,
China; Nov 2-6, 2004
[3] Joint Task Force for Computing Curricula (2001), Computing
Curricula 2001, Computer Science Volume, December 15,
2001, Copyright 2001, ACM/IEEE
[4] Joint Task Force for Computing Curricula (2004), Computing Curricula 2004: Overview Document, http://www.acm.org/education/Overview_Draft_11-22-04.pdf retrieved Mar. 2, 2005.
[5] http://www.nsa.gov/ia/iaFAQ.cfm?MenuID=10#1
[6] http://www.cnss.gov/history.html
[7] http://www.nsa.gov/ia/academia/caeCriteria.cfm?MenuID=10.1.1.2
[8] SIGITE Curriculum Committee (2005), Computing
Curriculum 2005, IT Volume,
http://sigite.acm.org/activities/curriculum/
[9] NSA web site, Information Assurance Division; http://www.nsa.gov/ia/ verified Mar. 4, 2005.
[10] Cerias web site, http://cerias.purdue.edu/; verified Mar 4,
2005
[11] Maconachy, W. Victor; Schou, Corey D.; Ragsdale, Daniel; Welch, Don; "A Model for Information Assurance: An Integrated Approach", Proceedings of the 2001 IEEE Workshop on Information Assurance and Security, United States Military Academy, West Point, NY, 5-6 June 2001.
[12] Ekstrom, Joseph, Renshaw, Stephen, Curriculum and Issues
in a First Course of Computer Networking for Four-year
Information Technology Programs, ASEE 2002 Session
2793
[13] Ekstrom, Joseph, Renshaw, Stephen, A Project-Based
Introductory Curriculum in Networking, WEB and Database
Systems for 4-year Information Technology Programs, CITC
3 Rochester NY, September, 2002
[14] Ekstrom, Joseph, Renshaw, Stephen, Database Curriculum
Issues for Four-year IT Programs, CIEC 2003, Tucson, AZ,
January, 2003.
[15] Ekstrom, Joseph; Lunt, Barry; Education at the Seams:
Preparing Students to Stitch Systems Together; Curriculum
and Issues for 4-Year IT Programs, CITC IV Purdue
University, West Lafayette, Indiana, October 2003.
[16] Ekstrom, Joseph; Lunt, Barry M; Helps, C. Richard;
Education at the Seams: Preliminary Evaluation of Teaching
Integration as a Key to Education in Information
Technology; ASEE 2004, Salt Lake City, Utah, June 2004.
[17] Section 6.3 of ref [4].
[18] IAEGC, Information Assurance Education Graduate
Certificate, http://www.cerias.purdue.edu/iae Validated
April 13, 2005.
Appendix A
From IT2005 Mar 2005 Draft
The Information Technology Body of Knowledge
ITF. Information Technology Fundamentals (33 core)
ITF1. Pervasive Themes in IT (17)
ITF2. Organizational Issues (6)
ITF3. History of IT (3)
ITF4. IT and Its Related and Informing Disciplines (3)
ITF5. Application Domains (2)
ITF6. Applications of Math and Statistics to IT (2)
HCI. Human Computer Interaction (20 core hours)
HCI1. Human Factors (6)
HCI2. HCI Aspects of Application Domains (3)
HCI3. Human-Centered Evaluation (3)
HCI4. Developing Effective Interfaces (3)
HCI5. Accessibility (2)
HCI6. Emerging Technologies (2)
HCI7. Human-Centered Software (1)
IAS. Information Assurance and Security (23 core)
IAS1. Fundamental Aspects (3)
IAS2. Security Mechanisms (Countermeasures) (5)
IAS3. Operational Issues (3)
IAS4. Policy (3)
IAS5. Attacks (2)
IAS6. Security Domains (2)
IAS7. Forensics (1)
IAS8. Information States (1)
IAS9. Security Services (1)
IAS10. Threat Analysis Model (1)
IAS11. Vulnerabilities (1)
IM. Information Management (34 core hours)
IM1. IM Concepts and Fundamentals (8)
IM2. Database Query Languages (9)
IM3. Data Organization Architecture (7)
IM4. Data Modeling (6)
IM5. Managing the Database Environment (3)
IM6. Special-Purpose Databases (1)
IPT. Integrative Programming & Technologies (23 core)
IPT1. Intersystems Communications (5)
IPT2. Data Mapping and Exchange (4)
IPT3. Integrative Coding (4)
IPT4. Scripting Techniques (4)
IPT5. Software Security Practices (4)
IPT6. Miscellaneous Issues (1)
IPT7. Overview of programming languages (1)
NET. Networking (20 core hours)
NET1. Foundations of Networking (3).
NET2. Routing and Switching (8)
NET3. Physical Layer (6)
NET4. Security (2)
NET5. Application Areas (1)
NET6. Network Management
PF. Programming Fundamentals (38 core hours)
PF1. Fundamental Data Structures (10)
PF2. Fundamental Programming Constructs (9)
PF3. Object-Oriented Programming (9)
PF4. Algorithms and Problem-Solving (6)
PF5. Event-Driven Programming (3)
PF6. Recursion (1)
PT. Platform Technologies (14 core hours)
PT1. Operating Systems (10)
PT2. Architecture and Organization (3)
PT3. Computer Infrastructure (1)
PT4. Enterprise Deployment Software
PT5. Firmware
PT6. Hardware
SA. System Administration and Maintenance (11 core hours)
SA1. Operating Systems (4)
SA2. Applications (3)
SA3. Administrative Activities (2)
SA4. Administrative Domains (2)
SIA. System Integration and Architecture (21 core hours)
SIA1. Requirements (6)
SIA2. Acquisition/Sourcing (4)
SIA3. Integration (3)
SIA4. Project Management (3)
SIA5. Testing and QA (3)
SIA6. Organizational Context (1)
SIA7. Architecture (1)
SP. Social and Professional Issues (23 core hours)
SP1. Technical Writing for IT (5)
SP2. History of Computing (3)
SP3. Social Context of Computing (3)
SP4. Teamwork Concepts and Issues (3)
SP5. Intellectual Properties (2)
SP6. Legal Issues in Computing (2)
SP7. Organizational Context (2)
SP8. Professional and Ethical Issues and Responsibilities (2)
SP9. Privacy and Civil Liberties (1)
WS. Web Systems and Technologies (21 core hours)
WS1. Web Technologies (10)
WS2. Information Architecture (4)
WS3. Digital Media (3)
WS4. Web Development (3)
WS5. Vulnerabilities (1)
WS6. Social Software
Total Hours: 281
Notes:
1. Order of Knowledge Areas: Fundamentals first, then ordered alphabetically.
2. Order of Units under each Knowledge Area: Fundamentals first (if present),
then ordered by number of core hours.
Appendix B
| Information assurance;IT2005 volume;Pervasive Themes;BYU curriculum;NIETP Program;Training standards;In-service training development;Committee on National Security Systems;CITC-1;Information Technology;IT;CC2005;IA;SIGITE Curriculum committee;Education;IT2005;Security Knowledge;Information Assurance;IAS |
116 | Interactive Machine Learning | Perceptual user interfaces (PUIs) are an important part of ubiquitous computing. Creating such interfaces is difficult because of the image and signal processing knowledge required for creating classifiers. We propose an interactive machine-learning (IML) model that allows users to train, classify/view and correct the classifications. The concept and implementation details of IML are discussed and contrasted with classical machine learning models. Evaluations of two algorithms are also presented. We also briefly describe Image Processing with Crayons (Crayons), which is a tool for creating new camera-based interfaces using a simple painting metaphor. The Crayons tool embodies our notions of interactive machine learning. | INTRODUCTION
Perceptual user interfaces (PUIs) are establishing the need for
machine learning in interactive settings. PUIs like
VideoPlace [8], Light Widgets [3], and Light Table [15,16]
all use cameras as their perceptive medium. Other systems
use sensors other than cameras such as depth scanners and
infrared sensors [13,14,15]. All of these PUIs require
machine learning and computer vision techniques to create
some sort of a classifier. This classification component of the
UI often demands great effort and expense. Because most
developers have little knowledge of how to implement
recognition in their UIs, this becomes problematic. Even
those who do have this knowledge would benefit if the
classifier building expense were lessened. We suggest the
way to decrease this expense is through the use of a visual
image classifier generator, which would allow developers to
add intelligence to interfaces without forcing additional
programming. Similar to how Visual Basic allows simple
and fast development, this tool would allow for fast
integration of recognition or perception into a UI.
Implementation of such a tool, however, poses many
problems. First and foremost is the problem of rapidly
creating a satisfactory classifier. The simple solution is to use
behind-the-scenes machine learning and image processing.
Machine learning allows automatic creation of classifiers,
however, the classical models are generally slow to train, and
not interactive. The classical machine-learning (CML) model
is summarized in Figure 1. Prior to the training of the
classifier, features need to be selected. Training is then
performed "off-line" so that classification can be done
quickly and efficiently. In this model classification is
optimized at the expense of longer training time. Generally,
the classifier will run quickly so it can be done real-time. The
assumption is that training will be performed only once and
need not be interactive. Many machine-learning algorithms
are very sensitive to feature selection and suffer greatly if
there are very many features.
Figure 1 Classical machine learning model (pipeline: Feature Selection, then Train, then Classify, used during Interactive Use)
With CML, it is infeasible to create an interactive tool to
create classifiers. CML requires the user to choose the
features and wait an extended amount of time for the
algorithm to train. The selection of features is very
problematic for most interface designers. If one is designing
an interactive technique involving laser spot tracking, most
designers understand that the spot is generally red. They are
not prepared to deal with how to sort out this spot from red
clothing, camera noise or a variety of other problems. There
are well-known image processing features for handling these
problems, but very few interface designers would know how
to carefully select them in a way that the machine learning
algorithms could handle.
The current approach requires too much technical knowledge
on the part of the interface designer. What we would like to
do is replace the classical machine-learning model with the
interactive model shown in Figure 2. This interactive training
allows the classifier to be coached along until the desired
results are met. In this model the designer is correcting and
teaching the classifier and the classifier must perform the
appropriate feature selection.
Figure 2 Interactive machine learning (IML) model (Feature Selection, Train, Classify, Interactive Use, with Feedback To Designer and Manual Correction feeding back into training)
The pre-selection of features can be eliminated and
transferred to the learning part of the IML if the learning
algorithm used performs feature selection. This means that a
large repository of features is initially calculated and fed to
the learning algorithm so it can learn the best features for the
classification problem at hand. The idea is to feed a very
large number of features into the classifier training and let the
classifier do the filtering rather than the human. The human
designer then is focused on rapidly creating training data that
will correct the errors of the classifier.
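To make this loop concrete, here is a minimal sketch of the train-feedback-correct cycle (ours, not the Crayons implementation, which is written in Java; `train`, `classify`, `collect_corrections`, and `satisfied` are placeholder callables):

```python
# Hypothetical sketch of the IML train/feedback/correct loop described above.
# 'train' and 'classify' stand in for any fast learner (e.g., a decision tree);
# 'collect_corrections' represents the designer painting additional class pixels.

def iml_loop(features, labels, train, classify, collect_corrections, satisfied):
    """Iterate: train on current labels, show feedback, gather corrections."""
    while True:
        model = train(features, labels)          # must be fast (seconds, not hours)
        predictions = classify(model, features)  # feedback shown to the designer
        if satisfied(predictions):               # the designer decides when to stop
            return model
        # The designer paints over the classifier's mistakes; the new labels are
        # merged into the training data and the loop repeats.
        labels = {**labels, **collect_corrections(predictions)}
```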
In classical machine learning, algorithms are evaluated on
their inductive power. That is, how well the algorithm will
perform on new data based on the extrapolations made on the
training data. Good inductive power requires careful analysis
and a great deal of computing time. This time is frequently
exponential in the number of features to be considered. We
believe that using the IML model a simple visual tool can be
designed to build classifiers quickly. We hypothesize that
when using the IML, having a very fast training algorithm is
more important than strong induction. In place of careful
analysis of many feature combinations we provide much
more human input to correct errors as they appear. This
allows the interactive cycle to be iterated quickly so it can be
done more frequently.
The remainder of the paper is as follows. The next section
briefly discusses the visual tool we created using the IML
model, called Image Processing with Crayons (Crayons).
This is done to show one application of the IML model's
power and versatility. Following the explanation of Crayons,
we explore the details of the IML model by examining its
distinction from CML, the problems it must overcome, and its
implementation details. Finally we present some results from
some tests between two of the implemented machine learning
algorithms. From these results we base some preliminary
conclusions of IML as it relates to Crayons.
IMAGE PROCESSING WITH CRAYONS
Crayons is a system we created that uses IML to create image
classifiers. Crayons is intended to aid UI designers who do
not have detailed knowledge of image processing and
machine learning. It is also intended to accelerate the efforts
of more knowledgeable programmers.
There are two primary goals for the Crayons tool: 1) to allow
the user to create an image/pixel classifier quickly, and 2) to
allow the user to focus on the classification problem rather
than image processing or algorithms. Crayons is successful if
it takes minutes rather than weeks or months to create an
effective classifier. For simplicity's sake, we will refer to this
as the UI principle of fast and focused. This principle refers
to enabling the designer to quickly accomplish his/her task
while remaining focused solely on that task.
Figure 3 shows the Crayons design process. Images are input
into the Crayons system, which can then export the generated
classifier. It is assumed the user has already taken digital
pictures and saved them as files to import into the system, or
that a camera is set up on the machine running Crayons, so it
can capture images from it. Exporting the classifier is equally
trivial, since our implementation is written in Java. The
classifier object is simply serialized and output to a file using
the standard Java mechanisms.
Figure 3 Classifier Design Process
An overview of the internal architecture of Crayons is shown
in Figure 4. Crayons receives images upon which the user
does some manual classification, a classifier is created, then
feedback is displayed. The user can then refine the classifier
by adding more manual classification or, if the classifier is
satisfactory, the user can export the classifier. The internal
loop shown in Figure 4 directly correlates to the
aforementioned train, feedback, correct cycle of the IML (see
Figure 2). To accomplish the fast and focused UI principle,
this loop must be easy and quick to cycle through. To be
interactive the training part of the loop must take less than
five seconds and generally much faster. The cycle can be
broken down into two components: the UI and the Classifier.
The UI component needs to be simple so the user can remain
focused on the classification problem at hand. The classifier
creation needs to be fast and efficient so the user gets
feedback as quickly as possible, so they are not distracted
from the classification problem.
Figure 4 The classification design loop
Although the IML and the machine-learning component of
Crayons are the primary focus of this paper, it is worth noting
that Crayons has profited from work done by
Viola and Jones [19] and Jaimes and Chang [5,6,7]. Also a
brief example of how Crayons can be used is illustrative. The
sequence of images in Figure 5 shows the process of creating
a classifier using Crayons.
Figure 5 Crayons interaction process
Figure 5 illustrates how the user initially paints very little
data, views the feedback provided by the resulting classifier,
corrects by painting additional class pixels and then iterates
through the cycle. As seen in the first image pair in Figure 5,
only a little data can generate a classifier that roughly learns
skin and background. The classifier, however, over-generalizes
in favor of background; therefore, in the second
image pair you can see skin has been painted where the
classifier previously did poorly at classifying skin. The
resulting classifier shown on the right of the second image
pair shows the new classifier classifying most of the skin on
the hand, but also classifying some of the background as skin.
The classifier is corrected again, and the resulting classifier is
shown as the third image pair in the sequence. Thus, in only
a few iterations, a skin classifier is created.
The simplicity of the example above shows the power that
Crayons has due to the effectiveness of the IML model. The
key issue in the creation of such a tool lies in quickly
generating effective classifiers so the interactive design loop
can be utilized.
MACHINE LEARNING
For the IML model to function, the classifier must be
generated quickly and be able to generalize well. As such we
will first discuss the distinctions between IML and CML,
followed by the problems IML must overcome because of its
interactive setting, and lastly its implementation details
including specific algorithms.
CML vs IML
Classical machine learning generally has the following
assumptions.
There are relatively few carefully chosen features,
There is limited training data,
The classifier must amplify that limited training data
into excellent performance on new training data,
Time to train the classifier is relatively unimportant as
long as it does not take too many days.
None of these assumptions hold in our interactive situation.
Our UI designers have no idea what features will be
appropriate. In fact, we are trying to insulate them from
knowing such things. In our current Crayons prototype there
are more than 150 features per pixel. To reach the breadth of
application that we desire for Crayons we project over 1,000
features will be necessary. The additional features will handle
texture, shape and motion over time. For any given problem
somewhere between three and fifteen of those features will
actually be used, but the classifier algorithm must
automatically make this selection. The classifier we choose
must therefore be able to accommodate such a large number
of features, and/or select only the best features.
In Crayons, when a designer begins to paint classes on an
image a very large number of training examples is quickly
generated. With 77K pixels per image and 20 images one
can rapidly generate over a million training examples. In
practice, the number stays in the 100K examples range
because designers only paint pixels that they need to correct
rather than all pixels in the image. What this means,
however, is that designers can generate a huge amount of
training data very quickly. CML generally focuses on the
ability of a classifier to predict correct behavior on new data.
In IML, however, if the classifier's predictions for new data
are wrong, the designer can rapidly make those corrections.
By rapid feedback and correction the classifier is quickly (in
a matter of minutes) focused onto the desired behavior. The
goal of the classifier is not to predict the designer's intent into
new situations but rapidly reflect intent as expressed in
concrete examples.
Because additional training examples can be added so
readily, IML's bias differs greatly from that of CML.
Because it extrapolates a little data to create a classifier that
will be frequently used in the future, CML is very concerned
about overfit. Overfit is where the trained classifier adheres
too closely to the training data rather than deducing general
principles. Cross-validation and other measures are generally
taken to minimize overfit. These measures add substantially
to the training time for CML algorithms. IML's bias is to
include the human in the loop by facilitating rapid correction
of mistakes. Overfit can easily occur, but it is also readily
perceived by the designer and instantly corrected by the
addition of new training data in exactly the areas that are most
problematic. This is shown clearly in Figure 5 where a
designer rapidly provides new data in the edges of the hand
where the generalization failed.
Our interactive classification loop requires that the classifier
training be very fast. To be effective, the classifier must be
generated from the training examples in under five seconds.
If the classifier takes minutes or hours, the process of
`train-feedback-correct' is no longer interactive, and much less
effective as a design tool. Training on 100,000 examples
with 150 features each in less than five seconds is a serious
challenge for most CML algorithms.
Lastly, for this tool to be viable the final classifier will need
to be able to classify 320 x 240 images in less than a fourth of
a second. If the resulting classifier is much slower than this it
becomes impossible to use it to track interactive behavior in a
meaningful way.
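As a back-of-the-envelope check (our arithmetic, not a figure from the paper), that requirement translates to roughly 300,000 pixel classifications per second:

```python
pixels_per_frame = 320 * 240             # 76,800 pixels per image
time_budget_s = 0.25                     # a fourth of a second per image
print(pixels_per_frame / time_budget_s)  # 307200.0 pixel classifications per second
```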
IML
Throughout our discussion thus far, many requirements for
the machine-learning algorithm in IML have been made. The
machine-learning algorithm must:
learn/train very quickly,
accommodate 100s to 1000s of features,
perform feature selection,
allow for tens to hundreds of thousands of training
examples.
These requirements put firm bounds on what kind of a
learning algorithm can be used in IML. They invoke the
fundamental question of which machine-learning algorithm
fits all of these criteria. We discuss several options and the
reason why they are not viable before we settle on our
algorithm of choice: decision trees (DT).
Neural Networks [12] are a powerful and often used
machine-learning algorithm. They can provably approximate
any function in two layers. Their strength lies in their
abilities to intelligently integrate a variety of features. Neural
networks also produce relatively small and efficient
classifiers; however, they are not feasible in IML. The
number of features used in systems like Crayons along with
the number of hidden nodes required to produce the kinds of
classifications that are necessary completely overpowers this
algorithm. Even more debilitating is the training time for
neural networks. The time this algorithm takes to converge is
far too long for interactive use. For 150 features this can take
hours or days.
The nearest-neighbor algorithm [1] is easy to train but not
very effective. Besides not being able to discriminate
amongst features, nearest-neighbor has serious problems in
high dimensional feature spaces of the kind needed in IML
and Crayons. Nearest-neighbor generally has a classification
time that is linear in the number of training examples which
also makes it unacceptably slow.
There are yet other algorithms such as boosting that do well
with feature selection, which is a desirable characteristic.
While boosting has shown itself to be very effective on tasks
such as face tracking [18], its lengthy training time is
prohibitive for interactive use in Crayons.
There are many more machine-learning algorithms, however,
this discussion is sufficient to preface to our decision of the
use of decision trees. All the algorithms discussed above
suffer from the curse of dimensionality. When many features
are used (100s to 1000s), their creation and execution times
dramatically increase. In addition, the number of training
examples required to adequately cover such high dimensional
feature spaces would far exceed what designers can produce.
With just one decision per feature the size of the example set
must approach 2^100, which is completely unacceptable. We
need a classifier that rapidly discards features and focuses on
the 1-10 features that characterize a particular problem.
Decision trees [10] have many appealing properties that
coincide with the requirements of IML. First and foremost is
that the DT algorithm is fundamentally a process of feature
selection. The algorithm operates by examining each feature
and selecting a decision point for dividing the range of that
feature. It then computes the "impurity" of the result of
dividing the training examples at that decision point. One can
think of impurity as measuring the amount of confusion in a
given set. A set of examples that all belong to one class
would be pure (zero impurity). There are a variety of
possible impurity measures [2]. The feature whose partition
yields the least impurity is the one chosen, the set is divided
and the algorithm applied recursively to the divided subsets.
Features that do not provide discrimination between classes
are quickly discarded. The simplicity of DTs also provides
many implementation advantages in terms of speed and space
of the resulting classifier.
Quinlan's original DT algorithm [10] worked only on
features that were discrete (a small number of choices). Our
image features do not have that property. Most of our
features are continuous real values. Many extensions of the
original DT algorithm, ID3, have been made to allow use of
real-valued data [4,11]. All of these algorithms either
discretize the data or by selecting a threshold T for a given
feature F divide the training examples into two sets where
F<T and F>=T. The trick is for each feature to select a value
T that gives the lowest impurity (best classification
improvement). The selection of T from a large number of
features and a large number of training examples is very slow
to do correctly.
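As an illustration of this threshold search, the following sketch (ours, not the authors' code) scans candidate split points for one continuous feature and keeps the threshold with the lowest size-weighted impurity, using Gini impurity as one possible impurity measure:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 0 for a pure set, larger for more mixed sets."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_threshold(values, labels):
    """Return (threshold, impurity) minimizing the size-weighted impurity of
    the two sides F < T and F >= T, scanning splits between sorted values."""
    pairs = sorted(zip(values, labels))
    best_t, best_imp = None, float("inf")
    for k in range(1, len(pairs)):
        if pairs[k - 1][0] == pairs[k][0]:
            continue  # no usable split point between equal feature values
        t = (pairs[k - 1][0] + pairs[k][0]) / 2.0
        left = [l for _, l in pairs[:k]]
        right = [l for _, l in pairs[k:]]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        if imp < best_imp:
            best_t, best_imp = t, imp
    return best_t, best_imp
```

This is the exhaustive scan the text describes as correct but slow; the algorithms discussed next trade some of this care for speed.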
We have implemented two algorithms, which employ
different division techniques. These two algorithms also
represent the two approaches of longer training time with
better generalization vs. shorter training time with poorer
generalization. The first strategy slightly reduces interactivity
and relies more on learning performance. The second relies
on speed and interactivity. The two strategies are Center
Weighted (CW) and Mean Split (MS).
Our first DT attempt was to order all of the training examples
for each feature and step through all of the examples
calculating the impurity as if the division was between each
of the examples. This yielded a minimum impurity split,
however, this generally provided a best split close to the
beginning or end of the list of examples, still leaving a large
number of examples in one of the divisions. Divisions of this
nature yield deeper and more unbalanced trees, which
correlate to slower classification times. To improve this
algorithm, we developed Center Weighted (CW), which does
the same as above, except that it more heavily weights central
splits (more equal divisions). By ensuring that the split
threshold is generally in the middle of the feature range, the
resulting tree tends to be more balanced and the sizes of the
training sets to be examined at each level of the tree drops
exponentially.
CW DTs do, however, suffer from an initial sort of all
training examples for each feature, resulting in an O(f * N log
N) cost up front, where f is the number of features and N the
number of training examples. Since in IML, we assume that
both f and N are large, this can be extremely costly.
Because of the extreme initial cost of sorting all N training
examples f times, we have extended Center Weighted with
CWSS. The `SS' stands for sub-sampled. Since the iteration
through training examples is purely to find a good split, we
can sample the examples to find a statistically sound split.
For example, say N is 100,000, if we sample 1,000 of the
original N, sort those and calculate the best split then our
initial sort is 100 times faster. It is obvious that a better
threshold could be computed using all of the training data, but
this is mitigated by the fact that those data items will still be
considered in lower levels of the tree. When a split decision
is made, all of the training examples are split, not just the
sub-sample. The sub-sampling means that each node's split
decision is never greater than O(f*1000*5), but that
eventually all training data will be considered.
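A rough sketch of the sub-sampling idea behind CWSS follows (our reading; the exact center-weighting function is not spelled out in the text, so the balance penalty below is purely illustrative; `gini` is the helper from the previous sketch):

```python
import random

def cwss_threshold(values, labels, sample_size=1000):
    """Pick a split threshold from a random sub-sample of the examples,
    scoring candidate splits by impurity plus a penalty for unbalanced splits."""
    idx = list(range(len(values)))
    if len(idx) > sample_size:
        idx = random.sample(idx, sample_size)   # statistically sound sub-sample
    pairs = sorted((values[i], labels[i]) for i in idx)
    n = len(pairs)
    best_t, best_score = None, float("inf")
    for k in range(1, n):
        if pairs[k - 1][0] == pairs[k][0]:
            continue
        left = [l for _, l in pairs[:k]]
        right = [l for _, l in pairs[k:]]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / n
        balance_penalty = abs(n / 2 - k) / n    # illustrative center weighting
        score = imp + balance_penalty           # lower is better
        if score < best_score:
            best_t = (pairs[k - 1][0] + pairs[k][0]) / 2.0
            best_score = score
    return best_t
```

Note that only the threshold search runs on the sub-sample; as the text says, the actual split at each node is applied to all training examples.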
Quinlan used a sampling technique called "windowing".
Windowing initially used a small sample of training examples
and increased the number of training examples used to create
the DT, until all of the original examples were classified
correctly [11]. Our technique, although similar, differs in that
the number of samples is fixed. At each node in the DT a
new sample of fixed size is drawn, allowing misclassified
examples in a higher level of the DT to be considered at a
lower level.
The use of sub-sampling in CWSS produced very slight
differences in classification accuracy as compared to CW, but
reduced training time by a factor of at least two (for training
sets with N ≥ 5,000). This factor, however, will continue to
grow as N increases. (For N = 40,000 CWSS is
approximately 5 times faster than CW; 8 times faster for N = 80,000.)
The CW and CWSS algorithms spend considerable
computing resources in trying to choose a threshold value for
each feature. The Mean Split (MS) algorithm spends very
little time on such decisions and relies on large amounts of
training data to correct decisions at lower levels of the tree.
The MS algorithm uses T=mean(F) as the threshold for
dividing each feature F and compares the impurities of the
divisions of all features. This is very efficient and produces
relatively shallow decision trees by generally dividing the
training set in half at each decision point. Mean split,
however, does not ensure that the division will necessarily
divide the examples at points that are meaningful to correct
classification. Successive splits at lower levels of the tree
will eventually correctly classify the training data, but may
not generalize as well.
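For contrast, the mean-split rule requires almost no search; a sketch (again reusing the `gini` helper from the earlier sketch, with names of our choosing):

```python
def ms_best_feature(feature_columns, labels):
    """Mean Split: for each feature use T = mean(F) as the threshold and
    choose the feature whose mean-threshold split yields the lowest impurity."""
    best = None
    for f, col in enumerate(feature_columns):
        t = sum(col) / len(col)                  # T = mean(F)
        left = [l for v, l in zip(col, labels) if v < t]
        right = [l for v, l in zip(col, labels) if v >= t]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if best is None or imp < best[2]:
            best = (f, t, imp)
    return best  # (feature index, threshold, impurity)
```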
The resulting MS decision trees are not as good as those
produced by more careful means such as CW or CWSS.
However, we hypothesized, that the speedup in classification
would improve interactivity and thus reduce the time for
designers to train a classifier. We believe designers make up
for the lower quality of the decision tree with the ability to
correct more rapidly. The key is in optimizing designer
judgment rather than classifier predictions. MSSS is a sub-sampled
version of MS in the same manner as CWSS. In
MSSS, since we just evaluate the impurity at the mean, and
since the mean is a simple statistical value, the resulting
divisions are generally identical to those of straight MS.
As a parenthetical note, another important bottleneck that is
common to all of the classifiers is the necessity to calculate
all features initially to create the classifier. We made the
assumption in IML that all features are pre-calculated and
that the learning part will find the distinguishing features.
Although, this can be optimized so it is faster, all algorithms
will suffer from this bottleneck.
There are many differences between the performances of
each of the algorithms. The most important is that the CW
algorithms train slower than the MS algorithms, but tend to
create better classifiers. Other differences are of note though.
For example, the sub sampled versions, CWSS and MSSS,
generally allowed the classifiers to be generated faster. More
specifically, CWSS was usually twice as fast as CW, as was
MSSS compared to MS.
Because of the gains in speed and lack of loss of
classification power, only CWSS and MSSS will be used for
comparisons. The critical comparison is to see which
algorithm allows the user to create a satisfactory classifier the
fastest. User tests comparing these algorithms are outlined
and presented in the next section.
EVALUATIONS
User tests were conducted to evaluate the differences between
CWSS and MSSS. When creating a new perceptual interface
it is not classification time that is the real issue. The
important issue is designer time. As stated before,
classification creation time for CWSS is longer than MSSS,
but the center-weighted algorithms tend to generalize better
than the mean split algorithms. The CWSS generally takes 1-10
seconds to train on training sets of 10,000-60,000
examples, while MSSS is approximately twice as fast on the
same training sets. These differences are important; as our
hypothesis was that faster classifier creation times can
overcome poorer inductive strength and thus reduce overall
designer time.
To test the difference between CWSS and MSSS we used
three key measurements: wall clock time to create the
classifier, number of classify/correct iterations, and structure
of the resulting tree (depth and number of nodes). The latter
of these three corresponds to the amount of time the classifier
takes to classify an image in actual usage.
In order to test the amount of time a designer takes to create a
good classifier, we need a standard to define "good
classifier". A "gold standard" was created for four different
classification problems: skin-detection, paper card tracking,
robot car tracking and laser tracking. These gold standards
were created by carefully classifying pixels until, in human
judgment, the best possible classification was being
performed on the test images for each problem. The resulting
classifier was then saved as a standard.
Ten total test subjects were used and divided into two groups.
The first five did each task using the CWSS followed by the
MSSS and the remaining five MSSS followed by CWSS.
The users were given each of the problems in turn and asked
to build a classifier. Each time the subject requested a
classifier to be built that classifier's performance was
measured against the performance of the standard classifier
for that task. When the subject's classifier agreed with the
standard on more than 97.5% of the pixels, the test was
declared complete.
Table 1, shows the average times and iterations for the first
group, Table 2, the second group.
             CWSS               MSSS
Problem      Time   Iterations  Time   Iterations
Skin         03:06  4.4         10:35  12.6
Paper Cards  02:29  4.2         02:23  5.0
Robot Car    00:50  1.6         01:00  1.6
Laser        00:46  1.2         00:52  1.4
Table 1 CWSS followed by MSSS
             MSSS               CWSS
Problem      Time   Iterations  Time   Iterations
Skin         10:26  11.4        03:51  3.6
Paper Cards  04:02  5.0         02:37  2.6
Robot Car    01:48  1.2         01:37  1.2
Laser        01:29  1.0         01:16  1.0
Table 2 MSSS followed by CWSS
The laser tracker is a relatively simple classifier because of
the uniqueness of bright red spots [9]. The robot car was
contrasted with a uniform colored carpet and was similarly
straightforward. Identifying colored paper cards against a
cluttered background was more difficult because of the
diversity of the background. The skin tracker is the hardest
because of the diversity of skin color, camera over-saturation
problems and cluttered background [20].
As can be seen in tables 1 and 2, MSSS takes substantially
more designer effort on the hard problems than CWSS. All
subjects specifically stated that CWSS was "faster" than
MSSS especially in the Skin case. (Some did not notice a
difference between the two algorithms while working on the
other problems.) We did not test any of the slower
algorithms such as neural nets or nearest-neighbor.
Interactively these are so poor that the results are self-evident.
We also did not test the full CW algorithm. Its classifier
creation times tend into minutes and clearly could not compete with
the times shown in tables 1 and 2. It is clear from our
evaluations that a classification algorithm must get under the
10-20 second barrier in producing a new classification, but
that once under that barrier, the designer's time begins to
dominate. Once the designer's time begins to dominate the
total time, then the classifier with better generalization wins.
We also mentioned the importance of the tree structure as it
relates to the classification time of an image. Table 3 shows
the average tree structures (tree depth and number of nodes)
as well as the average classification time (ACT) in
milliseconds over the set of test images.
             CWSS                     MSSS
Problem      Depth  Nodes  ACT        Depth  Nodes  ACT
Skin         16.20  577    243        25.60  12530  375
Paper Cards  15.10  1661   201        16.20  2389   329
Car          13.60  1689   235        15.70  2859   317
Laser        13.00  4860   110        8.20   513    171
Table 3 Tree structures and average classify time (ACT)
As seen in Table 3, depth, number of nodes and ACT were
all lower in CWSS than in MSSS. This was predicted as
CWSS provides better divisions between the training
examples.
While testing we observed that those who used MSSS (which is
fast but less accurate) first ended up using more training data,
even when they later used CWSS, which usually generalizes better
and needs less data. Those who used CWSS first were pleased with
the interactivity of CWSS and became very frustrated when they
used MSSS, even though it could cycle faster through the
interactive loop. In actuality, because of the poor generalization
of the mean split algorithm, even though the classifier generation
time for MSSS was quicker than CWSS, the users felt it necessary
to paint more using MSSS, so the overall time increased using MSSS.
CONCLUSION
When using machine learning in an interactive design setting,
feature selection must be automatic rather than manual and
classifier training-time must be relatively fast. Decision Trees
using a sub-sampling technique to improve training times are
very effective for both of these purposes. Once interactive
speeds are achieved, however, the quality of the classifier's
generalization becomes important. Using tools like Crayons,
demonstrates that machine learning can form an appropriate
basis for the design tools needed to create new perceptual
user interfaces.
REFERENCES
1. Cover, T., and Hart, P. "Nearest Neighbor Pattern
Classification." IEEE Transactions on Information
Theory, 13, (1967) 21-27.
2. Duda, R. O., Hart, P. E., and Stork, D. G., Pattern
Classification. (2001).
3. Fails, J.A., Olsen, D.R. "LightWidgets: Interacting in
Everyday Spaces." Proceedings of IUI '02 (San
Francisco CA, January 2002).
4. Fayyad, U.M. and Irani, K. B. "On the Handling of
Continuous-valued Attributes in Decision Tree
Generation." Machine Learning, 8, 87-102,(1992).
5. Jaimes, A. and Chang, S.-F. "A Conceptual Framework
for Indexing Visual Information at Multiple Levels."
IS&T/SPIE Internet Imaging 2000, (San Jose CA,
January 2000).
6. Jaimes, A. and Chang, S.-F. "Automatic Selection of
Visual Features and Classifier." Storage and Retrieval
for Image and Video Databases VIII, IS&T/SPIE (San
Jose CA, January 2000).
7. Jaimes, A. and Chang, S.-F. "Integrating Multiple
Classifiers in Visual Object Detectors Learned from User
Input." Invited paper, session on Image and Video
Databases, 4th Asian Conference on Computer Vision
(ACCV 2000), Taipei, Taiwan, January 8-11, 2000.
8. Krueger, M. W., Gionfriddo. T., and Hinrichsen, K.,
"VIDEOPLACE -- an artificial reality". Human Factors
in Computing Systems, CHI '85 Conference Proceedings,
ACM Press, 1985, 35-40.
9. Olsen, D.R., Nielsen, T. "Laser Pointer Interaction."
Proceedings of CHI '01 (Seattle WA, March 2001).
10. Quinlan, J. R. "Induction of Decision Trees." Machine
Learning, 1(1); 81-106, (1986).
11. Quinlan, J. R. "C4.5: Programs for machine learning."
Morgan Kaufmann, San Mateo, CA, 1993.
12. Rumelhart, D., Widrow, B., and Lehr, M. "The Basic
Ideas in Neural Networks." Communications of the ACM,
37(3), (1994), pp 87-92.
13. Schmidt, A. "Implicit Human Computer Interaction
Through Context." Personal Technologies, Vol 4(2),
June 2000.
14. Starner, T., Auxier, J. and Ashbrook, D. "The Gesture
Pendant: A Self-illuminating, Wearable, Infrared
Computer Vision System for Home Automation Control
and Medical Monitoring." International Symposium on
Wearable Computing (Atlanta GA, October 2000).
15. Triggs, B. "Model-based Sonar Localisation for Mobile
Robots." Intelligent Robotic Systems '93, Zakopane,
Poland, 1993.
16. Underkoffler, J. and Ishii H. "Illuminating Light: An
Optical Design Tool with a Luminous-Tangible
Interface." Proceedings of CHI '98 (Los Angeles CA,
April 1998).
17. Underkoffler, J., Ullmer, B. and Ishii, H. "Emancipated
Pixels: Real-World Graphics in the Luminous Room."
Proceedings of SIGGRAPH '99 (Los Angeles CA, 1999),
ACM Press, 385-392.
18. Vailaya, A., Zhong, Y., and Jain, A. K. "A hierarchical
system for efficient image retrieval." In Proc. Int. Conf.
on Patt. Recog. (August 1996).
19. Viola, P. and Jones, M. "Robust real-time object
detection." Technical Report 2001/01, Compaq CRL,
February 2001.
20. Yang, M.H. and Ahuja, N. "Gaussian Mixture Model for
Human Skin Color and Its Application in Image and
Video Databases." Proceedings of SPIE '99 (San Jose
CA, Jan 1999), 458-466.
| classification;Perceptual interface;image processing;perceptive user interfaces;Perceptual user iinterfaces;Machine learning;image/pixel classifier;Predict correct behaviour;Classification design loop;Interactive machine learning;interaction;Crayons prototype;Image processing with crayons;Crayons design process;Classical machine learning |
117 | Interestingness of Frequent Itemsets Using Bayesian Networks as Background Knowledge | The paper presents a method for pruning frequent itemsets based on background knowledge represented by a Bayesian network. The interestingness of an itemset is defined as the absolute difference between its support estimated from data and from the Bayesian network. Efficient algorithms are presented for finding interestingness of a collection of frequent itemsets, and for finding all attribute sets with a given minimum interestingness. Practical usefulness of the algorithms and their efficiency have been verified experimentally. | INTRODUCTION
Finding frequent itemsets and association rules in database
tables has been an active research area in recent years.
Unfortunately, the practical usefulness of the approach is
limited by huge number of patterns usually discovered. For
larger databases many thousands of association rules may
be produced when minimum support is low. This creates
a secondary data mining problem: after mining the data,
we are now compelled to mine the discovered patterns. The
problem has been addressed in literature mainly in the context
of association rules, where the two main approaches are
sorting rules based on some interestingness measure, and
pruning aiming at removing redundant rules.
Full review of such methods is beyond the scope of this
paper. Overviews of interestingness measures can be found
for example in [3, 13, 11, 32], some of the papers on rule
pruning are [30, 31, 7, 14, 28, 16, 17, 33].
Many interestingness measures are based on the divergence
between true probability distributions and distributions
obtained under the independence assumption. Pruning
methods are usually based on comparing the confidence
of a rule to the confidence of rules related to it.
The main drawback of those methods is that they tend
to generate rules that are either obvious or have already
been known by the user. This is to be expected, since the
most striking patterns which those methods select can also
easily be discovered using traditional methods or are known
directly from experience.
We believe that the proper way to address the problem
is to include users background knowledge in the process.
The patterns which diverge the most from that background
knowledge are deemed most interesting. Discovered patterns
can later be applied to improve the background knowledge
itself.
Many approaches to using background knowledge in machine
learning are focused on using background knowledge
to speed up the hypothesis discovery process and not on discovering
interesting patterns. Those methods often assume
strict logical relationships, not probabilistic ones. Examples
are knowledge based neural networks (KBANNs) and uses
of background knowledge in Inductive Logic Programming.
See Chapter 12 in [20] for an overview of those methods and
a list of further references.
Tuzhilin et al. [23, 22, 29] worked on applying background
knowledge to finding interesting rules. In [29, 22] interestingness
measures are presented, which take into account
prior beliefs; in another paper [23], the authors present an
algorithm for selecting a minimum set of interesting rules
given background knowledge. The methods used in those
papers are local, that is, they don't use a full joint probability
of the data. Instead, interestingness of a rule is evaluated
using rules in the background knowledge with the same consequent
. If no such knowledge is present for a given rule, the
rule is considered uninteresting. This makes it impossible to
take into account transitivity. Indeed, in the presence of the
background knowledge represented by the rules A → B and
B → C, the rule A → C is uninteresting. However, this cannot
be discovered locally. See [25] for a detailed discussion
of advantages of global versus local methods. Some more
comparisons can be found in [18].
In this paper we present a method of finding interesting
patterns using background knowledge represented by a
Bayesian network. The main advantage of Bayesian networks
is that they concisely represent full joint probability
distributions, and allow for practically feasible probabilistic
inference from those distributions [25, 15]. Other advantages
include the ability to represent causal relationships, easy to
understand graphical structure, as well as wide availability
of modelling tools. Bayesian networks are also easy to modify
by adding or deleting edges.
We opt to compute interestingness of frequent itemsets
instead of association rules, agreeing with [7] that directions
of dependence should be decided by the user based on her
experience and not suggested by interestingness measures.
Our approach works by estimating supports of itemsets from
Bayesian networks and comparing thus estimated supports
with the data. Itemsets with strongly diverging supports
are considered interesting.
Further definitions of interestingness exploiting Bayesian
network's structure are presented, as well as efficient methods
for computing interestingness of large numbers of itemsets
and for finding all attribute sets with given minimum
interestingness.
There are some analogies between mining emerging patterns
[6] and our approach, the main differences being that
in our case a Bayesian network is used instead of a second
dataset, and that we use a different measure for comparing
supports. Due to those differences our problem requires a
different approach and a different set of algorithms.
DEFINITIONS AND NOTATION
Database attributes will be denoted with uppercase letters
A, B, C, . . ., domain of an attribute A will be denoted
by Dom(A). In this paper we are only concerned with categorical
attributes, that is attributes with finite domains.
Sets of attributes will be denoted with uppercase letters
I, J, . . .. We often use database notation for representing
sets of attributes, i.e. I = A_1 A_2 ... A_k instead of the set theoretical
notation {A_1, A_2, ..., A_k}. The domain of an attribute set I = A_1 A_2 ... A_k is defined as
Dom(I) = Dom(A_1) × Dom(A_2) × ... × Dom(A_k).
Values from domains of attributes and attribute sets are denoted
with corresponding lowercase boldface letters, e.g. i ∈ Dom(I).
Let P_I denote a joint probability distribution of the attribute
set I. Similarly let P_{I|J} be a distribution of I conditioned
on J. When used in arithmetic operations such distributions will
be treated as functions of the attributes in I and I ∪ J
respectively, with values in the interval [0, 1]. For example
P_I(i) denotes the probability that I = i.
Let P_I be a probability distribution, and let J ⊆ I. Denote
by P^{J}_I the marginalization of P_I onto J, that is
    P^{J}_I = Σ_{I \ J} P_I,        (1)
where the summation is over the domains of all variables
from I \ J.
Probability distributions estimated from data will be denoted
by adding a hat symbol, e.g. \hat{P}_I.
An itemset is a pair (I, i), where I is an attribute set and
i ∈ Dom(I). The support of an itemset (I, i) is defined as
    supp(I, i) = \hat{P}_I(i),
where the probability is estimated from some dataset. An
itemset is called frequent if its support is greater than or
equal to some user defined threshold minsupp. Finding all
frequent itemsets in a given database table is a well known
datamining problem [1].
A Bayesian network BN over a set of attributes H = A_1 ... A_n
is a directed acyclic graph BN = (V, E) with the set of vertices
V = {V_{A_1}, ..., V_{A_n}} corresponding to attributes of H, and
a set of edges E ⊆ V × V, where each vertex V_{A_i} has an
associated conditional probability distribution P_{A_i|par_i},
where par_i = {A_j : (V_{A_j}, V_{A_i}) ∈ E} is the set of
attributes corresponding to the parents of V_{A_i} in BN. See [25,
15] for a detailed discussion of Bayesian networks.
A Bayesian network BN over H uniquely defines a joint
probability distribution
    P^{BN}_H = Π_{i=1}^{n} P_{A_i|par_i}
of H. For I ⊆ H the distribution over I marginalized from
P^{BN}_H will be denoted by P^{BN}_I:
    P^{BN}_I = (P^{BN}_H)^{I}.
INTERESTINGNESS OF AN ATTRIBUTE SET WITH RESPECT TO A BAYESIAN NETWORK
Let us first define the support of an itemset (I, i) in a
Bayesian network BN as
    supp_BN(I, i) = P^{BN}_I(i).
Let BN be a Bayesian network over an attribute set H,
and let (I, i) be an itemset such that I ⊆ H. The interestingness
of the itemset (I, i) with respect to BN is defined as
    I_BN(I, i) = |supp(I, i) - supp_BN(I, i)|,
that is, the absolute difference between the support of the
itemset estimated from data and the estimate of this support
made from the Bayesian network BN. In the remaining
part of the paper we assume that interestingness is always
computed with respect to a Bayesian network BN and the
subscript is omitted.
An itemset is ε-interesting if its interestingness is greater
than or equal to some user-specified threshold ε.
A frequent interesting itemset represents a frequently occurring
(due to minimum support requirement) pattern in
the database whose probability is significantly different from
what it is believed to be based on the Bayesian network
model.
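A minimal sketch of this measure (ours, not the authors' code; `bn_marginal` stands for whatever routine computes marginal probabilities from the network, as discussed in Section 4):

```python
def interestingness(itemset, data_support, bn_marginal):
    """|supp(I, i) - supp_BN(I, i)| for a single itemset (I, i).

    data_support: support of (I, i) estimated from the data.
    bn_marginal:  callable returning the probability of i under the marginal
                  distribution of I computed from the Bayesian network.
    """
    I, i = itemset
    return abs(data_support - bn_marginal(I, i))

def epsilon_interesting(itemsets, supports, bn_marginal, eps):
    """Return the itemsets whose interestingness is at least eps."""
    return [(I, i) for (I, i), s in zip(itemsets, supports)
            if abs(s - bn_marginal(I, i)) >= eps]
```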
An alternative would be to use supp(I, i)/supp_BN(I, i)
as the measure of interestingness [6]. We decided to use
absolute difference instead of a quotient since we found it to
be more robust, especially when both supports are small.
One could think of applying our approach to association
rules with the difference in confidences as a measure of interestingness
but, as mentioned in the Introduction, we think
that patterns which do not suggest a direction of influence
are more appropriate.
Since in Bayesian networks dependencies are modelled using
attributes not itemsets, it will often be easier to talk
about interesting attribute sets, especially when the discovered
interesting patterns are to be used to update the background
knowledge.
Definition 3.1. Let I be an attribute set. The interestingness
of I is defined as
    I(I) = max_{i ∈ Dom(I)} I(I, i),        (2)
analogously, I is ε-interesting if I(I) ≥ ε.
An alternative approach would be to use generalizations
of Bayesian networks allowing dependencies to vary for different
values of attributes, see [27], and deal with itemset
interestingness directly.
3.1
Extensions to the Definition of Interestingness
Even though applying the above definition and sorting
attribute sets on their interestingness works well in practice,
there might still be a large number of patterns retained,
especially if the background knowledge is not well developed
and large number of attribute sets have high interestingness
values. This motivates the following two definitions.
Definition 3.2. An attribute set I is hierarchically interesting
if it is ε-interesting and none of its proper subsets
is ε-interesting.
The idea is to prevent large attribute sets from becoming
interesting when the true cause of them being interesting
lies in their subsets.
There is also another problem with Definition 3.1. Consider
a Bayesian network
A → B
where nodes A and B have respective probability distributions
P_A and P_{B|A} attached. Suppose also that A is interesting.
In this case even if P_{B|A} is the same as \hat{P}_{B|A},
attribute sets B and AB may be considered ε-interesting.
Below we present a definition of interestingness aiming at
preventing such situations.
A vertex V is an ancestor of a vertex W in a directed graph
G if there is a directed path from V to W in G. The set of
ancestors of a vertex V in a graph G is denoted by anc(V ).
Moreover, let us denote by anc(I) the set of all ancestor
attributes in BN of an attribute set I. More formally:
anc(I) = {A_i ∉ I : V_{A_i} ∈ anc(V_{A_j}) in BN, for some A_j ∈ I}.
Definition 3.3. An attribute set I is topologically
ε-interesting if it is ε-interesting, and there is no attribute
set J such that
1. J ⊆ anc(I) ∪ I, and
2. I ⊄ J, and
3. J is ε-interesting.
The intention here is to prevent interesting attribute sets
from causing all their successors in the Bayesian network
(and the supersets of their successors) to become interesting
in a cascading fashion.
To see why condition 2 is necessary consider a Bayesian
network
A ← X → B
Suppose that there is a dependency between A and B in the data
which makes AB ε-interesting. Now, however, ABX may
also become interesting (even if P_{A|X} and P_{B|X} are correct
in the network) and cause AB to be pruned. Condition 2
prevents AB from being pruned and ABX from becoming
interesting.
Notice that topological interestingness is stricter than hierarchical
interestingness. Indeed if J ⊊ I is ε-interesting,
then it satisfies all the above conditions, and makes I not
topologically ε-interesting.
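Assuming the ε-interesting attribute sets and an ancestor function are already available as Python sets, the pruning condition of Definition 3.3 can be checked directly; a sketch (ours, not the authors' code):

```python
def topologically_interesting(I, interesting_sets, ancestors):
    """Assuming I itself is eps-interesting, keep I unless some
    eps-interesting J satisfies J <= anc(I) | I and not (I <= J).

    I and the members of interesting_sets are frozensets of attribute names;
    ancestors(I) returns the set anc(I) of ancestor attributes of I in BN.
    """
    closure = ancestors(I) | I              # anc(I) union I
    for J in interesting_sets:
        if J == I:
            continue
        if J <= closure and not (I <= J):
            return False                    # I is 'explained' by J; prune it
    return True
```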
ALGORITHMS FOR FINDING INTERESTING ITEMSETS AND ATTRIBUTE SETS
In this section we present algorithms using the definition
of interestingness introduced in the previous section to select
interesting itemsets or attribute sets. We begin by describing
a procedure for computing marginal distributions for a
large collection of attribute sets from a Bayesian network.
4.1
Computing a Large Number of Marginal
Distributions from a Bayesian Network
Computing the interestingness of a large number of frequent
itemsets requires the computation of a large number of
marginal distributions from a Bayesian network. The problem
has been addressed in literature mainly in the context
of finding marginals for every attribute [25, 15], while here
we have to find marginals for multiple, overlapping sets of
attributes. The approach taken in this paper is outlined
below.
The problem of computing marginal distributions from a
Bayesian network is known to be NP-hard, nevertheless in
most cases the network structure can be exploited to speed
up the computations.
Here we use exact methods for computing the marginals.
Approximate methods like Gibbs sampling are an interesting
topic for future work.
Best known approaches to exact marginalizations are join
trees [12] and bucket elimination [5]. We chose bucket elimination
method which is easier to implement and according
to [5] as efficient as join tree based methods. Also, join
trees are mainly useful for computing marginals for single
attributes, and not for sets of attributes.
The bucket elimination method, which is based on the distributive
law, proceeds by first choosing a variable ordering
and then applying distributive law repeatedly to simplify the
summation. For example suppose that a joint distribution
of a Bayesian network over H = ABC is expressed as
P^{BN}_{ABC} = P_A P_{B|A} P_{C|A},
and we want to find P^{BN}_A. We need to compute the sum
    Σ_B Σ_C P_A P_{B|A} P_{C|A},
which can be rewritten as
    P_A (Σ_{b ∈ Dom(B)} P_{B|A}) (Σ_{c ∈ Dom(C)} P_{C|A}).
Assuming that domains of all attributes have size 3, computing
the first sum directly requires 12 additions and 18
multiplications, while the second sum requires only 4 additions
and 6 multiplications.
The expression is interpreted as a tree of buckets, each
bucket is either a single probability distribution, or a sum
over a single attribute taken over a product of its child buckets
in the tree. In the example above a special root bucket
without summation could be introduced for completeness.
In most cases the method significantly reduces the time
complexity of the problem. An important problem is choosing
the right variable ordering. Unfortunately that problem
is itself NP-hard. We thus adopt a heuristic which orders
variables according to the decreasing number of factors in
the product depending on each variable. A detailed discussion
of the method can be found in [5].
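To make the distributive-law rewriting concrete, the toy sketch below (with made-up uniform probabilities, not values from the paper) checks that pushing the sums inside the product gives the same marginal as the naive double sum:

```python
# Toy reproduction of the example: H = {A, B, C}, each with domain {0, 1, 2}.
# Conditional tables are keyed by (child value, parent value).
# The probabilities below are made up purely for illustration.
import itertools

dom = [0, 1, 2]
P_A = {a: 1.0 / 3 for a in dom}                             # P(A)
P_B_given_A = {(b, a): 1.0 / 3 for b in dom for a in dom}   # P(B|A)
P_C_given_A = {(c, a): 1.0 / 3 for c in dom for a in dom}   # P(C|A)

# Naive marginal: sum over B and C of P(A) P(B|A) P(C|A).
naive = {a: sum(P_A[a] * P_B_given_A[(b, a)] * P_C_given_A[(c, a)]
                for b, c in itertools.product(dom, dom)) for a in dom}

# Bucket-elimination style: push the sums inside the product first.
sum_B = {a: sum(P_B_given_A[(b, a)] for b in dom) for a in dom}
sum_C = {a: sum(P_C_given_A[(c, a)] for c in dom) for a in dom}
factored = {a: P_A[a] * sum_B[a] * sum_C[a] for a in dom}

assert all(abs(naive[a] - factored[a]) < 1e-12 for a in dom)
```

With memoization, the two inner dictionaries (sum_B and sum_C) are exactly the partial sums that would be remembered and reused.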
Although bucket elimination can be used to obtain supports
of itemsets directly (i.e. P
I
(i)), we use it to obtain
complete marginal distributions. This way we can directly
apply marginalization to obtain distributions for subsets of
I (see below).
Since bucket elimination is performed repeatedly we use
memoization to speed it up, as suggested in [21]. We remember
each partial sum and reuse it if possible. In the
example above Σ_{b ∈ Dom(B)} P_{B|A}, Σ_{c ∈ Dom(C)} P_{C|A}, and the
computed P^{BN}_A would have been remembered.
Another method of obtaining a marginal distribution P_I is
marginalizing it from P_J, where I ⊆ J, using Equation (1),
provided that P_J is already known. If |J \ I| is small, this
procedure is almost always more efficient than bucket elimination,
so whenever some P_I is computed by bucket elimination,
distributions of all subsets of I are computed using
Equation (1).
Definition 4.1. Let C be a collection of attribute sets.
The positive border of C [19], denoted by Bd^+(C), is the
collection of those sets from C which have no proper superset
in C:
    Bd^+(C) = {I ∈ C : there is no J ∈ C such that I ⊊ J}.
It is clear from the discussion above that we only need to
use bucket elimination to compute distributions of itemsets
in the positive border. We are going to go further than this;
we will use bucket elimination to obtain supersets of sets
in the positive border, and then use Equation (1) to obtain
marginals even for sets in the positive border. Experiments
show that this approach can give substantial savings, especially
when many overlapping attribute sets from the positive
border can be covered by a single set only slightly larger than the covered ones.
The algorithm for selecting the marginal distribution to
compute is motivated by the algorithm from [9] for computing
views that should be materialized for OLAP query
processing. Bucket elimination corresponds to creating a materialized view, and marginalizing the distribution thus obtained corresponds to answering OLAP queries.
We first need to define costs of marginalization and bucket
elimination. In our case the cost is defined as the total
number of additions and multiplications used to compute
the marginal distribution.
The cost of marginalizing P_J from P_I, J ⊆ I, using Equation (1) is

cost(P_{I→J}) = |Dom(J)| · (|Dom(I \ J)| − 1).

It follows from the fact that each value of P_{I→J} requires adding |Dom(I \ J)| values from P_I.
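Assuming that attribute domains are available as a dictionary mapping each attribute to its set of values (an assumption of this sketch), the cost formula translates directly:

def dom_size(attrs, domains):
    # |Dom(attrs)|: product of the individual domain sizes.
    size = 1
    for a in attrs:
        size *= len(domains[a])
    return size

def cost_marginalize(I, J, domains):
    # Additions needed to obtain P_J from P_I by Equation (1), J ⊆ I.
    return dom_size(J, domains) * (dom_size(set(I) - set(J), domains) - 1)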
The cost of bucket elimination can be computed cheaply
without actually executing the procedure. Each bucket is
either an explicitly given probability distribution, or computes
a sum over a single variable of a product of functions
(computed in buckets contained in it) explicitly represented
as multidimensional tables; see [5] for details. If the bucket
is an explicitly given probability distribution, the cost is of
course 0.
Consider now a bucket b containing child buckets b_1, ..., b_n yielding functions f_1, ..., f_n, respectively. Let Var(f_i) be the set of attributes on which f_i depends.
Let f = f_1 f_2 ··· f_n denote the product of all factors in b. We have Var(f) = ∪_{i=1}^{n} Var(f_i), and since each value of f requires n − 1 multiplications, computing f requires |Dom(Var(f))| · (n − 1) multiplications. Let A_b be the attribute over which the summation in b takes place. Computing the sum will require |Dom(Var(f) \ {A_b})| · (|Dom(A_b)| − 1) additions.
The total cost of computing the function in bucket b (including the costs of computing its children) is thus

cost(b) = Σ_{i=1}^{n} cost(b_i) + |Dom(Var(f))| · (n − 1) + |Dom(Var(f) \ {A_b})| · (|Dom(A_b)| − 1).
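Reusing dom_size from the earlier sketch, the recursion can be written as follows; the tuple encoding of buckets ("dist" leaves versus "sum" nodes) is an assumption made only for this illustration:

def cost_bucket(bucket, domains):
    # Returns (cost, attribute set of the resulting function).
    # A bucket is either ("dist", attrs) -- an explicitly given distribution,
    # cost 0 -- or ("sum", A_b, children) -- a sum over A_b of the product
    # of the functions computed in its child buckets.
    if bucket[0] == "dist":
        return 0, set(bucket[1])
    _, A_b, children = bucket
    results = [cost_bucket(c, domains) for c in children]
    cost = sum(c for c, _ in results)
    var_f = set().union(*(v for _, v in results))
    n = len(children)
    cost += dom_size(var_f, domains) * (n - 1)                          # multiplications
    cost += dom_size(var_f - {A_b}, domains) * (len(domains[A_b]) - 1)  # additions
    return cost, var_f - {A_b}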
The cost of computing P^{BN}_I through bucket elimination, denoted cost_BE(P^{BN}_I), is the cost of the root bucket of the summation used to compute P^{BN}_I.
Let C be a collection of attribute sets. The gain of using bucket elimination to find P^{BN}_I for some I while computing the interestingness of attribute sets from C can be expressed as:

gain(I) = −cost_BE(P^{BN}_I) + Σ_{J ∈ Bd^+(C), J ⊆ I} [ cost_BE(P^{BN}_J) − cost(P^{BN}_{I→J}) ].
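In code, with attribute sets represented as frozensets and a callback cost_BE giving the bucket-elimination cost of a set (both assumptions of this sketch), the gain of a candidate set I is:

def gain(I, border, domains, cost_BE):
    # Gain of computing P_I once by bucket elimination and marginalizing it
    # down to every positive-border set it covers, instead of running bucket
    # elimination separately for each of them.
    g = -cost_BE(I)
    for J in border:
        if J <= I:  # J is covered by I
            g += cost_BE(J) - cost_marginalize(I, J, domains)
    return g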
An attribute set to which bucket elimination will be applied
is found using a greedy procedure by adding in each iteration the attribute giving the highest increase of gain. The
complete algorithm is presented in Figure 1.
4.2 Computing the Interestingness of a Collection of Itemsets
First we present an algorithm for computing the interestingness of all itemsets in a given collection. It is a simple application of the algorithm in Figure 1.
Input: collection of attribute sets C, Bayesian network BN
Output: distributions P^{BN}_I for all I ∈ C

1.  S ← Bd^+(C)
2.  while S ≠ ∅:
3.      I ← an attribute set from S
4.      for A in H \ I:
5.          compute gain(I ∪ {A})
6.      pick A* for which the gain in Step 5 was maximal
7.      if gain(I ∪ {A*}) > gain(I):
8.          I ← I ∪ {A*}
9.          goto 4
10.     compute P^{BN}_I from BN using bucket elimination
11.     compute P^{BN}_{I→J} for all J ∈ S, J ⊆ I, using Equation (1)
12.     remove from S all attribute sets included in I
13. compute P^{BN}_J for all J ∈ C \ Bd^+(C) using Equation (1)

Figure 1: Algorithm for computing a large number of marginal distributions from a Bayesian network.
It is useful if we already have a collection of itemsets (e.g., all frequent itemsets found in a database table) and want to select those which are the most interesting. The algorithm is given below.

Input: collection of itemsets K, supports of all itemsets in K, Bayesian network BN
Output: interestingness of all itemsets in K

1. C ← {I : (I, i) ∈ K for some i ∈ Dom(I)}
2. compute P^{BN}_I for all I ∈ C using the algorithm in Figure 1
3. compute the interestingness of all itemsets in K using the distributions computed in step 2
4.3 Finding All Attribute Sets With Given Minimum Interestingness
In this section we present an algorithm for finding all attribute sets with interestingness greater than or equal to a specified threshold ε, given a dataset and a Bayesian network BN.
Let us first make an observation:
Observation 4.2. If an itemset (I, i) has interestingness greater than or equal to ε with respect to a Bayesian network BN, then its support must be greater than or equal to ε in either the data or in BN. Moreover, if an attribute set is ε-interesting, by Definition 3.1, at least one of its itemsets must be ε-interesting.
It follows that if an attribute set is ε-interesting, then one of its itemsets must be frequent, with minimum support ε, either in the data or in the Bayesian network.
Input: Bayesian network BN, minimum support minsupp
Output: itemsets whose support in BN is ≥ minsupp

1. k ← 1
2. Cand ← {(I, i) : |I| = 1}
3. compute supp_BN(I, i) for all (I, i) ∈ Cand using the algorithm in Figure 1
4. Freq_k ← {(I, i) ∈ Cand : supp_BN(I, i) ≥ minsupp}
5. Cand ← generate new candidates from Freq_k
6. remove itemsets with infrequent subsets from Cand
7. k ← k + 1; goto 3

Figure 2: The AprioriBN algorithm.
The algorithm works in two stages. First, all frequent itemsets with minimum support ε are found in the dataset and their interestingness is computed. The first stage might have missed itemsets which are ε-interesting but do not have sufficient support in the data.
In the second stage all itemsets frequent in the Bayesian network are found, and their supports in the data are computed using an extra database scan.
To find all itemsets frequent in the Bayesian network we use the Apriori algorithm [1] with a modified support counting part, which we call AprioriBN. A sketch of the algorithm is shown in Figure 2; except for step 3 it is identical to the original algorithm.
We now have all the elements needed to present the algorithm for finding all ε-interesting attribute sets, which is given in Figure 3.
Step 4 of the algorithm can reuse marginal distributions
found in step 3 to speed up the computations.
Notice that it is always possible to compute interestingness
of every itemset in step 6 since both supports of each
itemset will be computed either in steps 1 and 3, or in steps 4
and 5.
The authors implemented hierarchical and topological interestingness
as a postprocessing step. They could however
be used to prune the attribute sets which are not interesting
without evaluating their distributions, thus providing a
potentially large speedup in the computations. We plan to
investigate that in the future.
EXPERIMENTAL RESULTS
In this section we present an experimental evaluation of the
method. One problem we were faced with was the lack
of publicly available datasets with nontrivial background
knowledge that could be represented as a Bayesian network
.
The UCI Machine Learning repository contains a
few datasets with background knowledge (Japanese credit,
molecular biology), but they are aimed primarily at Inductive
Logic Programming: the relationships are logical rather
than probabilistic, and only relationships involving the class attribute are included. These examples are of little value for
our approach.
We have thus used networks constructed using our own
Input: Bayesian network BN, dataset, interestingness threshold ε
Output: all attribute sets with interestingness at least ε, and some of the attribute sets with lower interestingness

1. K ← {(I, i) : supp(I, i) ≥ ε} (using the Apriori algorithm)
2. C ← {I : (I, i) ∈ K for some i ∈ Dom(I)}
3. compute P^{BN}_I for all I ∈ C using the algorithm in Figure 1
4. K' ← {(I, i) : supp_BN(I, i) ≥ ε} (using the AprioriBN algorithm)
5. compute the support in the data for all itemsets in K' \ K by scanning the dataset
6. compute the interestingness of all itemsets in K ∪ K'
7. C' ← {I : (I, i) ∈ K' for some i ∈ Dom(I)}
8. compute the interestingness of all attribute sets I ∈ C ∪ C':
   I(I) = max{I(I, i) : (I, i) ∈ K ∪ K', i ∈ Dom(I)}

Figure 3: Algorithm for finding all ε-interesting attribute sets.
common-sense knowledge as well as networks learned from
data.
5.1 An Illustrative Example
We first present a simple example demonstrating the usefulness
of the method. We use the KSL dataset of Danish
70 year olds, distributed with the DEAL Bayesian network
package [4]. There are nine attributes, described in Table 1,
related to the person's general health and lifestyle. All continuous
attributes have been discretized into 3 levels using
the equal weight method.
FEV    Forced ejection volume of person's lungs
Kol    Cholesterol
Hyp    Hypertension (no/yes)
BMI    Body Mass Index
Smok   Smoking (no/yes)
Alc    Alcohol consumption (seldom/frequently)
Work   Working (yes/no)
Sex    male/female
Year   Survey year (1967/1984)

Table 1: Attributes of the KSL dataset.
We began by designing a network structure based on the authors' (non-expert) knowledge. The network structure is given in Figure 4a. Since we were not sure about the relation of cholesterol to other attributes, we left it unconnected. Conditional probabilities were estimated directly from the KSL dataset. Note that this is a valid approach since even
when the conditional probabilities match the data perfectly
interesting patterns can still be found because the network
structure usually is not capable of representing the full joint
distribution of the data. The interesting patterns can then
be used to update the network's structure. Of course if both
the structure and the conditional probabilities are given by
Figure 4: Network structures (a) and (b) for the KSL dataset constructed by the authors.
the expert, then the discovered patterns can be used to update
both the network's structure and conditional probabilities
.
We applied the algorithm for finding all interesting attribute
sets to the KSL dataset and the network, using the
threshold of 0.01. The attribute sets returned were sorted
by interestingness, and top 10 results were kept.
The two most interesting attribute sets were {FEV, Sex} with interestingness 0.0812 and {Alc, Year} with interestingness 0.0810.
Indeed, it is known (see [8]) that women's lungs are on average
20% - 25% smaller than men's lungs, so sex influences
the forced ejection volume (FEV) much more than smoking
does (which we thought was the primary influence). This
fact, although not new in general, was overlooked by the
authors, and we suspect that, due to the large amount of literature on the harmful effects of smoking, it might have been
overlooked by many domain experts. This proves the high
value of our approach for verification of Bayesian network
models.
The data itself implied a growth in alcohol consumption
between 1967 and 1984, which we considered to be a plausible
finding.
We then decided to modify the network structure based on our findings by adding the edges Sex → FEV and Year → Alc.
One could of course consider other methods of modifying
network structure, like deleting edges or reversing their direction
.
A brief overview of more advanced methods of
Bayesian network modification can be found in [15, Chap. 3,
Sect. 3.5]. Instead of adapting the network structure one
could keep the structure unchanged, and tune conditional
probabilities in the network instead, see [15, Chap. 3, Sect. 4]
for details.
As a method of scoring network structures we used the
natural logarithm of the probability of the structure conditioned
on the data, see [10, 26] for details on computing the
score.
The modified network structure had the score of -7162.71
which is better than that of the original network: -7356.68.
With the modified structure, the most interesting attribute set was {Kol, Sex, Year} with interestingness 0.0665. We
found in the data that cholesterol levels decreased between
the two years in which the study was made, and that cholesterol
level depends on sex. We found similar trends in the
U.S. population based on data from American Heart Association
[2]. Adding the edges Year → Kol and Sex → Kol
improved the network score to -7095.25.
{FEV, Alc, Year} became the most interesting attribute
set with the interestingness of 0.0286. Its interestingness is
however much lower than that of previous most interesting
attribute sets. Also, we were not able to get any improvement
in network score after adding edges related to that
attribute set.
Since we were unable to obtain a better network in this
case, we used topological pruning, expecting that some other
attribute sets might be the true cause of the observed discrepancies
. Only four attribute sets, given below, were topologically
0.01-interesting.
{Kol, BMI}           0.0144
{Kol, Alc}           0.0126
{Smok, Sex, Year}    0.0121
{Alc, Work}          0.0110
We found all those patterns intuitively valid, but were unable
to obtain an improvement in the network's score by
adding related edges. Moreover, the interestingness values
were quite small. We thus finished the interactive network
structure improvement process with the final result given in
Figure 4b.
The algorithm was implemented in Python and used on
a 1.7GHz Pentium 4 machine. The computation of interestingness
for this example took only a few seconds so an
interactive use of the program was possible. Further performance
evaluation is given below.
5.2 Performance Evaluation
We now present the performance evaluation of the algorithm
for finding all attribute sets with given minimum interestingness
. We used the UCI datasets and Bayesian networks
learned from data using B-Course [26]. The results
are given in Table 2.
The max. size column gives the maximum size of frequent
attribute sets considered. The #marginals column gives the
total number of marginal distributions computed from the
Bayesian network. The attribute sets whose marginal distributions
have been cached between the two stages of the
algorithm are not counted twice.
The time does not include the initial run of the Apriori algorithm
used to find frequent itemsets in the data (the time
of the AprioriBN algorithm is included though). The times
for larger networks can be substantial; however, the proposed method still has a huge advantage over manually evaluating
thousands of frequent patterns, and there are several possibilities
to speed up the algorithm not yet implemented by
the authors, discussed in the following section.
Figure 5: Time of computation depending on the number of marginal distributions computed for the lymphography database.
Figure 6: Time of computation depending on the number of attributes for datasets from Table 2 (curves for max. size = 3 and max. size = 4).
The maximum interestingness column gives the interestingness
of the most interesting attribute set found for a given
dataset. It can be seen that there are still highly interesting
patterns to be found after using classical Bayesian network
learning methods. This proves that frequent pattern and association
rule mining has the capability to discover patterns
which traditional methods might miss.
To give a better understanding of how the algorithm scales
as the problem size increases we present two additional figures
. Figure 5 shows how the computation time increases
with the number of marginal distributions that must be computed
from the Bayesian network. It was obtained by varying
the maximum size of attribute sets between 1 and 5.
The value of ε = 0.067 was used (equivalent to one row in
the database). It can be seen that the computation time
grows slightly slower than the number of marginal distributions
. The reason for that is that the more marginal
distributions we need to compute, the more opportunities
we have to avoid using bucket elimination by using direct
marginalization from a superset instead.
Determining how the computation time depends on the
size of the network is difficult, because the time depends also on the network structure and the number of marginal distributions computed (which in turn depends on the maximum size of attribute sets considered).

dataset         #attrs   ε       max. size   #marginals   time [s]   max. inter.
KSL             9        0.01    5           382          1.12       0.032
soybean         36       0.075   3           7633         1292       0.064
soybean         36       0.075   4           61976        7779       0.072
breast-cancer   10       0.01    5           638          3.49       0.082
annealing       40       0.01    3           9920         1006       0.048
annealing       40       0.01    4           92171        6762       0.061
mushroom        23       0.01    3           2048         132.78     0.00036
mushroom        23       0.01    4           10903        580.65     0.00036
lymphography    19       0.067   3           1160         29.12      0.123
lymphography    19       0.067   4           5036         106.13     0.126
splice          61       0.01    3           37882        8456       0.036

Table 2: Performance evaluation of the algorithm for finding all ε-interesting attribute sets.
We nevertheless show in Figure 6 the numbers of attributes
and computation times plotted against each other for some
of the datasets from Table 2. Data corresponding to maximum
attribute set sizes equal to 3 and 4 are plotted separately.
It can be seen that the algorithm remains practically usable
for fairly large networks of up to 60 variables, even
though the computation time grows exponentially. For larger
networks approximate inference methods might be necessary
, but this is beyond the scope of this paper.
CONCLUSIONS AND DIRECTIONS OF FUTURE RESEARCH
A method of computing interestingness of itemsets and attribute
sets with respect to background knowledge encoded
as a Bayesian network was presented. We built efficient algorithms
for computing interestingness of frequent itemsets
and finding all attribute sets with given minimum interestingness
. Experimental evaluation proved the effectiveness
and practical usefulness of the algorithms for finding interesting
, unexpected patterns.
An obvious direction for future research is increasing the efficiency of the algorithms. A partial solution would be to rewrite the code in C, or to use an off-the-shelf, highly optimized Bayesian network library such as Intel's PNL. Another
approach would be to use approximate inference methods
like Gibbs sampling.
Adding or removing edges in a Bayesian network does not
always influence all of its marginal distributions. Interactivity of network building could be improved by making use of this property.
Usefulness of methods developed for mining emerging patterns
[6], especially using borders to represent collections of
itemsets, could also be investigated.
Another interesting direction (suggested by a reviewer)
could be to iteratively apply interesting patterns to modify
the network structure until no further improvement in the
network score can be achieved. A similar procedure has been
used in [24] for background knowledge represented by rules.
It should be noted however that it might be better to just
inform the user about interesting patterns and let him/her
use their experience to update the network. A manually updated network might better reflect causal relationships between attributes.
Another research area could be evaluating other probabilistic
models such as log-linear models and chain graphs
instead of Bayesian networks.
REFERENCES
[1] R. Agrawal, T. Imielinski, and A. Swami. Mining
association rules between sets of items in large
databases. In Proc. ACM SIGMOD Conference on
Management of Data, pages 207216, Washington,
D.C., 1993.
[2] American Heart Association. Risk factors: High blood
cholesterol and other lipids.
http://www.americanheart.org/downloadable/
heart/1045754065601FS13CHO3.pdf
, 2003.
[3] R. J. Bayardo and R. Agrawal. Mining the most
interesting rules. In Proc. of the 5th ACM SIGKDD
Int'l Conf. on Knowledge Discovery and Data Mining,
pages 145154, August 1999.
[4] Susanne G. Bøttcher and Claus Dethlefsen. Deal: A package for learning Bayesian networks. www.math.auc.dk/novo/Publications/bottcher:dethlefsen:03.ps, 2003.
[5] Rina Dechter. Bucket elimination: A unifying
framework for reasoning. Artificial Intelligence,
113(1-2):4185, 1999.
[6] Guozhu Dong and Jinyan Li. Efficient mining of
emerging patterns: Discovering trends and differences.
In Proc. of the 5th Intl. Conf. on Knowledge Discovery
and Data Mining (KDD'99), pages 4352, San Diego,
CA, 1999.
[7] William DuMouchel and Daryl Pregibon. Empirical
bayes screening for multi-item associations. In
Proceedings of the Seventh International Conference
on Knowledge Discovery and Data Mining, pages
6776, 2001.
[8] H. Gray. Gray's Anatomy. Grammercy Books, New
York, 1977.
[9] Venky Harinarayan, Anand Rajaraman, and Jeffrey D.
Ullman. Implementing data cubes efficiently. In Proc.
ACM SIGMOD, pages 205216, 1996.
[10] David Heckerman. A tutorial on learning with
Bayesian networks. Technical Report MSR-TR-95-06,
Microsoft Research, Redmond, WA, 1995.
[11] R. Hilderman and H. Hamilton. Knowledge discovery
and interestingness measures: A survey. Technical
Report CS 99-04, Department of Computer Science,
University of Regina, 1999.
[12] C. Huang and A. Darwiche. Inference in belief
networks: A procedural guide. Intl. Journal of
Approximate Reasoning, 15(3):225263, 1996.
[13] S. Jaroszewicz and D. A. Simovici. A general measure
of rule interestingness. In 5th European Conference on
Principles of Data Mining and Knowledge Discovery
(PKDD 2001), pages 253265, 2001.
[14] S. Jaroszewicz and D. A. Simovici. Pruning redundant
association rules using maximum entropy principle. In
Advances in Knowledge Discovery and Data Mining,
6th Pacific-Asia Conference, PAKDD'02, pages
135147, Taipei, Taiwan, May 2002.
[15] Finn V. Jensen. Bayesian Networks and Decision
Graphs. Springer Verlag, New York, 2001.
[16] Bing Liu, Wynne Hsu, and Shu Chen. Using general
impressions to analyze discovered classification rules.
In Proceedings of the Third International Conference
on Knowledge Discovery and Data Mining (KDD-97),
page 31. AAAI Press, 1997.
[17] Bing Liu, Wynne Jsu, Yiming Ma, and Shu Chen.
Mining interesting knowledge using DM-II. In
Proceedings of the Fifth ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining,
pages 430434, N.Y., August 1518 1999.
[18] Heikki Mannila. Local and global methods in data
mining: Basic techniques and open problems. In
ICALP 2002, 29th International Colloquium on
Automata, Languages, and Programming, Malaga,
Spain, July 2002. Springer-Verlag.
[19] Heikki Mannila and Hannu Toivonen. Levelwise search
and borders of theories in knowledge discovery. Data
Mining and Knowledge Discovery, 1(3):241258, 1997.
[20] T.M. Mitchell. Machine Learning. McGraw-Hill, 1997.
[21] Kevin Murphy. A brief introduction to graphical
models and bayesian networks.
http://www.ai.mit.edu/~murphyk/Bayes/
bnintro.html
, 1998.
[22] B. Padmanabhan and A. Tuzhilin. Belief-driven method for discovering unexpected patterns. In Proceedings of the 4th International Conference on Knowledge Discovery and Data Mining (KDD'98), pages 94-100, August 1998.
[23] B. Padmanabhan and A. Tuzhilin. Small is beautiful: discovering the minimal set of unexpected patterns. In Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'00), pages 54-63, N.Y., August 2000.
[24] B. Padmanabhan and A. Tuzhilin. Methods for
knowledge refinement based on unexpected patterns.
Decision Support Systems, 33(3):221347, July 2002.
[25] Judea Pearl. Probabilistic Reasoning in Intelligent
Systems. Morgan Kaufmann, Los Altos, CA, 1998.
[26] P. Myllymäki, T. Silander, H. Tirri, and P. Uronen. B-Course: A web-based tool for Bayesian and causal data analysis. International Journal on Artificial Intelligence Tools, 11(3):369-387, 2002.
[27] D. Poole and N. L. Zhang. Exploiting contextual independence in probabilistic inference. Journal of Artificial Intelligence Research, 18:263-313, 2003.
[28] D. Shah, L. V. S. Lakshmanan, K. Ramamritham, and
S. Sudarshan. Interestingness and pruning of mined
patterns. In 1999 ACM SIGMOD Workshop on
Research Issues in Data Mining and Knowledge
Discovery, 1999.
[29] Abraham Silberschatz and Alexander Tuzhilin. On
subjective measures of interestingness in knowledge
discovery. In Knowledge Discovery and Data Mining,
pages 275281, 1995.
[30] E. Suzuki. Autonomous discovery of reliable exception
rules. In Proceedings of the Third International
Conference on Knowledge Discovery and Data Mining
(KDD-97), page 259. AAAI Press, 1997.
[31] E. Suzuki and Y. Kodratoff. Discovery of surprising
exception rules based on intensity of implication. In
Proc of PKDD-98, Nantes, France, pages 1018, 1998.
[32] P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the
right interestingness measure for association patterns.
In Proc of the Eighth ACM SIGKDD Int'l Conf. on
Knowledge Discovery and Data Mining (KDD-2002),
pages 3241, 2002.
[33] M. J. Zaki. Generating non-redundant association rules. In Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-00), pages 34-43, N.Y., August 20-23 2000.
| Bayesian network;frequent itemsets;association rules;interestingness;emerging pattern;association rule;background knowledge;frequent itemset |
118 | Interference Evaluation of Bluetooth and IEEE 802.11b Systems | The emergence of several radio technologies, such as Bluetooth and IEEE 802.11, operating in the 2.4 GHz unlicensed ISM frequency band, may lead to signal interference and result in significant performance degradation when devices are colocated in the same environment. The main goal of this paper is to evaluate the effect of mutual interference on the performance of Bluetooth and IEEE 802.11b systems. We develop a simulation framework for modeling interference based on detailed MAC and PHY models. First, we use a simple simulation scenario to highlight the effects of parameters, such as transmission power, offered load, and traffic type. We then turn to more complex scenarios involving multiple Bluetooth piconets and WLAN devices. | Introduction
The proliferation of mobile computing devices including laptops
, personal digital assistants (PDAs), and wearable computers
has created a demand for wireless personal area networks
(WPANs). WPANs allow closely located devices to
share information and resources.
A key challenge in the
design of WPANs is adapting to a hostile radio environment
that includes noise, time-varying channels, and abundant
electromagnetic interference. Today, most radio technologies
considered by WPANs (Bluetooth Special Interest
Group [2], and IEEE 802.15) employ the 2.4 GHz ISM frequency
band, which is also used by wireless local area network
(WLAN) devices implementing the IEEE 802.11b standard
specifications [9]. It is anticipated that some interference will
result from all these technologies operating in the same environment
. WLAN devices operating in proximity to WPAN
devices may significantly impact the performance of WPAN
and vice versa.
The main goal of this paper is to present our findings on the
performance of these systems when operating in close proximity
to each other. Our results are based on detailed models
for the MAC, PHY, and wireless channel. Recently, a number
of research activities have led to the development of tools for
wireless network simulation [1,16]. While some of these tools
include a PHY layer implementation, it is often abstracted to
a discrete channel model that does not implement interference
per se. Therefore, in order to model interference and capture
the time and frequency collisions, we chose to implement an
integrated MAC-PHY module.
Efforts to study interference in the 2.4 GHz band are relatively
recent. For example, interference caused by microwave
ovens operating in the vicinity of a WLAN network has been
investigated [17] and requirements on the signal-to-noise ratio
(SNR) are presented by Kamerman and Erkocevic [11].
In addition, there have been several attempts at quantifying
the impact of interference on both the WLAN and Bluetooth
performance. Published results can be classified into at least
three categories depending on whether they rely on analysis,
simulation, or experimental measurements.
Analytical results based on probability of packet collision
were obtained by Shellhammer [13], Ennis [4], and
Zyren [18] for the WLAN packet error and by Golmie [6]
for the Bluetooth packet error. In all these cases, the probability
of packet error is computed based on the probability of
packet collision in time and frequency. Although these analytical
results can often give a first order approximation on the
impact of interference and the resulting performance degradation
, they often make assumptions concerning the traffic
distributions and the operation of the media access protocol,
which can make them less realistic. More importantly, in order
for the analysis to be tractable, mutual interference that
can change the traffic distribution for each system is often ig-nored
.
On the other hand, experimental results, such as the ones
obtained by Kamerman [10], Howitt et al. [8], and Fumolari
[5] for a two-node WLAN system and a two-node Bluetooth
piconet, can be considered more accurate at the cost
of being too specific to the implementation tested. Thus, a
third alternative consists of using modeling and simulation to
evaluate the impact of interference. This third approach can
provide a more flexible framework. Zurbes et al. [19] present
simulation results for a number of Bluetooth devices located
in a single large room. They show that for 100 concurrent
web sessions, performance is degraded by only 5%. Golmie
et al. [7] use a detailed MAC and PHY simulation framework
to evaluate the impact of interference for a pair of WLAN
devices and a pair of Bluetooth devices. Similar results have
been obtained by Lansford et al. [12] for the case of colocated
WLAN and Bluetooth devices on the same laptop. Their simulation
models are based on a link budget analysis and a theoretical
calculation of the BER (Q function calculation). The
work in this paper is an extension of [7].
Figure 1. Master TX/RX hopping sequence.
This paper is organized as follows. In section 2, we give
some general insights on the Bluetooth and IEEE 802.11 protocol
operation. In section 3, we describe in great detail our
modeling approach for the MAC, PHY and wireless channel.
In section 4, we evaluate the impact of interference on both
Bluetooth and WLAN performance and present simulation results
. Concluding remarks are offered in section 5.
Protocol overview
In this section, we give a brief overview of the Bluetooth technology
[2] and discuss the main functionality of its protocol
specifications. Bluetooth is a short range (0-10 m) wireless
link technology aimed at replacing non-interoperable proprietary
cables that connect phones, laptops, PDAs and other
portable devices together. Bluetooth operates in the ISM frequency
band starting at 2.402 GHz and ending at 2.483 GHz
in the USA and Europe. 79 RF channels of 1 MHz width are
defined. The air interface is based on an antenna power of
1 mW with an antenna gain of 0 dB. The signal is modulated
using binary Gaussian Frequency Shift Keying (GFSK). The
raw data rate is defined at 1 Mbit/s. A Time Division Multiplexing
(TDM) technique divides the channel into 625 μs
slots. Transmission occurs in packets that occupy an odd
number of slots (up to 5). Each packet is transmitted on a
different hop frequency with a maximum frequency hopping
rate of 1600 hops/s.
Two or more units communicating on the same channel
form a piconet, where one unit operates as a master and the
others (a maximum of seven active at the same time) act as
slaves. A channel is defined as a unique pseudo-random frequency
hopping sequence derived from the master device's
48-bit address and its Bluetooth clock value. Slaves in the
piconet synchronize their timing and frequency hopping to
the master upon connection establishment. In the connection
mode, the master controls the access to the channel using a
polling scheme where master and slave transmissions alternate
. A slave packet always follows a master packet transmission
as illustrated in figure 1, which depicts the master's
view of the slotted TX/RX channel.
There are two types of link connections that can be
established between a master and a slave: the Synchronous
Connection-Oriented (SCO), and the Asynchronous
Connection-Less (ACL) link. The SCO link is a symmetric
point-to-point connection between a master and a slave
where the master sends an SCO packet in one TX slot at regular
time intervals, defined by T
SCO
time slots. The slave responds
with an SCO packet in the next TX opportunity. T
SCO
is set to either 2, 4 or 6 time slots for HV1, HV2, or HV3
packet formats, respectively. All three formats of SCO packets
are defined to carry 64 Kbit/s of voice traffic and are never
retransmitted in case of packet loss or error.
The ACL link is an asymmetric point-to-point connection
between a master and active slaves in the piconet. An Automatic
Repeat Request (ARQ) procedure is applied to ACL
packets where packets are retransmitted in case of loss until
a positive acknowledgement (ACK) is received at the source.
The ACK is piggy-backed in the header of the returned packet
where an ARQN bit is set to either 1 or 0 depending on
whether or not the previous packet was successfully received.
In addition, a sequence number (SEQN) bit is used in the
packet header in order to provide a sequential ordering of data
packets in a stream and filter out retransmissions at the destination
. Forward Error Correction (FEC) is used on some
SCO and ACL packets in order to correct errors and reduce
the number of ACL retransmissions.
Both ACL and SCO packets have the same packet format.
It consists of a 72-bit access code used for message identification
and synchronization, a 54-bit header and a variable
length payload that contains either a voice or a data packet
depending on the type of link connection that is established
between a master and a slave.
A repetition code of rate 1/3 is applied to the header, and a block code with minimum distance d_min = 14 is applied to the access code, so that up to 13 errors are detected and up to ⌊(d_min − 1)/2⌋ = 6 can be corrected. Note that uncorrected errors in the header and the access code lead to a packet drop.
Voice packets have a total packet length of 366 bits including
the access code and header. A repetition code of 1/3 is used
for HV1 packet payload. On the other hand, DM and HV2
packet payloads use a 2/3 block code where every 10 bits of
information are encoded with 15 bits. DH and HV3 packets
do not have any encoding on their payload. HV packets do
not have a CRC in the payload. In case of an error occurrence in the payload, the packet is never dropped. Uncorrected errors for DM and DH packets lead to dropped packets and the application of the ARQ and SEQN schemes. Table 1 summarizes the error occurrences in the packet and the actions taken by the protocol.

Table 1
Summary of error occurrences in the packet and actions taken in case errors are not corrected.

Error location            Error correction    Action taken
Access code               d_min = 14          Packet dropped
Packet header             1/3 repetition      Packet dropped
HV1 payload               1/3 repetition      Packet accepted
HV2 payload               2/3 block code      Packet accepted
HV3 payload               No FEC              Packet accepted
DM1, DM3, DM5 payload     2/3 block code      Packet dropped
DH1, DH3, DH5 payload     No FEC              Packet accepted

Figure 2. WLAN frame transmission scheme.
2.2. IEEE 802.11b
The IEEE 802.11 standard [9] defines both the physical
(PHY) and medium access control (MAC) layer protocols
for WLANs. In this sequel, we shall be using WLAN and
802.11b interchangeably.
The IEEE 802.11 standard calls for three different PHY
specifications: frequency hopping (FH) spread spectrum, direct
sequence (DS) spread spectrum, and infrared (IR). The
transmit power for DS and FH devices is defined at a maximum
of 1 W and the receiver sensitivity is set to
-80 dBmW.
Antenna gain is limited to 6 dB maximum. In this work,
we focus on the 802.11b specification (DS spread spectrum)
since it is in the same frequency band as Bluetooth and the
most commonly deployed.
The basic data rate for the DS system is 1 Mbit/s encoded
with differential binary phase shift keying (DBPSK). Similarly
, a 2 Mbit/s rate is provided using differential quadrature
phase shift keying (DQPSK) at the same chip rate of 11 × 10^6 chips/s. Higher rates of 5.5 and 11 Mbit/s are also
available using techniques combining quadrature phase shift
keying and complementary code keying (CCK); all of these
systems use 22 MHz channels.
The IEEE 802.11 MAC layer specifications, common to all
PHYs and data rates, coordinate the communication between
stations and control the behavior of users who want to access
the network. The Distributed Coordination Function (DCF),
which describes the default MAC protocol operation, is based
on a scheme known as carrier-sense, multiple access, collision
avoidance (CSMA/CA). Both the MAC and PHY layers cooperate
in order to implement collision avoidance procedures.
The PHY layer samples the received energy over the medium
transmitting data and uses a clear channel assessment (CCA)
algorithm to determine if the channel is clear. This is accomplished
by measuring the RF energy at the antenna and determining
the strength of the received signal commonly known
as RSSI, or received signal strength indicator. In addition,
carrier sense can be used to determine if the channel is available
. This technique is more selective since it verifies that the
signal is the same carrier type as 802.11 transmitters. In all
of our simulations, we use carrier sense and not RSSI to determine
if the channel is busy. Thus, a Bluetooth signal will
corrupt WLAN packets, but it will not cause the WLAN to
defer transmission.
A virtual carrier sense mechanism is also provided at the
MAC layer. It uses the request-to-send (RTS) and clear-to-send
(CTS) message exchange to make predictions of future
traffic on the medium and updates the network allocation vector
(NAV) available in stations. Communication is established
when one of the wireless nodes sends a short RTS frame. The
receiving station issues a CTS frame that echoes the sender's
address. If the CTS frame is not received, it is assumed that a
collision occurred and the RTS process starts over. Regardless
of whether the virtual carrier sense routine is used or
not, the MAC is required to implement a basic access procedure
(depicted in figure 2) as follows. If a station has data to
send, it waits for the channel to be idle through the use of the
CSMA/CA algorithm. If the medium is sensed idle for a period
greater than a DCF interframe space (DIFS), the station
goes into a backoff procedure before it sends its frame. Upon
the successful reception of a frame, the destination station returns
an ACK frame after a Short interframe space (SIFS).
The backoff window is based on a random value uniformly
distributed in the interval [CW_min, CW_max], where CW_min and CW_max represent the contention window parameters. If
the medium is determined busy at any time during the backoff
slot, the backoff procedure is suspended. It is resumed after
the medium has been idle for the duration of the DIFS period.
If an ACK is not received within an ACK timeout interval, the
station assumes that either the data frame or the ACK was lost
and needs to retransmit its data frame by repeating the basic
access procedure.
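As a rough illustration of this basic access procedure, the following Python sketch mimics the backoff behaviour described above; the parameter values and the helper callbacks channel_idle and transmit are assumptions made only for this sketch (the standard additionally doubles the contention window on successive retransmissions).

import random

CW_MIN, CW_MAX = 31, 1023   # illustrative contention window bounds

def dcf_send(channel_idle, transmit, max_retries=7):
    # channel_idle(): True if the medium has been idle for a DIFS period.
    # transmit(): sends the frame and returns True if the ACK arrives in time.
    for _ in range(max_retries):
        backoff = random.randint(CW_MIN, CW_MAX)   # drawn in [CW_min, CW_max]
        while backoff > 0:
            if channel_idle():
                backoff -= 1    # countdown proceeds only while the medium is idle
            # otherwise the backoff is suspended until the medium is idle again
        if transmit():
            return True         # frame acknowledged
    return False                # give up after max_retries attempts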
Errors are detected by checking the Frame Check Sequence
(FCS) that is appended to the packet payload. In case
an error is found, the packet is dropped and is then later retransmitted.
Integrated simulation model
In this section, we describe the methodology and platform
used to conduct the performance evaluation. The simulation
environment consists of detailed models for the RF channel,
the PHY, and MAC layers developed in C and OPNET (for the
MAC layer). These detailed simulation models constitute an
evaluation framework that is critical to studying the various
intricate effects between the MAC and PHY layers. Although
interference is typically associated with the RF channel modeling
and measured at the PHY layer, it can significantly impact
the performance of higher layer applications including
the MAC layer. Similarly, changes in the behavior of the
MAC layer protocol and the associated data traffic distribution
can play an important factor in the interference scenario
and affect the overall system performance.
Figure 3 shows a packet being potentially corrupted by two
interference packets. Consider that the desired packet is from
the WLAN and the interference packets are Bluetooth (the
figure is equally valid if the roles are reversed, except that
the frequencies of the packets will be different). For interference
to occur, the packets must overlap in both time and
frequency. That is, the interference packets must be within
the 22 MHz bandwidth of the WLAN. In a system with many
Bluetooth piconets, there may be interference from more than
one packet at any given time. We define a period of stationarity
(POS) as the time during which the interference is constant
. For example, t_i ≤ t ≤ t_{i+1} is such a period, as is t_{i+1} ≤ t ≤ t_{i+2}.
Even during a POS where there is one or more interferers,
the number and location of bit errors in the desired packet depends
on a number of factors: (1) the signal-to-interference
ratio (SIR) and the signal-to-noise ratio at the receiver, (2) the
type of modulation used by the transmitter and the interferer,
and (3) the channel model. For this reason, it is essential to
use accurate models of the PHY and channel, as described
below. Just because two packets overlap in time and frequency
does not necessary lead to bit errors and the consequent
packet loss. While one can use (semi-)analytic models
instead, such as approximating Bluetooth interference on
WLAN as a narrowband tone jammer, the use of detailed signal
processing-based models better allows one to handle multiple
simultaneous interferers.
In order to simulate the overall system, an interface module
was created that allows the MAC models to use the physical
layer and channel models. This interface module captures
all changes in the channel state (mainly in the energy level).
Consider the Bluetooth transmitterchannelreceiver chain of
processes. For a given packet, the transmitter creates a set of
Figure 3. Packet collision and placement of errors. The bit error rate (BER)
is roughly constant during each of the three indicated periods.
signal samples that are corrupted by the channel and input to
the receiver; interference may be present for all or only specific
periods of stationarity, as shown in figure 3. A similar
chain of processing occurs for an 802.11b packet. The interface
module is designed to process a packet at a time.
At the end of each packet transmission, the MAC layer
generates a data structure that contains all the information required
to process the packet. This structure includes a list of
all the interfering packets with their respective duration, timing
offset, frequency, and transmitted power. The topology of
the scenario is also included. The data structure is then passed
to the physical layer along with a stream of bits representing
the packet being transmitted. The physical layer returns the
bit stream after placing the errors resulting from the interference
.
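A hypothetical transcription of such a per-packet structure is sketched below; the field names and types are assumptions made for illustration, not the authors' actual interface.

from dataclasses import dataclass, field
from typing import List

@dataclass
class InterfererInfo:
    duration: float        # seconds
    timing_offset: float   # seconds, relative to the desired packet
    frequency: float       # Hz, centre frequency of the interfering packet
    tx_power: float        # mW

@dataclass
class PacketDescriptor:
    bits: List[int]                       # bit stream handed to the PHY
    frequency: float                      # Hz
    tx_power: float                       # mW
    topology: dict                        # node positions for path-loss computation
    interferers: List[InterfererInfo] = field(default_factory=list)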
3.1. MAC model
We used OPNET to develop a simulation model for the Bluetooth
and IEEE 802.11 protocols. For Bluetooth, we implemented
the access protocol according to the specifications [2].
We assume that a connection is already established between
the master and the slave and that the synchronization process
is complete. The Bluetooth hopping pattern algorithm is implemented
. Details of the algorithm are provided in section
2.1. A pseudo-random number generator is used instead
of the implementation specific circuitry that uses the master's
clock and 48-bit address to derive a random number.
For the IEEE 802.11 protocol, we used the model available
in the OPNET library and modified it to bypass the OPNET
radio model and to use our MAC/PHY interface module. We
focus in this study on the Direct Sequence mode which uses a
fixed frequency that occupies 22 MHz of the frequency band.
The center frequency is set to 2.437 GHz.
At the MAC layer, a set of performance metrics are defined
including probability of packet loss. Packet loss measures the
number of packets discarded at the MAC layer due to errors
in the bit stream. This measure is calculated after performing
error correction.
3.2. PHY model
The transmitters, channel, and receivers are implemented
at complex baseband.
For a given transmitter, inphase
and quadrature samples are generated at a sampling rate of
44 × 10^6 per second. This rate provides four samples/symbol
for the 11 Mbit/s 802.11 mode, enough to implement a good
receiver.
It is also high enough to allow digital modulation
of the Bluetooth signal to account for its frequency hopping
. Specifically, since the Bluetooth signal is approximately
1 MHz wide, it can be modulated up to almost 22 MHz,
which is more than enough to cover the 11 MHz bandwidth
(one-sided) of the 802.11 signal. The received complex samples
from both the desired transmitter and the interferer(s) are
added together at the receiver.
While there are a number of possible Bluetooth receiver
designs, we chose to implement the noncoherent limiter-discriminator
(LD) receiver [3,14]. Its simplicity and relatively
low cost should make it the most common type for
many consumer applications. Details of the actual design are
given in [15].
In the 802.11b CCK receiver, each group of eight information
bits chooses a sequence of eight consecutive chips that
forms a symbol. As before, the inphase and quadrature components
of these chips are transmitted. The receiver looks at
the received symbol and decides which was the most likely
transmitted one. While one can implement this decoding procedure
by correlating against all 256 possible symbols, we
chose a slightly sub-optimal, but considerably faster architecture
similar to the Walsh-Hadamard transform; again details
can be found in [15].
3.3. Channel model
The channel model consists of a geometry-based propagation
model for the signals, as well as a noise model. For the indoor
channel, we apply a propagation model consisting of
two parts: (1) line-of-sight propagation (free-space) for the
first 8 m, and (2) a propagation exponent of 3.3 for distances
over 8 m. Consequently, the path loss in dB is given by
L_p = 32.45 + 20 log(f · d)    if d < 8 m,
L_p = 58.3 + 33 log(d/8)       otherwise,        (1)
where f is the frequency in GHz, and d is the distance in meters
. This model is similar to the one used by Kamerman [10].
Assuming unit gain for the transmitter and receiver antennas
and ignoring additional losses, the received power in dBmW
is
P_R = P_T − L_p,        (2)

where P_T is the transmitted power, also in dBmW. Equation
(2) is used for calculating the power received at a given
point due to either a Bluetooth or an 802.11 transmitter, since
this equation does not depend on the modulation method.
The main parameter that drives the PHY layer performance
is the signal-to-interference ratio between the desired signal
and the interfering signal. This ratio is given in dB by
SIR = P_R − P_I,        (3)

where P_I
is the interference power at the receiver. In the absence
of interference, the bit error rate for either the Bluetooth
or WLAN system is almost negligible for the transmitter powers
and ranges under consideration.
To complete the channel model, noise is added to the received
samples, according to the specified SNR. In decibels,
the signal-to-noise ratio is defined by SNR = P_R − S_R, where P_R is the received signal power and S_R is the receiver's sensitivity
in dBmW; this latter value is dependent on the receiver
model and so is an input parameter. Additive white Gaussian
noise (AWGN) is used to model the noise at the receivers.
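A direct transcription of equations (1)-(3) into Python is given below, assuming base-10 logarithms (consistent with the two branches of equation (1) matching near 8 m); the example values are illustrative only.

import math

def path_loss_db(f_ghz, d_m):
    # Equation (1): free-space loss up to 8 m, propagation exponent 3.3 beyond.
    if d_m < 8.0:
        return 32.45 + 20.0 * math.log10(f_ghz * d_m)
    return 58.3 + 33.0 * math.log10(d_m / 8.0)

def received_power_dbm(p_t_dbm, f_ghz, d_m):
    # Equation (2), with unit antenna gains and no additional losses.
    return p_t_dbm - path_loss_db(f_ghz, d_m)

def sir_db(p_r_dbm, p_i_dbm):
    # Equation (3): signal-to-interference ratio at the receiver.
    return p_r_dbm - p_i_dbm

# Example: a 25 mW (about 14 dBm) WLAN transmitter 10 m away and a 1 mW
# (0 dBm) Bluetooth interferer 1 m away, both near 2.44 GHz.
p_r = received_power_dbm(10.0 * math.log10(25.0), 2.44, 10.0)
p_i = received_power_dbm(0.0, 2.44, 1.0)
print(sir_db(p_r, p_i))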
3.4. Model validation
The results obtained from the simulation models were validated
against experimental and analytical results.
Since the implementation of the PHY layer required
choosing a number of design parameters, the first step in the
validation process is comparing the PHY results against theoretical
results. Complete BER curves of the Bluetooth and
802.11b systems are given in [15]; for the AWGN and flat
Rician channels without interference, all the results match
very closely to analytical bounds and other simulation results.
Also, the simulation results for both the MAC and PHY models
were compared and validated against analytical results for
packet loss given different traffic scenarios [6].
For the experimental testing, we use the topology in figure
4 and compare the packet loss observed for Bluetooth
voice and WLAN data with the simulation results in figure 5.
The experimental and simulation results are in good agreement.
Simulation results
We present simulation results to evaluate the performance of
Bluetooth in the presence of WLAN interference and vice
versa. First, we consider the effects of parameters such as
transmitted power, offered load, hop rate, and traffic type on
interference. Second, we look at two realistic interference
scenarios to quantify the severity of the performance degradation
for the Bluetooth and WLAN systems.
4.1. Factors affecting interference
We first consider a four node topology consisting of two
WLAN devices and two Bluetooth devices (one master and
one slave) as shown in figure 4. The WLAN access point
(AP) is located at (0, 15) m, and the WLAN mobile is fixed at
(0, 1) m. The Bluetooth slave device is fixed at (0, 0) m and
the master is fixed at (1, 0) m.
In an effort to control the interference on Bluetooth and
WLAN, we define two scenarios. In the first scenario, we
let the mobile be the generator of 802.11 data, while the AP
is the sink. In this case, the interference is from the mobile
sending data packets to the AP and receiving acknowledgments
(ACKs) from it. Since most of the WLAN traffic is
Figure 4. Topology 1. Two WLAN devices and one Bluetooth piconet.
Table 2
Summary of the scenarios.

Scenario   Desired signal   Interferer signal   WLAN AP   WLAN mobile
1          Bluetooth        WLAN                Sink      Source
2          WLAN             Bluetooth           Source    Sink
originating close to the Bluetooth piconet, both the master and
the slave may suffer from serious interference. In the second
scenario, the traffic is generated at the AP and received at the
WLAN mobile. Because the data packets are generally longer
then the ACKs, this is a more critical scenario for the WLAN
then when the mobile is the source. Table 2 summarizes the
two scenarios.
For Bluetooth, we consider two types of applications,
voice and data. For voice, we assume a symmetric stream
of 64 Kbit/s each way using HV1 packet encapsulation. For
data traffic, we consider a source that generates DM5 packets.
The packet interarrival time is exponentially distributed, and
its mean in seconds is computed according to
t_B = 2 n_s T_s / λ,        (4)

where λ is the offered load; n_s is the number of slots occupied by a packet. For DM5, n_s = 5. T_s is the slot size, equal to 625 μs.
For WLAN, we use the 11 Mbit/s mode and consider a
data application. Typical applications for WLAN could be
ftp or http. However, since we are mainly interested in the
MAC layer performance, we abstract the parameters for the
application model to packet size and offered load and do not
model the entire TCP/IP stack. We fix the packet payload to
12,000 bits which is the maximum size for the MAC payload
data unit, and vary the offered load λ. The packet interarrival time in seconds, t_W, is exponentially distributed, and its mean is computed according to

t_W = (192/1,000,000 + 12,224/11,000,000) / λ,        (5)

where the 192-bit PLCP header is sent at 1 Mbit/s and the payload at 11 Mbit/s. Unless specified otherwise, we use the configuration and system parameters shown in table 3.

Table 3
Simulation parameters.

Simulation parameters                 Values
Propagation delay                     5 μs/km
Length of simulation run              30 s
Bluetooth parameters
  ACL baseband packet encapsulation   DM5
  SCO baseband packet encapsulation   HV1
  Transmitted power                   1 mW
WLAN parameters
  Transmitted power                   25 mW
  Packet header                       224 bits
  Packet payload                      12,000 bits
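The mean interarrival times of equations (4) and (5) can be computed as follows; the division by the offered load λ reflects the reconstruction above, and the numeric constants come from table 3 (a sketch, not the simulator code).

BT_SLOT = 625e-6   # Bluetooth slot duration T_s in seconds

def bt_mean_interarrival(load, n_slots=5):
    # Equation (4): mean interarrival time of Bluetooth packets (DM5: 5 slots).
    return 2 * n_slots * BT_SLOT / load

def wlan_mean_interarrival(load, header_bits=224, payload_bits=12000):
    # Equation (5): 192-bit PLCP preamble/header at 1 Mbit/s, MAC header plus
    # payload at 11 Mbit/s, scaled by the offered load.
    return (192 / 1e6 + (header_bits + payload_bits) / 11e6) / load

# Example: a fully loaded DM5 stream and a 60% loaded WLAN source.
print(bt_mean_interarrival(1.0), wlan_mean_interarrival(0.6))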
For scenarios 1 and 2, we run 15 trials using a different
random seed for each trial. In addition to plotting the mean
value, confidence intervals, showing plus and minus two standard
deviations, are also included. From figures 5 and 6, one
sees that the statistical variation around the mean values is
very small. In addition to the comparisons with analytical and
experimental results described in section 3.4, this fact provides
further validation for the results.
4.1.1. WLAN transmission power
First, we look at the effect on Bluetooth of increasing the
WLAN transmission power in scenario 1; that is, increasing
the interferer transmission power on the victim signal. Since
power control algorithms exist in many WLAN implementa-tions
, it is important to consider how varying the transmitted
power changes the interference. However, since Bluetooth
was designed as a low power device, we fix its transmitter
power at 1 mW for all simulations.
We fix the WLAN offered load to 60% for different Bluetooth traffic types and values of the Bluetooth offered load. In figure 5(a), we note a saturation
effect around 10 mW. A threshold, which is close to 22/79,
corresponds to the probability that Bluetooth is hopping in the
WLAN occupied band. Thus, increasing the WLAN transmission
power beyond 10 mW does not affect the Bluetooth
packet loss. Between 1 and 5 mW, a small change in the
WLAN transmitted power triples the Bluetooth packet loss.
Please note the relative positions of the packet loss curves for different values of the Bluetooth offered load between 1 and 5 mW; as the offered load increases, the packet loss is higher. Also, note that Bluetooth voice has the
lowest packet loss, partly due to its short packet size. A second
reason for the low loss probability is that voice packets
are rejected only if there are errors in the access code or
packet headers, cf. table 1. A packet may be accepted with
a relatively large number of bit errors in the payload, which
may lead to a substantial reduction in subjective voice quality.
Figure 5(b) shows the probability of packet loss for the
WLAN mobile device.
This corresponds to ACKs being
Figure 5. WLAN offered load = 60%. (a) Scenario 1: probability of packet loss for the Bluetooth slave. (b) Scenario 1: probability of packet loss for the WLAN mobile. (c) Scenario 2: probability of packet loss for the WLAN mobile.
dropped at the WLAN source. The general trend is that the
packet loss decreases as the WLAN transmitted power increases
. However, we notice a slight "bump" between 1 and
5 mW. This is due to the effect of closed-loop interference.
The WLAN source increases its transmitted power and causes
more interference on the Bluetooth devices; as a result, there
are more retransmissions in both the Bluetooth and WLAN
piconets, which causes more lost ACKs at the WLAN source.
Next, we consider the effect of increasing the WLAN
transmission power on the WLAN performance in scenario 2.
From figure 5(c), we observe that even if the WLAN transmission
power is fifty times more than the Bluetooth transmission
power (fixed at 1 mW), the packet loss for the WLAN
does not change. This leads us to an interesting observation
on power control. Basically, we note that increasing the
transmission power does not necessarily improve the performance. However, decreasing the transmission power is usually a "good neighbor" strategy that may help reduce the interference
on other devices.
4.1.2. Offered load
The offered load, also referred to in some cases as duty cycle
, is an interesting parameter to track. Consider scenario 2,
where Bluetooth is the interferer and fix the WLAN transmission
power to 25 mW. We observe that for the WLAN, the
packet loss is proportional to the Bluetooth offered load as
shown in figure 6. For Bluetooth offered loads equal to 20%, 50%, and 100%, the packet loss is 7%, 15%, and 25%, respectively. This observation
error is shown to depend not only on the offered load of the
interferer system but also on the packet sizes of both systems.
Also note that the probability of loss for the 30% WLAN offered load is slightly higher than for the 60% WLAN offered load. However, this difference is statistically insignificant.

Figure 6. Scenario 2. Probability of packet loss for the WLAN mobile.
The significance of the packet size is apparent in figures
5(a) and (c), where short Bluetooth voice packets lead
to less packet loss for Bluetooth but cause more interference
for WLAN. However, at the WLAN 11 Mbit/s rate, changing the
WLAN packet size over the range 1,000 to 12,000 bits has very
little effect on the performance of either the WLAN or Bluetooth,
due to the relatively short transmission time of the WLAN packet. At the 1 Mbit/s
rate, WLAN packets of the same bit lengths take considerably
longer to transmit, and the effect of packet size is somewhat
more pronounced. For a further discussion of the 1 Mbit/s
case, please see [7].
4.1.3. Bluetooth hop rate
In order to highlight the effect of the Bluetooth hop rate on
WLAN, we use different packet types, DM1, DM3, and DM5;
these packets occupy 1, 3, and 5 time slots, respectively. The
Bluetooth hop rate is determined by the number of time slots
occupied by a packet. Thus, the hop rate is 1600, 533, and
320 hops/s for DM1, DM3, and DM5 packets, respectively.
The offered load for Bluetooth is set to 100%. The results in
table 4 clearly indicate that a faster hop rate leads to higher
packet losses (44%, 28%, and 26% for DM1, DM3 and DM5,
respectively). Note that the results are rather insensitive to the
WLAN offered load.
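
The hop rates quoted above follow directly from the 625 microsecond slot structure, as the short sketch below illustrates.

```python
# Effective Bluetooth hop rate versus packet length: the radio hops once per
# packet, and each packet occupies 1, 3, or 5 slots of 625 microseconds.
SLOT_US = 625

def hop_rate(slots_per_packet):
    return 1e6 / (slots_per_packet * SLOT_US)          # hops per second

for name, slots in (("DM1", 1), ("DM3", 3), ("DM5", 5)):
    print(name, round(hop_rate(slots)), "hops/s")       # 1600, 533, 320
```
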
4.1.4. Bluetooth traffic type
The question here is whether Bluetooth voice affects WLAN
more than Bluetooth data, and vice versa. We use three
types of packets for voice encapsulation, namely, HV1, HV2,
and HV3. HV1 represents the worst case of interference for
WLAN as shown in table 5 with 44% packet loss. HV2 and
HV3, which contain less error correction and more user information,
are sent less often and, therefore, interfere less with
WLAN (25% and 16% for HV2 and HV3, respectively). The
WLAN packet loss with Bluetooth data interference is 19%.
Please note that the results do not depend on the WLAN offered load.

Table 4
Scenario 2. Probability of WLAN packet loss versus Bluetooth hop rate.
BT packet type   WLAN offered load = 30%   WLAN offered load = 60%
DM1              0.449                     0.449
DM3              0.286                     0.277
DM5              0.269                     0.248

Table 5
Scenario 2. Probability of WLAN packet loss versus Bluetooth traffic type.
BT traffic                 WLAN offered load = 30%   WLAN offered load = 60%
Voice, HV1                 0.446                     0.470
Voice, HV2                 0.253                     0.257
Voice, HV3                 0.166                     0.169
Data, offered load = 60%   0.191                     0.177

Table 6
Scenario 1. Probability of Bluetooth packet loss versus Bluetooth traffic type.
BT traffic                 WLAN offered load = 30%   WLAN offered load = 60%
Voice, HV1                 0.077                     0.141
Voice, HV2                 0.075                     0.149
Voice, HV3                 0.069                     0.136
Data, offered load = 60%   0.2089                    0.210
On the other hand, the probability of packet loss for Bluetooth
data (20%) is higher than for Bluetooth voice (7%) as
shown in table 6. Note that doubling the WLAN offered load
to 60% doubles the Bluetooth voice packet loss. Also, since
all three types of voice packets suffer the same packet loss,
it is preferable to use HV3, which causes less interference
on the WLAN. The error correction coding in HV1 and HV2
packets may provide greater range in a noise-limited environment
, but this coding is far too weak to protect the packets
from interference. Instead, it is the frequency hopping ability
of Bluetooth that limits the damage done by the WLAN.
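
The ordering HV1 > HV2 > HV3 in table 5 tracks how much air time each voice packet type reserves. The sketch below uses the SCO reservation intervals from the Bluetooth specification (HV1 repeats every 2 slots, HV2 every 4, HV3 every 6, one slot per direction), which are not restated in this paper; the resulting 100%/50%/33% slot occupancies follow the same ordering as the 44%/25%/16% WLAN losses reported above.

```python
# Slot occupancy of a single SCO voice link, assuming the reservation
# intervals from the Bluetooth specification (not restated in this paper):
# HV1 repeats every 2 slots, HV2 every 4, HV3 every 6, one slot per direction.
SCO_INTERVAL_SLOTS = {"HV1": 2, "HV2": 4, "HV3": 6}

for packet_type, interval in SCO_INTERVAL_SLOTS.items():
    occupancy = 2 / interval                 # master and slave slots per interval
    print(f"{packet_type}: {occupancy:.0%} of slots occupied")
# HV1: 100%, HV2: 50%, HV3: 33% -- the same ordering as the WLAN losses above.
```
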
4.1.5. Bluetooth transmission power
While most Bluetooth devices will be operating at 1 mW, the
specification also allows higher transmitter powers. Table 7
shows the probability of packet loss for both Bluetooth and
the WLAN for three values of the BT transmitter power and
two types of Bluetooth traffic. As expected, higher transmitter
powers lead to more lost WLAN packets, regardless of the
BT traffic type. Increasing the power from 1 to 10 mW leads
to approximately a 50% increase in WLAN loss. Conversely,
the Bluetooth packet error rate decreases. It is still not clear how
beneficial this decrease is for Bluetooth; even a loss probability
of 0.0335 may lead to unacceptable voice quality.
Figure 7. Scenario 1. Probability of packet loss for the Bluetooth slave.

Table 7
Scenario 2. Probability of packet loss versus Bluetooth transmission power (mW). WLAN offered load = 60%.
BT traffic                 BT power (mW)   BT loss probability   WLAN loss probability
Data, offered load = 60%   1               0.2125                0.0961
                           2.5             0.2085                0.1227
                           10              0.1733                0.1358
Voice                      1               0.1417                0.1253
                           2.5             0.1179                0.1609
                           10              0.0335                0.1977

4.1.6. Bluetooth packet error correction
So far, the results shown for the Bluetooth data are with DM5
packets, which use a 2/3 block code on the packet payload.
In order to show the effect of error correction on the probability
of packet loss, we repeat scenario 1 and compare the results
given in figures 5(a) and 7, obtained with DM5 and DH5
packets, respectively. As expected, the probability of packet
loss for DM5 packets (figure 5(a)) is slightly less than for
DH5 packets (figure 7) for WLAN transmission powers less
than 5 mW. Thus, for low levels of interference, a 2/3 block
code can reduce the probability of loss by 4%. However, for
WLAN transmission powers above 5 mW, the probability of
packet loss is the same for both DM5 and DH5 packets.
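
The limited value of the payload FEC can be illustrated with a simple memoryless-channel calculation. The sketch below assumes the 2/3 rate code is the single-error-correcting (15,10) shortened Hamming code defined in the Bluetooth specification (the text above only says "2/3 block code") and a nominal 1,000-bit payload. At moderate bit-error rates the code buys a few percentage points, while at the very high error rates produced by an in-band collision both coded and uncoded payloads are lost, consistent with the small improvement at low interference and none at high interference reported here.

```python
# Illustrative comparison of a DM (FEC-protected) payload and a DH (uncoded)
# payload on a memoryless channel with bit-error rate p. Assumes the 2/3 rate
# code is the single-error-correcting (15,10) shortened Hamming code from the
# Bluetooth specification; the 1,000-bit payload length is nominal.
def block_error(p):
    """A 15-bit codeword is uncorrectable if it contains 2 or more bit errors."""
    return 1 - ((1 - p) ** 15 + 15 * p * (1 - p) ** 14)

def payload_error(p, info_bits=1000, coded=True):
    if coded:                                  # 10 information bits per codeword
        n_blocks = -(-info_bits // 10)         # ceiling division
        return 1 - (1 - block_error(p)) ** n_blocks
    return 1 - (1 - p) ** info_bits            # uncoded: any bit error fails the CRC

for ber in (1e-4, 1e-3, 1e-2):
    print(f"BER={ber:g}: coded {payload_error(ber):.3f}, "
          f"uncoded {payload_error(ber, coded=False):.3f}")
```
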
4.2. Realistic interference topologies
In this section, we consider two practical interference topologies.
While they appear to be somewhat different, they actually
complement each other. The first one has the WLAN
device, in the midst of the Bluetooth piconets, acting as the
source, while the second one has the WLAN access point acting
as the source.
4.2.1. Topology 2
We first look at the topology illustrated in figure 8. It consists
of one WLAN AP located at (0, 15) m, and one WLAN
mobile at (0, 0) m. The WLAN traffic is generated at the mobile
, while the AP returns acknowledgments. The distance
between the WLAN AP and mobile is d_W = 15 m.
Figure 8. Topology 2. Two WLAN devices and ten Bluetooth piconets.
There are ten Bluetooth piconets randomly placed, covering a disk.
The center of the disk is located at (0, 0) and its radius is
r = 10 m. We define d_B as the distance between a Bluetooth
master and slave pair; d_B = 1 m for half of the master and
slave pairs, while d_B = 2 m for the other half of the master
and slave pairs.

Table 8
Experiment 3 results.
BT traffic                 WLAN offered load   BT loss, d_B = 1 m   BT loss, d_B = 2 m   WLAN loss
Data, offered load = 30%   30%                 0.056                0.157                0.121
                           60%                 0.060                0.188                0.170
Data, offered load = 60%   30%                 0.057                0.243                0.405
                           60%                 0.061                0.247                0.381
Voice                      30%                 0.009                0.104                1
                           60%                 0.008                0.106                1
In this case, the main interference on Bluetooth is caused
by the WLAN source located in the center of the disk; the aggregation
of the ten piconets affects the WLAN source. We
found that when the WLAN system is not operating, the Bluetooth
packet loss is negligible (less than 1%). Table 8 gives
the packet loss for the Bluetooth and WLAN devices. The
packet loss for the Bluetooth devices is averaged over the
master and slave devices and split into two groups: piconets
with d_B = 1 m and piconets with d_B = 2 m. For WLAN, the
packet loss is measured at the source. It is effectively zero at
the sink.
We observe that the WLAN packet loss depends on the
Bluetooth traffic load value. As the Bluetooth offered load is varied from 30% to
60%, the WLAN packet loss changes significantly, from
12% to 40%. However, the WLAN packet loss is insensitive
to the WLAN offered load. Consistent with previous results,
Bluetooth voice represents the worst case interference scenario
for WLAN.
In general, the Bluetooth packet loss for d_B = 1 m is less
than for d_B = 2 m. The reason is that when the Bluetooth
signal is stronger (over a shorter distance), the impact of interference
is less significant.
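
The gap between the d_B = 1 m and d_B = 2 m piconets is consistent with simple link-budget reasoning: halving the desired link distance raises the received signal power while leaving the interference unchanged, so the signal-to-interference ratio improves by the same amount. The sketch below quantifies this under an assumed distance-power-law path loss; the exponent values are illustrative and not the channel model used in the simulations.

```python
# Link-budget illustration (our assumption of a simple distance^n path-loss
# law, not the simulation's channel model): shortening the desired Bluetooth
# link from 2 m to 1 m raises the received signal, and hence the
# signal-to-interference ratio, by 10*n*log10(2) dB.
import math

def sir_gain_db(d_near_m, d_far_m, path_loss_exponent=2.0):
    return 10 * path_loss_exponent * math.log10(d_far_m / d_near_m)

print(f"free space (n=2):    {sir_gain_db(1.0, 2.0):.1f} dB")
print(f"indoor-like (n=3.3): {sir_gain_db(1.0, 2.0, 3.3):.1f} dB")
```
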
4.2.2. Topology 3
We next consider the topology given in figure 9. It includes
one WLAN AP and four WLAN mobile devices. The WLAN
AP is located at (0, 15) m, and it is the source of the traffic
generation. The four WLAN mobile devices are placed
on a two-dimensional grid at (-1, 1), (1, 1), (-1, -1), and
(1, -1) m. In this topology, there are four Bluetooth piconets,
each consisting of a master-slave device pair. The placement
of the Bluetooth devices is as shown in the figure.
In this case, we are looking at the effect of Bluetooth piconets
on the four WLAN sink devices. The packet loss measure
for WLAN is averaged over the four devices. As shown
in table 9, the impact of WLAN interference on Bluetooth is
minimal, given that the WLAN source is far from the Bluetooth
piconets. As expected, the WLAN packet loss depends
on the Bluetooth traffic conditions, and it is rather insensitive
to the WLAN traffic activity. With Bluetooth voice, the
WLAN packet loss is close to 84%. It is 57% for Bluetooth
data with WLAN offered loads of 30% and 60%.
Concluding remarks
We presented results on the performance of Bluetooth and
WLAN operating in the 2.4 GHz ISM band based on detailed
channel, MAC, and PHY layer models for both systems. The
evaluation framework used allows us to study the impact of
interference in a closed loop environment where two systems
are affecting each other, and explore the MAC and PHY layer
interactions in each system.
We are able to draw some useful conclusions based on our
results. First, we note that power control may have limited
benefits in this environment. Increasing the WLAN transmission
power to even fifty times the power of Bluetooth is not
sufficient to reduce the WLAN packet loss. On the other hand,
limiting the WLAN power may help avoid interference to
Bluetooth. Second, using a slower hop rate for Bluetooth (i.e.
longer packet sizes) may cause less interference to WLAN.
Third, Bluetooth voice represents the worst type of interference
for WLAN. In addition, the WLAN performance seems
to degrade as the Bluetooth offered load is increased. Finally,
the use of error correcting block codes in the Bluetooth payload
does not improve performance. The errors caused by
interference are often too many to correct.
Overall, the results are dependent on the traffic distribution.
Yet, there may be little room for parameter optimization,
especially for the practical scenarios. Not only does the complexity
of the interactions and the number of parameters to adjust
make the optimization problem intractable, but choosing
an objective function is very dependent on the applications
and the scenario.

Figure 9. Topology 3. Five WLAN devices and four Bluetooth piconets.

Table 9
Experiment 4 results.
BT traffic                 WLAN offered load   BT loss   WLAN loss
Data, offered load = 30%   30%                 0.007     0.574
                           60%                 0.006     0.580
Data, offered load = 60%   30%                 0.007     0.576
                           60%                 0.006     0.580
Voice                      30%                 0.002     0.836
                           60%                 0.001     0.828

Thus, achieving acceptable performance for
a particular system comes at the expense of the other system's
throughput. Therefore, we believe that the primary solutions
to this problem lie in the development of coexistence mechanisms.

References
[1] BlueHoc: Bluetooth Performance Evaluation Tool, Open-Source (2001), http://oss.software.ibm.com/developerworks/opensource/~bluehoc
[2] Bluetooth Special Interest Group, Specifications of the Bluetooth system
, Vol. 1, v.1.0B Core, and Vol. 2, v1.0B Profiles (December 1999).
[3] T. Ekvetchavit and Z. Zvonar, Performance of phase-locked loop receiver
in digital FM systems, in: Ninth IEEE International Symposium
on Personal, Indoor and Mobile Radio Communications, Vol. 1 (1998)
pp. 381-385.
[4] G. Ennis, Impact of Bluetooth on 802.11 direct sequence, IEEE
P802.11 Working Group Contribution, IEEE P802.11-98/319 (September
1998).
[5] D. Fumolari, Link performance of an embedded Bluetooth personal
area network, in: Proceedings of IEEE ICC'01, Helsinki, Finland (June
2001).
[6] N. Golmie and F. Mouveaux, Interference in the 2.4 GHz ISM band:
Impact on the Bluetooth access control performance, in: Proceedings
of IEEE ICC'01, Helsinki, Finland (June 2001).
[7] N. Golmie, R.E. Van Dyck, and A. Soltanian, Interference of Bluetooth
and IEEE 802.11: Simulation modeling and performance evaluation
, in: Proceedings of the Fourth ACM International Workshop on
Modeling, Analysis, and Simulation of Wireless and Mobile Systems,
MSWIM'01, Rome, Italy (July 2001).
[8] I. Howitt, V. Mitter and J. Gutierrez, Empirical study for IEEE 802.11
and Bluetooth interoperability, in: Proceedings of IEEE Vehicular
Technology Conference (VTC) (Spring 2001).
[9] IEEE Standard 802.11, IEEE standard for wireless LAN Medium Access
Control (MAC) and Physical Layer (PHY) specification (June
1997).
[10] A. Kamerman, Coexistence between Bluetooth and IEEE 802.11 CCK:
Solutions to avoid mutual interference, IEEE P802.11 Working Group
Contribution, IEEE P802.11-00/162r0 (July 2000).
[11] A. Kamerman and N. Erkocevic, Microwave oven interference on wireless
LANs operating in the 2.4 GHz ISM band, in: Proceedings of the
8th IEEE International Symposium on Personal, Indoor and Mobile
Radio Communications, Vol. 3 (1997) pp. 1221-1227.
[12] J. Lansford, A. Stephens and R. Nevo, Wi-Fi (802.11b) and Bluetooth:
Enabling coexistence, IEEE Network Magazine (September/October
2001).
[13] S. Shellhammer, Packet error rate of an IEEE 802.11 WLAN in the
presence of Bluetooth, IEEE P802.15 Working Group Contribution,
IEEE P802.15-00/133r0 (May 2000).
[14] M.K. Simon and C.C. Wang, Differential versus limiter-discriminator
detection of narrow-band FM, IEEE Transactions on Communications
COM-31(11) (November 1983) 1227-1234.
[15] A. Soltanian and R.E. Van Dyck, Physical layer performance for coexistence
of Bluetooth and IEEE 802.11b, in: Virginia Tech. Symposium
on Wireless Personal Communications (June 2001).
[16] M. Takai, R. Bagrodia, A. Lee and M. Gerla, Impact of channel models
on simulation of large scale wireless networks, in: Proceedings of
ACM/IEEE MSWIM'99, Seattle, WA (August 1999).
[17] S. Unawong, S. Miyamoto and N. Morinaga, Techniques to improve the
performance of wireless LAN under ISM interference environments, in:
Fifth Asia-Pacific Conference on Communications, 1999 and Fourth
Optoelectronics and Communications Conference, Vol. 1 (1999) pp.
802-805.
[18] J. Zyren, Reliability of IEEE 802.11 WLANs in presence of Bluetooth
radios, IEEE P802.11 Working Group Contribution, IEEE P802.15-99/073r0
(September 1999).
[19] S. Zurbes, W. Stahl, K. Matheus and J. Haartsen, Radio network performance
of Bluetooth, in: Proceedings of IEEE International Conference
on Communications, ICC 2000, New Orleans, LA, Vol. 3 (June 2000)
pp. 1563-1567.
Nada Golmie received the M.S.E degree in computer
engineering from Syracuse University, Syracuse, NY, in 1993, and the Ph.D. degree in computer
science from University of Maryland, College Park,
MD, in 2002. Since 1993, she has been a research
engineer at the advanced networking technologies
division at the National Institute of Standards and
Technology (NIST). Her research in traffic management
and flow control led to several papers presented
at professional conferences, journals and numerous
contributions to international standards organizations and industry-led consortia. Her current work is focused on the performance evaluation of protocols
for Wireless Personal Area Networks. Her research interests include modeling
and performance analysis of network protocols, media access control,
and Quality of Service for IP and wireless network technologies. She is the
vice-chair of the IEEE 802.15 Coexistence Task Group.
E-mail: [email protected]
Robert E. Van Dyck received the B.E. and M.E.E. degrees from Stevens Institute of Technology, Hoboken, NJ, in 1985 and 1986, respectively, and the
Ph.D. degree in electrical engineering from the North
Carolina State University at Raleigh in 1992. Since
June 2000, he has been a member of the Advanced
Network Technologies Division of the National Institute
of Standards and Technology, Gaithersburg,
MD. Prior to that, he was an Assistant Professor in
the Department of Electrical Engineering, the Pennsylvania
State University, University Park, PA. During 1999, he was a Summer
Faculty Research Fellow at Rome Laboratory. His other previous affiliations
include GEC-Marconi Electronic Systems, Wayne, NJ (1995-1996), the Center for Computer Aids for Industrial Productivity, Rutgers University, Piscataway, NJ (1992-1995), the Computer Science Corporation, Research Triangle Park, NC (1989), and the Communications Laboratory, Raytheon Co., Marlborough, MA (1985-1988). His present research interests are in
self-organization of sensor networks, multimedia communications and networking
, and source and channel coding for wireless communications.
Amir Soltanian received his M.S. degree from
Sharif University of Technology, Tehran, Iran, in
1994. He has been working in the industry for 6
years doing research on GSM receivers. Currently,
he is a guest researcher at National Institute of Standards
and Technology. His current research is the
study of the interference cancellation methods for
the physical layer of the Bluetooth and IEEE 802.11
WLAN.
Arnaud Tonnerre is a graduate student at the École Nationale Supérieure des Télécommunications
(ENST) in Bretagne, France. He is currently doing
an internship at the National Institute of Standards
and Technology (NIST) in Gaithersburg, MD. He
will receive the Diplome d'Ingenieur in June 2003.
His research interests are in wireless personal area
networks.
Olivier Rébala received a computer science degree from the Institut supérieur d'informatique, de modélisation et de leurs applications (ISIMA) in
Clermont-Ferrand, France, in September 2001. He
is currently a Guest Researcher at the National Institute
of Standards and Technology (NIST) in the
advanced networking technologies division. His research
interests include the performance evaluation
of wireless networking protocols. | evaluation;packet loss;performance degradation;IEEE 802.11b;simulation framework;Bluetooth;interference;hop rate;transmission power;topology;WPANs;WLAN;offered load |
119 | Is a Picture Worth a Thousand Words? | What makes a peripheral or ambient display more effective at presenting awareness information than another ? Presently, little is known in this regard and techniques for evaluating these types of displays are just beginning to be developed. In this article, we focus on one aspect of a peripheral display's effectiveness-its ability to communicate information at a glance. We conducted an evaluation of the InfoCanvas, a peripheral display that conveys awareness information graphically as a form of information art, by assessing how well people recall information when it is presented for a brief period of time. We compare performance of the InfoCanvas to two other electronic information displays , a Web portal style and a text-based display, when each display was viewed for a short period of time. We found that participants noted and recalled significantly more information when presented by the InfoCanvas than by either of the other displays despite having to learn the additional graphical representations employed by the InfoCanvas. | Introduction
Peripheral awareness displays are systems that reside in
a user's environment within the periphery of the user's
attention. As such, the purpose of these displays is not
for monitoring vital tasks. Rather, peripheral displays
best serve as communication media that people can
opportunistically examine to maintain information
awareness [11, 17].
The term ambient display [22] has been used to describe
systems like this as well, but to avoid confusion,
throughout this document we use this term to describe
peripheral awareness systems that generally convey
only one piece of information. We use the term peripheral
display to describe peripheral awareness systems
that may present multiple information items. Both peripheral
and ambient displays are designed not to distract
people from their tasks at hand, but to be subtle,
calm reminders that can be occasionally noticed. In
addition to presenting information, the displays also
frequently contribute to the aesthetics of the locale in
which they are deployed [1].
Dozens of peripheral/ambient displays have been
created in many shapes and form factors. Some displays
, such as the dangling string [21], tangible displays
including water lamps and pinwheels [4], and the Information
Percolator [7] have utilized physical (and
often everyday) objects. Other displays, such as Informative
Artwork [8] and the Digital Family Portrait [16]
use electronic displays to represent information in a
graphical manner. All these systems primarily communicate
one item of information.
Other peripheral/ambient displays exist that are capable
of conveying more than one information item
simultaneously. The Digital Family Portrait, although
primarily intended to allow geographically separated
family members to maintain awareness of each other, allows
for the optional displaying of additional information
such as weather [16]. Audio cues, instead of visual
displays, have also been utilized in peripheral displays
to convey multiple nuggets of information in the Audio
Aura system [15]. The Kandinsky system [5] attempts
to create artistic collages of various pieces of information
, and the Scope system is an abstract visualization
displaying notification information from multiple
sources [19]. SideShow [3] provides a display sidebar
containing multiple awareness icons such as traffic and
weather indicators.
The InfoCanvas [14], the focus of this article, differs
from the initial set of systems above by explicitly
promoting the conveyance of multiple pieces of information
concurrently. It differs from the latter set of
systems in promoting greater flexibility of information
monitored and its subsequent visual representation, as
well as allowing for greater user control in specifying
those mappings.
Although many types of displays exist and new ones
are being developed, little is known about what makes a
particular peripheral/ambient display more successful at
presenting information than another [10]. Furthermore,
such displays are inherently difficult to evaluate formally
since they are designed not to distract the user.
As a result, evaluation techniques have been limited, as
Mankoff et al. note [10], to formative ethnographies
[16] and within-lab studies where displays are developed
and subsequently refined over time by their designers
[6]. However, there has been recent work on
developing new evaluation techniques for ambient displays
, most notably Mankoff et al.'s set of discount
formative techniques [10] and McCrickard et al.'s notification
system categorization framework [13].
The goal of this study is not to evaluate peripheral
displays in general. Rather, we focus on one particular
component of a peripheral display's effectiveness, its
ability to communicate information. More specifically,
we examine how the abstract data mappings of electronic
information artwork affect people's interpretation
and memory of the data.
Both the InfoCanvas [14] and the Informative Artwork
[8] projects make use of dynamic pieces of electronic
artwork to represent information in an eye-appealing
manner. Such displays are placed within a
person's work environment or are publicly displayed,
enabling at-a-glance information awareness. How well
the systems convey information is not known, however.
Note that the success of a peripheral/ambient display
involves more than simple information acquisition.
Because these displays are positioned in people's environments,
aesthetics and attractiveness influence adoption
as well. The research reported here, though, focuses
solely on such displays' ability to convey information
. In a companion study, the issues of aesthetics
and longer-term use of the InfoCanvas system are currently
being explored.
Experimental Design
This study examines if an electronic picture "is worth a
thousand words." That is, how well are users able to
learn mappings and subsequently comprehend and recall
information when it is presented in the form of
electronic artwork in comparison to more traditional
methods. We accomplish this by designing an InfoCanvas
display as well as two more conventional information
displays and evaluating participants' memories
of them when they only see the displays for short
periods of time.
Study participants viewed three examples of each
display with each example encoding different data values
(described in detail in the next section). After
viewing a display for eight seconds, participants recalled
the information presented using a multiple-choice
questionnaire.
2.1 Materials
Ten items of information were selected to be monitored
: time of day, a weather forecast, a temperature
forecast, traffic conditions, a news headline, the Dow
Jones stock index value, an airfare price, updates to a
Web site, a count of new emails, and a baseball score.
These items are examples of information people typically
seek to maintain awareness of [14].
Three information screens were designed including
an InfoCanvas beach scene, a minimalist text-based
display, and a Web portal-like display. These three
displays were chosen to represent interesting points in a
spectrum of possibilities, as depicted in Figure 1, for
representing awareness information on electronic ambient
displays. Styles range from pure textual presentations
to highly abstract, graphical imagery. The InfoCanvas
and the Text-based display inhabit positions
near the endpoints of that spectrum. The Web Portal
display was designed to incorporate a hybrid of textual
and graphical representations, and resemble the types of
Web "start pages" that people frequently use to maintain
information awareness today [14].
Other interesting points in the spectrum include
more direct graphical (typically iconic) representations
of information as embodied by systems such as Sideshow
[3], and could be the subject of future experiments.
For this study, we compare the InfoCanvas to
two widely deployed types, Web portals (e.g. MyYahoo!)
and text-heavy news summaries or Web pages.
Figure 1: A spectrum of awareness displays ranging from textual to graphical presentations of information (systems placed along the spectrum include the Text-based display, My Yahoo!-style Web portals, Sideshow [3], Informative Artwork [6], and the InfoCanvas [14]).
All three displays in the study were designed seeking
a balance of experimental control and representation
of ecologically valid real-world use. Extensive
pilot testing and redesign were used to refine their appearance.
We designed the three displays to encode the
ten pieces of information in an appropriate manner for
that display style. In all three, we added a small
amount of extra information beyond the ten queried
information values, much as similar real world displays
would undoubtedly do.
All displays were presented full-screen on a
Viewsonic 15" LCD display running at a resolution of
1024 x 768. The InfoCanvas used the entire screen
area, and the other two displays used slightly less of the
entire display as will be explained below. In the following
subsections, we describe each of the displays in
more detail.
InfoCanvas Display
The InfoCanvas system supports a variety of artistic
scenes or themes. We chose to use a beach scene as
shown in Figure 2 for the experiment due to its popularity
with trial users. Individual objects in the scene represented
the ten data values as follows:
Airfare price: Represented by the vertical height
of the kite in the sky from $0 (near the water
level) to $400 (top of the screen).
News headline: Shown on the banner behind the
plane.
Time of day: Denoted by the sailboat moving from
the left side (12:01 AM) to the right side (11:59
PM).
Web site update: Represented by the color of the
leaves on the palm tree, green indicates a recent
update and brown indicates no recent changes.
Weather forecast: Illustrated through the actual
weather shown in the sky (e.g., clouds represents
a forecast of cloudy weather).
Temperature forecast: Represented by the height
of the large seagull in the sky, ranging from 50
degrees at water level to 90 degrees at the top of
the screen.
Dow Jones stock market change: Displayed by
the arrangement of seashells on the shoreline.
Shells form an arrow to indicate whether stocks
are up or down and the quantity of shells indicates
the value (three shells indicate a change of
0-50 points, five shells indicate a change of
more than 50 points).
New email messages: Depicted by the height of
liquid in the glass ranging from 0 new emails
(empty glass) to 20 new emails (a full glass).
Current traffic speed on a local roadway: Symbolized
by the color of the woman's bathing suit
with red indicating speed less than 25 MPH, yellow
indicating a speed between 25 and 50 MPH,
and green indicating a speed greater than 50
MPH.
Baseball score: Shown by the size of two beach
balls: A larger ball indicates a winning team and
identical ball sizes indicate a tied score. Color is
used to distinguish the two teams.
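
To make the flavor of these mappings concrete, the fragment below sketches how a few of them could be written in code. It is an illustration only, not part of the InfoCanvas implementation; the value ranges are those listed above, and the function and parameter names are invented.

```python
# Illustration only (not the InfoCanvas implementation): a few of the data-to-
# scene mappings listed above, with invented function and parameter names.
def lerp(value, lo, hi, out_lo, out_hi):
    """Linearly map value from [lo, hi] to [out_lo, out_hi], clamping."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return out_lo + t * (out_hi - out_lo)

def scene_parameters(data, screen_height=768):
    return {
        # Airfare $0-$400 -> kite height, water level (0) to top of screen
        "kite_y": lerp(data["airfare"], 0, 400, 0, screen_height),
        # Forecast 50-90 degrees -> seagull height
        "seagull_y": lerp(data["temperature"], 50, 90, 0, screen_height),
        # 0-20 new emails -> fill level of the glass (0.0 empty, 1.0 full)
        "glass_fill": lerp(data["new_email"], 0, 20, 0.0, 1.0),
        # Traffic speed -> color of the bathing suit (categorical mapping)
        "suit_color": ("red" if data["speed_mph"] < 25
                       else "yellow" if data["speed_mph"] <= 50
                       else "green"),
    }

print(scene_parameters({"airfare": 250, "temperature": 72,
                        "new_email": 5, "speed_mph": 40}))
```
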
These mappings were chosen to reflect a variety of
objects moving or changing size or color. In addition,
some mappings were chosen for being more intuitive
and direct, such as using weather icons to represent
weather or the metaphor of a kite flying in the sky to
reflect airfare price. Other mappings, such as representing
updates to a Website by tree leaf color, were intended
to be more abstract and indirect. A pilot study
of four InfoCanvas users revealed a wide variety of
mapping styles, both natural and abstract. As a result,
we wanted the scene used in this study to reflect this.
Furthermore, as also done in actual use, we placed additional
items in the scene such as the chair, umbrella,
and crab simply for aesthetic purposes.
Several items within this display present information
as a precise point along a continuous scale, including
the time-of-day, airfare, and forecasted temperature,
by displaying objects that move along a line. Other
items, including the traffic speed, stock update, and
baseball score, are represented using categorical encod-ings
. For example, the different shell arrangements
representing the Dow Jones stock update indicate four
different ranges of values. The implications of this difference
will be explored more fully later when describing
the questionnaire formats.
Text-Based Display
The Text-based display (shown in Figure 2) predominantly
uses text to display information. Web pages
such as MyYahoo were the inspiration for the Text-based
display, but the use of images, different colors,
and graphics were removed. Thus, the display represents
a position near the endpoint of the graphics-text
spectrum presented earlier.
As a result, we restricted formatting on this display
to changes in point size and the use of bold text with the
exception of using a fixed-width font to indicate stock
change values. (The fixed-width font helps to align numerical
stock values, providing a clean and orderly appearance
similar to the style used by existing services.)
Extra information beyond the ten data values on this
display included a few lines from a news article related
to the current headline, the current date, and additional
stock information for the Standard & Poor's 500 and
NASDAQ indices--items likely to appear on such a display.

Figure 2: Examples of the InfoCanvas beach scene (top), text-based (middle), and Web Portal displays (bottom) used in the study.
The Text-based display consisted of a region 970
pixels wide to 330 pixels high on the screen. Pilot testing
found this size optimal in allowing the use of columns
, section headers, and white space to make an effective
and visually pleasing display. Furthermore,
pilot testing indicated that information recall suffered as
the display's size increased, perhaps due to increased
eye movement, even though the data elements remained
located in the same position.
Web Portal Display
The Web Portal display (shown in Figure 2) also mimicked
the look and feel of popular no-cost "start" Web
pages such as My Yahoo. However, we added additional
formatting and iconic graphics/images as found
in awareness displays such as Sideshow [3] to differentiate
this display from the Text-based display. Web
portals, in actuality, tend to make relatively limited use
of images and graphics. Our introduction of graphics
and images served two main purposes--making the
display more of a hybrid between the highly artistic
InfoCanvas and a display utilizing only text, and also to
increase the effectiveness of the design by using graphics
to position items or convey information.
Graphics that encode values--those that change to
reflect information--in the Web Portal display include
the weather icon indicating the weather forecast, the
speedometer icon with a meter indicating the current
speed of vehicles, and an icon indicating the presence
of new email messages. In addition, an image related to
the news headline was displayed. Iconic images that
did not change and were used solely as positional anchors
included a picture frame icon for the Web site
update item, baseball team logos, and an airline logo.
In addition, colors and arrows were used to indicate
stock trends and the baseball team currently winning
was displayed in bold text.
The Web Portal display's extra information (e.g. not
encoding the ten queried values) included a few lines of
a news story related to the headline and the current
date, and the two other stock indices as done in the
Text-based display.
The Web Portal display used an area of 968 pixels
wide by 386 pixels high on the display. Again, iterative
development and pilot testing helped determine this
size was best to create a balanced and ordered layout
and be an effective presenter of information. As in the
Text-based display, each element on the Web Portal
display remained in the same relative position.
Design Considerations
As noted above, wherever we faced a design choice in
creating the Web Portal and Text-based displays, we
attempted to optimize the display to promote comprehension
. For example, both the Web Portal and Text-based
displays represent substantial improvements over
real-life examples. The Web Portal design contained
more graphics and images than what typically appears
on these Web pages. Pilot subjects found these graphics
and images to be beneficial in remembering information
. Furthermore, individual items were modified
during pilot testing to assist recall. For example, we
made the size of the weather forecast image substantially
larger than what is typically found on Web portals.
Likewise, we designed the Text-based display to be
a substantial improvement over existing text-based information
displays, such as tickers or small desktop
window applications, by introducing columns, section
headers, and white space.
Initial full-screen presentations used for the Web
Portal and Text-based display tended to look unwieldy
and resulted in lower recall of information during pilot
testing. We attributed this to the larger screen area that
participants had to visually parse. Hence, we reduced
the screen area occupied by those displays to promote
comprehension. Following that logic, InfoCanvas' larger
size should have served to negatively impact its
performance, if anything.
2.2 Participants
Forty-nine individuals (11 female) with normal or corrected-to-normal
eyesight participated in this study.
Participants ranged from 18 to 61 years of age (mean 24.2);
27 were graduate students, 17 were undergraduates,
and 5 were non-students. Participants were compensated
$10 for their time.
2.3 Procedure
Testing occurred in individual sessions lasting approximately
45 minutes. Participants sat two feet in
front of the LCD monitor. The keyboard and mouse
were removed from the area, leaving empty desk space
between the participant and the display. The experimenter
informed participants that they were participating
in a study to determine how much they could remember
from different information screens when they
could only see the screen for a brief amount of time.
A within-subjects experimental design was used and
the ordering of the display conditions was counterbalanced.
Participants were randomly assigned to an ordering
sequence. For each of the three displays, an introductory
tour, preparation task, and practice task were
given prior to performing three actual trials.
The introductory session included an explanation of
the display and the information found on it. For the
InfoCanvas and Web Portal displays, the behaviors of
the elements on the displays were also explained. Due
to the display's more complex and dynamic nature, the
introductory tour took longer to perform with the InfoCanvas
, approximately 3.5 minutes in duration, than
with the Web Portal and Text-based display, both approximately
1.5 minutes in duration.
Initially, especially with InfoCanvas, we had concerns
that the introductory tour might not be sufficient
to allow participants to learn each display. Pilot testing,
however, revealed that participants were able to quickly
learn the information mappings. To further ensure that
we would be testing information comprehension and
recall but not mapping recall with respect to the InfoCanvas
, participants were asked to point out the different
objects on a sample display and say aloud what information
each object represented. We also provided
participants with a reference sheet labeling the mappings
between information and objects on the InfoCanvas
. In practice, we found that participants seldom
looked at the sheet and some actually turned it over.
During the preparation task, participants were
shown an example display and instructed to complete a
sample recall questionnaire (explained in more detail
later in this section), much as they would in the actual
trials. In this phase, however, no time limit was enforced
for viewing the display. This task then allowed
the participant to better familiarize him or herself with
the display, the questionnaire style, and to ask additional
questions regarding the display, all while it was
visible.
Next, in the practice task, participants were exposed
to what the actual trials would be like. A recall questionnaire
was placed text side down in front of the participant
and then an information display was shown for
eight seconds. Pilot testing determined that this was a
suitable amount of exposure time to avoid ceiling or
floor effects, with recall averaging about five or six
items. Furthermore, participants during pilot testing
felt that this amount of time was indicative of the duration
of a glance of a person seeking multiple information
updates. Upon completion of the exposure, the
computer prompted the individual to turn over and
complete the recall sheet. Participants were instructed
to not guess on the recall questionnaire; if the participant
did not remember an item at all, he or she left that
item blank on the questionnaire.
The actual trials followed the practice task and consisted
of three exposure and recall activities involving
different data sets and hence data displays. Again, specific
emphasis was made to discourage the participant
from guessing on the recall. The same data values were
used for each position of the nine total experiment trials
independent of the display ordering, ensuring a balance
across the experiment.
Upon completion of the three different display conditions
, participants were given several concluding surveys
that captured subjective feedback from the participants
regarding perceived performance and display
preferences.
Recall Task
Ten questions, one per each information item, were
presented to participants after exposure to an information
display. We varied the question topic order across
trials to discourage participants from becoming accustomed
to a particular topic being the subject of the first
few questions and then seeking out information from
the displays on those topics. While participants were
not explicitly informed of this, the varied order came as
no surprise when they performed actual trials since they
had already encountered the recall sheet in the preparation
and practice tasks.
To minimize cognitive load, the questions were
designed to elicit the comprehension and recall of information
in the same manner that it had been encoded.
For all questions about the Text and Web Portal displays
, and for the majority of questions about the InfoCanvas
display, the question style was multiple-choice,
typically including four exact-value answers spread
relatively evenly across the range of possible answers.
For instance, the potential answers for the time of day
might have been 3:42am, 8:36am, 5:09pm, and
10:11pm. The newspaper headline question used four
possible answers containing some similarity (usually
using the same key words such as "Iraq" or "President
Bush") to ensure the recall of the headline by context,
not by recognition of a key word. The Web site update
question simply asked whether the site had been updated,
with yes and no as the possible answers. Finally,
the baseball score question asked which team was currently
winning and offered the choices of the Braves,
Pirates, or tied game. The data values used to generate
displays for the nine trials also were chosen to range
across the possible set of values.
"Exact Value"
"Categorical"
What is the status of the
Dow Jones?
What is the status of the
Dow Jones?
+ 89 points
+ 42 points
- 2 points
- 75 points
Up over 50 points
Up 0 50 points
Down 0 50 points
Down over 50 points
Figure 3: Example of exact value and categorical
recall questions.
122
For topics that the InfoCanvas presented categories
or ranges of values (e.g., traffic conditions, baseball
score, and stock updates), answer choices to the recall
questions were also presented in the form of ranges.
Figure 3 shows an example of how these differed using
stocks as an example. Note how the exact-value answers
lie within the intervals used; the questions and
answers were designed to be as similar as possible.
Furthermore, we felt that the more general issue of participants
needing to translate pictures into exact, usually
numeric, values would counter any benefit received by
the InfoCanvas in using ranges for a few questions.
Adjacent to each multiple-choice question on the
recall questionnaire was a confidence level scale with
choices for high, medium, or low confidence. Participants
were instructed to indicate their relative confidence
for each item. We did this to further lessen the
"guessing factor" and identify whether confidence
would play a measure.
Following the nine cumulative trials for all three
displays, participants completed a Likert scale survey
rating all the displays for facilitating the recall of information
, being an effective presenter of information,
and visual appeal. In addition, participants rank-ordered
each display for facilitating recall and visual
appeal. Lastly, participants responded to open-ended
questions regarding which display they would employ
at their workstation or on a wall if a dedicated display
would be available.
Results
Table 1 presents the means and standard deviations
across all conditions of the raw number of correct responses
for each of the three trials under each display.
A repeated measures ANOVA identified an overall
effect of the display for accurately recalled items,
F(2,96) = 22.21, MSE = 2.31, p < .0001, and there was
no effect for order. Additionally, pair-wise comparisons
between display types found an advantage of the
InfoCanvas display over the Web Portal (F(1,48) = 14.65, MSE = 2.66, p < .0005),
the Web Portal over the Text-based display (F(1,48) = 8.17, MSE = 1.76, p < .007),
and the InfoCanvas over the Text-based display (F(1,48) = 40.01, MSE = 2.51, p < .0001).
To take into account participants' confidence of
their answers, a second method to evaluate performance
was developed. Weights of value 3, 2, and 1 were assigned
for the high, medium, and low confidence levels,
respectively (e.g. a correct answer with medium confidence
yielded +2 points, while an incorrect answer also
with a medium confidence yielded 2 points). Questions
not answered on the recall task were assigned a
weighted score value of 0.
Participants forgot to assign a confidence on 13 of
the 4410 responses collected in the study. Since this
number of accidental omissions was quite low, items
with omitted confidence ratings were assigned a medium
level, the median of the obtainable point values.
Of the 13 questions with omitted confidence, 3 were
answered incorrectly.
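
A minimal sketch of this scoring scheme, as we read it from the description above (not the authors' analysis code), is:

```python
# Minimal sketch of the weighted scoring scheme as described above
# (our reading of the text, not the authors' analysis code).
CONF_WEIGHT = {"high": 3, "medium": 2, "low": 1}

def weighted_score(responses):
    """responses: list of (answered, correct, confidence) tuples, one per question."""
    score = 0
    for answered, correct, confidence in responses:
        if not answered:                         # unanswered questions score 0
            continue
        w = CONF_WEIGHT[confidence or "medium"]  # omitted confidence -> medium
        score += w if correct else -w
    return score

# Two correct (high, medium), one wrong (medium), one blank: 3 + 2 - 2 + 0 = 3.
print(weighted_score([(True, True, "high"), (True, True, "medium"),
                      (True, False, "medium"), (False, None, None)]))
```
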
In examining the weighted scores shown in Table 2,
an overall effect of the display was found, F(2,96) =
10.40, MSE = 25.35, p < .001, and again there was no
effect of order. Furthermore, pair-wise comparisons
between the displays again found an advantage of the
InfoCanvas display over the Web Portal, F(1,48) =
7.29, MSE = 30.56, p = .0095, and of the InfoCanvas
display over the Text-based display, F(1,48) = 22.21,
MSE = 22.93, p < .0001. However, the weighted scores
gave no advantage of the Web Portal over the Text-based
display, F(1,48) = 2.59, MSE = 2.51, p = 0.11.
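
For readers who wish to run the same style of analysis on their own data, a repeated-measures ANOVA of per-participant scores by display can be set up with standard tools. The snippet below is a generic sketch assuming pandas and statsmodels are available; the column names and toy values are placeholders, not the study's data.

```python
# Generic repeated-measures ANOVA sketch (not the authors' analysis script),
# assuming pandas and statsmodels are available; names and values are placeholders.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "display": ["text", "portal", "canvas"] * 3,
    "score":   [5.0, 5.5, 6.2, 4.8, 5.9, 6.4, 5.3, 5.6, 6.1],
})
result = AnovaRM(df, depvar="score", subject="subject", within=["display"]).fit()
print(result)   # F statistic and p value for the within-subject display factor
```
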
Table 1: Means and standard deviations of correct responses for three trials of each display.
              1st Trial     2nd Trial     3rd Trial
Text-Based    5.14 (1.59)   5.12 (1.33)   5.02 (1.57)
Web Portal    5.67 (1.61)   5.65 (1.54)   5.29 (1.89)
InfoCanvas    6.27 (1.80)   6.22 (1.79)   6.31 (1.76)

Table 2: Means and standard deviations of weighted scores for three trials of each display.
              1st Trial      2nd Trial      3rd Trial
Text-Based    11.47 (4.92)   11.78 (4.81)   10.57 (5.02)
Web Portal    12.88 (5.09)   12.35 (5.84)   11.27 (6.40)
InfoCanvas    13.88 (5.96)   14.02 (5.89)   13.82 (6.63)

Table 3: Likert scale responses for display characteristics, with 1 = low rating and 5 = high rating.
Ease of Info. Recall    1    2    3    4    5    Mean
Text-Based              7   18   14   10    0   2.6 (1.0)
Web Portal              1    8   18   17    5   3.3 (0.9)
InfoCanvas              2    4   13   20   10   3.7 (1.0)
Effective Data Pres.    1    2    3    4    5    Mean
Text-Based              6   18   16    7    2   2.6 (1.0)
Web Portal              2    3   14   24    6   3.6 (0.9)
InfoCanvas              5    9   13   18    4   3.1 (1.1)
Visual Appeal           1    2    3    4    5    Mean
Text-Based             20   19    8    2    0   1.8 (0.9)
Web Portal              1    2   12   22   12   3.9 (0.9)
InfoCanvas              1    1   10   17   20   4.1 (0.9)

Figure 3 presents an item-by-item breakdown of the
percentage of correctly answered questions for each
display. The InfoCanvas had the highest average on
seven of the ten items. The Web Portal score was
higher on the time and baseball items, and the Text display
was best for the airfare price.
Table 3 contains a breakdown of participants' Likert
ratings captured during the post-experiment surveys.
These results mirror the performance data, with the InfoCanvas
generally being rated higher, with the exception
that participants ranked the Web Portal
higher as a more effective presenter of data.
Participants' order rankings of the three displays for
facilitation of recall and personal preference are shown
in Table 4. Here, the Text-based display fared poorly
along both dimensions. More participants preferred the
Web Portal but rated the InfoCanvas as best for recall.
Discussion
Participants in the study recalled information best using
the InfoCanvas display despite having the greater cognitive
load of remembering mappings and representations
used in the art paradigm. This cognitive load also
includes translating pictorial InfoCanvas objects to the
values used in the recall questions, while the two other
displays presented data values more closely to the format
of the questions. Even with these disadvantages,
the InfoCanvas conveyed information better and was
more vividly recalled.
Another possible interpretation is that the InfoCanvas
system actually reduces the cognitive load of the
individuals. In this scenario, it follows that it is easier
and cognitively more efficient to remember and recall
the InfoCanvas images, and then translate later to the
values desired.
Regardless of their cognitive interpretation, the
study's results should not be too surprising. People are
able to process images rapidly by leveraging the sophisticated
, parallel, and high-bandwidth nature of the perceptual
system [20]. Umanath and Scamell showed that
graphics are conducive towards recall tasks involving
simple fact retrieval in a series of studies investigating
the role of recall in real-time decision-making [18].
Furthermore, "ecological" layouts with objects in natural
positions have been shown to facilitate faster browsing
[2]. This study, however, confirms our intuition
that the InfoCanvas, and displays like it, has potential to
be an effective peripheral display where people seek to
obtain information at a glance.
Several interesting observations emerged from the
results of this study. We noted that participants generally
expressed preference for the Web Portal display
over the InfoCanvas display even though they felt that
the InfoCanvas display had best facilitated the recall of
information. When asked about this preference, one
participant remarked that the Web Portal design was
"more professional looking" and "more common than
the other two." Other participants praised the Web Portal
for its ability to display information in a more "logi-cal
and precise" manner and providing "accurate information
that is not influenced by my interpretation."
These comments seem to imply a conservative attitude
about adopting a new and unconventional technology
such as an ambient display.
Other participants appeared to capture the essence
of peripheral/ambient displays and their abilities to be
subtle communication channels, not distracting a user.
One participant remarked that, "I think I could choose
to ignore it [InfoCanvas] while I was working. I think
once I got used to what all the icons meant and what the
scales were, I could easily look at it to see the information
I was interested in." Others also echoed this sentiment
: "[InfoCanvas] is the quickest and easiest to see
at a glance the information you want" and "[InfoCanvas
] is informative but also relaxing." Finally, one participant
summarized the benefits of the InfoCanvas as
being "able to keep working and not get distracted by
details; [InfoCanvas is] faster to see and interpret from
a distance."
In the context of this study, the InfoCanvas was
evaluated on its abilities as an information purveyor.
The mappings between information and graphical elements
used in this study were designed by the authors,
and as such, did not always feel instinctive to participants
. Some participants indicated they had difficulty
in learning the mappings; one participant remarked that
"I struggled with the visual mappings" and another felt
that InfoCanvas was "counterintuitive." As was mentioned
earlier, this was a concern in the design of the
0%
20%
40%
60%
80%
100%
Web Site Update
Weather Forecast
Traffic Conditions
Time of Day
Stock Updates
News Headline
New Email
Forecasted Temp
Baseball Score
Airfare
Text
Web
InfoCanvas
Figure 3. Mean percentage for correctly recalled
items for each display type
124
study--would individuals even be able to learn these
mappings in such a short period of time? Pilot studies
and the final study data both indicated that despite not
being able to define their own mappings for the information
, participants were able to recall more information
when presented on the InfoCanvas.
A crucial implication lies in this; the InfoCanvas is
designed to be a highly personalized peripheral display
where users specify their own mappings and layouts.
Since participants were able to recall information quite
well when they did not specify the mappings, it seems
logical to conclude that comprehension and recall
would benefit even more when people design their own
display and it is constantly present in their environment.
Several interesting discussion points arise from the
breakdown of correctly recalled items shown in Figure
3. First, note that on the whole, the InfoCanvas yielded
the largest percentage of correctly recalled items per
category, with the exception of the airfare, time of day,
and baseball score items. However, performance of the
three displays on the baseball score item was comparable
, averaging a recall rate of 64-70%. In regards to the
airfare and time of day items, the InfoCanvas produced
the second best percentage of correctly recalled items
and was outperformed by the Text and Web Portal displays
, respectively. Slightly lower performance was
somewhat expected with these two items, since their
representations moved along a straight line to indicate a
point on a scale. Pilot participants often remarked that
these representations were more difficult to keep track
of since they could be found in different areas. Interestingly
, even with these representations, the InfoCanvas
performed better than the Web Portal (for the airfare
item) and the Text display (for the time item), indicating
that despite their moving nature, graphical representations
still worked relatively well. The temperature
element, also represented by a moving object, illustrates
this point as well, generating a higher recall than the
other displays.
Interestingly, the InfoCanvas appeared to have the
largest advantage over the other two displays with the
traffic conditions item. While some may argue that this
is due to the use of intervals to represent conditions, as
opposed to the exact-value representations on the Web
Portal and Text displays, note that the use of intervals
for the baseball score did not yield such an effect. This
difference implies that the representation used to indicate
traffic conditions--the color of the woman's bathing
suit--provided an excellent mapping. Therefore,
we speculate that if individuals create their own mappings
, leveraging their personal experiences, recall with
InfoCanvas will benefit even more.
This study examined the information conveyance
abilities of three specific examples of displays involving
a sample population consisting mainly of academically affiliated,
relatively young individuals. Generalizing its
findings too much would be unwise. Nevertheless, we
speculate that the results would extend to other similar
types of displays and people of different demographics.
The lessons learned from this study could be applied
to the design of new information systems. For example
, in designing a system using a docked PDA as an
information display, a graphical representation of information
, such as using a miniature InfoCanvas, might
convey information more effectively than a traditional
text-based manner.
Conclusion and Future Work
In this paper, we present a formal evaluation of information
recall from three different electronic information
displays, the InfoCanvas, a Web Portal-like, and a
Text-based display. We present results indicating that
participants comprehended and recalled more awareness
information when it was represented in graphical
manners; participants recalled more information from
the InfoCanvas display than the Web Portal and Text-based
displays. Likewise, participants recalled more
information from the Web Portal display than the Text-based
display. Our results suggest that there are benefits
for comprehension, when a person may only glance
at a display for a short period of time, by displaying
information in a highly graphical or stylized nature.
A number of potential directions for follow-on work
exist. It would be interesting to compare a more abstract
graphical presentation of information as embodied
by the InfoCanvas with a purely graphical, but more
direct iconic encoding, such as in Sideshow [3].
In this study, we positioned the information displays
directly in front of participants. Another possible experiment
could position the display further away, perhaps
on a neighboring wall, from the person's main
computer display. Yet another possibility is to introduce
an explicit primary task thus making information
comprehension more truly peripheral. For instance,
participants could perform a primary task such as
document editing while information is presented for
comprehension and recall on a display in another location
as done in several other studies [9,12].
Table 4: Rankings of displays for facilitating recall and personal preference.
                          Text-based   Web Portal   InfoCanvas
Best Recall Facilitator   2 (4%)       16 (33%)     31 (63%)
Worst Recall Facilitator  41 (84%)     5 (10%)      3 (6%)
Most Preferred            2 (4%)       35 (71%)     12 (25%)
Least Preferred           35 (71%)     2 (4%)       12 (25%)
Acknowledgements. This research has been supported in part by a grant from the National Science Foundation, IIS-0118685, and by the first author's NDSEG Graduate Fellowship. The authors would like to express gratitude to Richard Catrambone and Mary Czerwinski for providing valuable insights into the development and analysis of this study.
| evaluation;peripheral display;graphical representation;awareness information;ambient display;text-based display;information conveyance;InfoCanvas display;Peripheral display;information recall;empirical evaluation;information visualization;Web portal-like display |
12 | A Geometric Constraint Library for 3D Graphical Applications | Recent computer technologies have enabled fast high-quality 3D graphics on personal computers, and also have made the development of 3D graphical applications easier. However , most of such technologies do not sufficiently support layout and behavior aspects of 3D graphics. Geometric constraints are, in general, a powerful tool for specifying layouts and behaviors of graphical objects, and have been applied to 2D graphical user interfaces and specialized 3D graphics packages. In this paper, we present Chorus3D, a geometric constraint library for 3D graphical applications. It enables programmers to use geometric constraints for various purposes such as geometric layout, constrained dragging, and inverse kinematics. Its novel feature is to handle scene graphs by processing coordinate transformations in geometric constraint satisfaction. We demonstrate the usefulness of Chorus3D by presenting sample constraint-based 3D graphical applications. | INTRODUCTION
Recent advances in commodity hardware have enabled fast
high-quality 3D graphics on personal computers. Also, software
technologies such as VRML and Java 3D have made the
development of 3D graphical applications easier. However,
most of such technologies mainly focus on rendering aspects
of 3D graphics, and do not sufficiently support layout and
behavior aspects.
Constraints are, in general, a powerful tool for specifying
layouts and behaviors of graphical objects.
It is widely
recognized that constraints facilitate describing geometric
layouts and behaviors of diagrams in 2D graphical user interfaces
such as drawing editors, and therefore constraint
solvers for this purpose have been extensively studied [3, 7,
8, 9, 11, 12, 13, 17, 18]. Also, many specialized 3D graphics
packages enable the specification of object layouts and
behaviors by using constraints or similar functions.
It is natural to consider that various 3D graphical applications
can also be enhanced by incorporating constraints. It
might seem sufficient for this purpose to modify existing 2D
geometric constraint solvers to support 3D geometry. It is,
however, insufficient in reality because of the essential difference
between the ways of specifying 2D and 3D graphics;
typical 2D graphics handles only simple coordinate systems,
whereas most 3D graphics requires multiple coordinate systems
with complex relations such as rotations to treat scene
graphs. It means that we need to additionally support coordinate
transformations in 3D geometric constraint solvers.
In this paper, we present Chorus3D, a geometric constraint
library for 3D graphical applications. The novel feature of
Chorus3D is to handle scene graphs by processing coordinate
transformations in geometric constraint satisfaction.
We have realized Chorus3D by adding this feature to our
previous 2D geometric constraint library Chorus [13].
Another important point of Chorus3D is that it inherits from
Chorus the capability to handle "soft" constraints with hierarchical
strengths or preferences (i.e., constraint hierarchies
[7]), which are useful for specifying default layouts and behaviors
of graphical objects. It determines solutions so that
they satisfy as many strong constraints as possible, leaving
weaker inconsistent constraints unsatisfied.
Chorus3D also inherits from Chorus a module mechanism
which allows user-defined kinds of geometric constraints.
This feature enables programmers to use geometric constraints
for various purposes including the following:
Geometric layout: A typical use of Chorus3D is to lay
out graphical objects. For example, it allows putting
objects parallel or perpendicular to others without requiring
predetermined positioning parameters. Also, it
provides constraint-based general graph layout based
on the spring model [14].
Constrained dragging: Chorus3D enables dragging objects
with positioning constraints.
For example, it
can constrain a dragged object to be on the surface
of a sphere. Constrained dragging is important for 3D
graphics because it provides a sophisticated way to accommodate
ordinary mouse dragging to 3D spaces.
Inverse kinematics: Chorus3D is applicable to inverse
kinematics, which is a problem of finding desired configurations
of "articulated" objects [1, 20]. It allows
the specification of articulated objects by using coordinate
transformations, and can automatically calculate
the parameters of the transformations that satisfy
constraints. This method is also applicable to camera
control by aiming at a possibly moving target object.
In this paper, we demonstrate the usefulness of Chorus3D
by presenting sample constraint-based 3D graphical applications.
This paper is organized as follows: We first present our approach
to the use of constraints for 3D graphics. Second,
we describe our basic framework of constraints. Next, we
present a method for processing coordinate transformations
in our framework. We then provide the implementation of
Chorus3D, and demonstrate examples of using constraints
in 3D graphics. After giving related work and discussion, we
mention the conclusions and future work of this research.
OUR APPROACH
In this research, we integrate geometric constraints with 3D
graphics. Basically, we realize this by extending our previous
2D geometric constraint solver Chorus [13] to support
3D geometry. However, as already mentioned, it is not a
straightforward task because 3D graphics typically requires
handling scene graphs with hierarchical structures of coordinate
systems, which is not covered by the 2D version of
the Chorus constraint solver.
To support hierarchies of coordinate systems, we introduce
the following new model of constraints:
Point variables: Each point variable (which consists of
three real-valued constrainable variables) is associated
with one coordinate system, and its value is expressed
as local coordinates.
Geometric constraints: Geometric constraints on point
variables are evaluated by using the world coordinates
of the point variables (they can also refer to 1D variables
for, e.g., distances and angles by using their values
directly). A single constraint can refer to point
variables belonging to different coordinate systems.
Coordinate transformations: Parameters of coordinate
transformations are provided as constrainable variables
, and the solver is allowed to change the parameters
of transformations to appropriately satisfy given
constraints.
With this model, we can gain the benefit of the easy maintenance
of geometric relations by using constraints, as well as
the convenience of modeling geometric objects by employing
scene graphs.
In our actual implementation, we provide the following three
elemental kinds of coordinate transformations:
Translation: A translation transformation is characterized
with three variables t_x, t_y, and t_z, and specifies the
translation of vector (t_x, t_y, t_z).
Rotation: A rotation transformation is parameterized with
four variables r_x, r_y, r_z, and r_w, and specifies the rotation
of angle r_w about the axis (r_x, r_y, r_z).
Scale: A scale transformation is represented with three
variables s_x, s_y, and s_z, and specifies the axis-wise
scale (s_x, s_y, s_z) about the origin.
We can express many practically useful transformations by
using such elemental ones. In fact, any transformations represented
with Transform nodes in VRML can be realized by
combining these kinds of transformations [4].
CONSTRAINT FRAMEWORK
In this section, we briefly describe our framework for handling
constraints. We base it on the framework for the 2D
version of the Chorus constraint solver. See [13] for further
detail.
3.1 Problem Formulation
We first present the mathematical formulation for modeling
constraints and constraint systems.
In the following, we
write x to represent a variable vector (x_1, x_2, . . . , x_n) of
n variables, and also v to indicate a variable value vector
(v_1, v_2, . . . , v_n) of n real numbers (v_i expresses the value of x_i).
To support various geometric constraints in a uniform manner
, we adopt error functions as a means of expressing constraints
. An error function e(x) is typically associated with
a single arithmetic constraint, and is defined as a function
from variable value vectors to errors expressed as non-negative
real numbers; that is, e(v) gives the error of the
associated constraint for v. An error function returns a zero
if and only if the constraint is exactly satisfied. For example,
e(x) = (x_i - x_j)^2 can be used for the constraint x_i = x_j.
We assume that, for each e(x), its gradient is known:

∇e(x) = (∂e(x)/∂x_1, ∂e(x)/∂x_2, . . . , ∂e(x)/∂x_n).
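As a concrete illustration of this formulation, the following minimal Java sketch (ours; the class and method names are not the Chorus3D API) packages the error function e(x) = (x_i - x_j)^2 for the constraint x_i = x_j together with its analytic gradient, which is exactly the information the solver needs from a constraint.

// Minimal sketch of an error function with an analytic gradient,
// corresponding to e(x) = (x_i - x_j)^2 for the constraint x_i = x_j.
// Illustrative only; not the Chorus3D class interface.
public class EqualityErrorFunction {
    private final int i, j;   // indices of the two constrained variables

    public EqualityErrorFunction(int i, int j) { this.i = i; this.j = j; }

    // e(v): zero if and only if the constraint is exactly satisfied
    public double error(double[] v) {
        double d = v[i] - v[j];
        return d * d;
    }

    // gradient of e at v: only the i-th and j-th components are non-zero
    public double[] gradient(double[] v) {
        double[] g = new double[v.length];
        double d = v[i] - v[j];
        g[i] = 2 * d;
        g[j] = -2 * d;
        return g;
    }

    public static void main(String[] args) {
        EqualityErrorFunction e = new EqualityErrorFunction(0, 1);
        double[] v = { 3.0, 5.0 };
        System.out.println(e.error(v));                                // 4.0
        System.out.println(java.util.Arrays.toString(e.gradient(v)));  // [-4.0, 4.0]
    }
}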
In the same way as constraint hierarchies [7], constraint systems
in our framework can be divided into levels consisting
of constraints with equal strengths. Constraints with the
strongest preference are said to be required (or hard), and
are guaranteed to be always satisfied (if it is impossible,
there will be no solution). By contrast, constraints with
weaker preferences are said to be preferential (or soft), and
may be relaxed if they conflict with stronger constraints.
Solutions to constraint systems are defined as follows: let
e_{i,j}(x) be the error function of the j-th constraint (1 ≤ j ≤ m_i)
at strength level i (0 ≤ i ≤ l); then solutions v are
determined with the optimization problem

minimize_v E(v)   subject to   e_{0,j}(v) = 0   (1 ≤ j ≤ m_0)

where E is an objective function defined as

E(x) = Σ_{i=1}^{l} Σ_{j=1}^{m_i} w_i e_{i,j}(x)

in which w_i indicates the weight associated with strength i,
and the relation w_1 > w_2 > . . . > w_l holds. In this formulation,
level 0 corresponds to required constraints, and the
others to preferential ones. Intuitively, more weighted (or
stronger) preferential constraints should be more satisfied.
Our framework simulates constraint hierarchies. Particularly
, if the squares of constraint violations are used to compute
error functions, a system in our framework will obtain
approximate solutions to the similar hierarchy solved with
the criterion least-squares-better [3, 17]. The largest difference
is that a system in our framework slightly considers a
weak constraint inconsistent with a stronger satisfiable one
in computing its solutions, while the similar hierarchy would
discard such a weak one.
Our actual implementation of the Chorus3D constraint
solver provides four external strengths required, strong,
medium, and weak as well as two internal strengths very
strong (used to approximately handle required nonlinear
or inequality constraints) and very weak (exploited to make
new solutions as close to previous ones as possible). It typically
assigns weights 32^4, 32^3, 32^2, 32^1, and 1 to strengths
very strong, strong, medium, weak, and very weak, respectively.
These weights were determined according to the precision
of the actual numerical algorithm (described in the
next subsection). To know how much these weights affect
solutions, suppose a system of strong constraint x = 0 and
medium one x = 100. Then the unique solution will be obtained
as x = 3.0303 (= 100/33). Thus the difference of
strengths is obvious. According to our actual experience,
this precision allows us to discriminate constraint strengths
in most graphical applications.
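As a quick check of the quoted figure (our calculation, taking the squared violation as each constraint's error function), the stated solution follows directly from minimizing the weighted objective:

E(x) = 32^3\,x^2 + 32^2\,(x-100)^2, \qquad
\frac{dE}{dx} = 2\cdot 32^3\,x + 2\cdot 32^2\,(x-100) = 0
\;\Rightarrow\;
x = \frac{32^2\cdot 100}{32^3 + 32^2} = \frac{100}{33} \approx 3.0303.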
3.2 Algorithm
To actually find solutions to constraint systems presented
above, we need to solve their corresponding optimization
problems. For this purpose, we designed a constraint satisfaction
algorithm by combining a numerical optimization
technique with a genetic algorithm. It uses numerical optimization
to find local solutions, while it adopts a genetic
algorithm to search for global solutions.
For numerical optimization, we mainly use the quasi-Newton
method based on the Broyden-Fletcher-Goldfarb-Shanno updating
formula [2, 6], which is a fast iterative technique that
exhibits superlinear convergence.
Since it excludes fruitless
searches by utilizing its history, it is usually faster than
straightforward Newton's method.
We introduced a genetic algorithm to alleviate the problem
that some kinds of geometric constraints suffer from local optimal
but global non-optimal solutions [11, 16]. Generally,
a genetic algorithm is a stochastic search method that repeatedly
transforms a population of potential solutions into
another next-generation population [10, 15]. We typically
need it only for computing initial solutions; in other
words, we can usually re-solve modified constraint systems
without the genetic algorithm, only by applying numerical
optimization to previous solutions.
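To make the role of the numerical stage concrete, the toy Java sketch below (ours; Chorus3D itself uses the quasi-Newton method rather than plain gradient descent) re-solves the strong x = 0 / medium x = 100 system of the previous subsection by descending the weighted objective from a previous solution, which is the warm-start pattern described above.

// Toy warm-started numerical optimization for the two-constraint example.
// Illustrative only; the actual solver uses quasi-Newton updates.
public class WarmStartDescent {
    static final double W_STRONG = 32 * 32 * 32;  // 32^3
    static final double W_MEDIUM = 32 * 32;       // 32^2

    // derivative of E(x) = W_STRONG * x^2 + W_MEDIUM * (x - 100)^2
    static double gradient(double x) {
        return 2 * W_STRONG * x + 2 * W_MEDIUM * (x - 100);
    }

    static double minimize(double start) {
        double x = start;
        double step = 1e-5;                       // small enough for this curvature
        for (int iter = 0; iter < 1000; iter++) {
            x -= step * gradient(x);
        }
        return x;
    }

    public static void main(String[] args) {
        // re-solve from the previous solution x = 0, as if the medium
        // constraint x = 100 had just been added to the system
        System.out.printf("%.4f%n", minimize(0.0));   // about 3.0303 = 100/33
    }
}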
PROCESSING COORDINATE TRANSFORMATIONS
In this section, we propose a method for integrating coordinate
transformations with our constraint framework.
As already mentioned, we use world coordinates of points
to evaluate 3D geometric constraints. A naive method for
this is to duplicate point variables in all ancestor coordinate
systems, and then to impose required constraints that represent
coordinate transformations between the point variables
. However, this method requires an optimization routine
supporting required nonlinear constraints, which limits
the availability of actual techniques (in fact, we cannot
use the quasi-Newton method for this purpose). Also, this
method tends to yield many variables and constraints, and
therefore requires an extra amount of memory.
Below we propose a more widely applicable method for handling
coordinate transformations.
Its characteristic is to
hide transformations from optimization routines, which is
realized by embedding transformations in error functions.
4.1 Model
To begin with, we introduce another variable vector x′ =
(x′_1, x′_2, . . . , x′_n), which is created by replacing variables for
local coordinates of 3D points in x with the corresponding
ones for world coordinates (1D variables remain the same).
We can mathematically model this process as follows: Consider
the sequence of the s transformations

y_0 (= x) --t_0--> y_1 --t_1--> . . . --t_{s-2}--> y_{s-1} --t_{s-1}--> y_s (= x′)

where y_0 and y_s are equal to x and x′ respectively, each
y_k (1 ≤ k ≤ s - 1) is an "intermediate" vector, and each t_k
(0 ≤ k ≤ s - 1) is a function that transforms y_k into y_{k+1}.
Intuitively, t_k corresponds to a coordinate transformation,
and transforms related point variables from its source coordinate
system into its destination system. It should be
noted that, although transformations are, in general, hierarchical
(or tree-structured), we can always find such a linear
sequence by "serializing" them in an appropriate order.
By using such transformations, we can compute x′ as follows:

x′ = t_{s-1}(t_{s-2}(. . . (t_1(t_0(x))) . . .)) ≡ t(x)

where t is defined as the composition of all the elemental
transformations. In the following description, we write y_{k,i}
to denote the i-th element of y_k, and also t_{k,i} to represent
the i-th element of t_k; that is,

y_{k+1} = (y_{k+1,1}, y_{k+1,2}, . . . , y_{k+1,n}) = (t_{k,1}(y_k), t_{k,2}(y_k), . . . , t_{k,n}(y_k)) = t_k(y_k).
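For concreteness, the following self-contained Java sketch (ours, independent of the Chorus3D classes) evaluates x′ = t(x) for one point under two elemental transformations of the kinds listed earlier, a rotation about the z-axis followed by a translation; in the solver, the angle and the offsets would be constrainable variables rather than constants.

// Composing two elemental transformations to obtain world coordinates.
// Illustrative sketch only.
public class WorldCoordinates {
    // rotation of angle a (radians) about the axis (0, 0, 1)
    static double[] rotateZ(double[] p, double a) {
        return new double[] {
            Math.cos(a) * p[0] - Math.sin(a) * p[1],
            Math.sin(a) * p[0] + Math.cos(a) * p[1],
            p[2]
        };
    }

    // translation by the vector (tx, ty, tz)
    static double[] translate(double[] p, double tx, double ty, double tz) {
        return new double[] { p[0] + tx, p[1] + ty, p[2] + tz };
    }

    public static void main(String[] args) {
        double[] local = { 1.0, 0.0, 0.0 };
        // x' = t_1(t_0(x)): rotate 90 degrees about z, then translate by (0, 2, 0)
        double[] world = translate(rotateZ(local, Math.PI / 2), 0, 2, 0);
        // prints approximately (0.00, 3.00, 0.00)
        System.out.printf("(%.2f, %.2f, %.2f)%n", world[0], world[1], world[2]);
    }
}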
4.2 Method
Geometric constraints are evaluated by using world coordinates
of points, which means that their error functions are
defined as e(x′). Using the composed transformations, we
can evaluate them as

e(x′) = e(t(x)).

Importantly, we can efficiently realize this computation by
applying only necessary transformations to actually used
variables.
We also need to compute the gradient of e(t(x)), i.e.,

∇e(t(x)) = (∂e(t(x))/∂x_1, ∂e(t(x))/∂x_2, . . . , ∂e(t(x))/∂x_n).
Basically, we can decompose each partial derivative
∂e(t(x))/∂x_i into primitive expressions by repeatedly using
the chain rule. However, we should avoid the simple
application of the chain rule since it would result in a large
number of expressions.
Instead, we perform a controlled way of decomposing such
partial derivatives; it appropriately arranges the chain rule
to restrict the computation to only necessary components.
First, we decompose ∂e(t(x))/∂x_i as follows:

∂e(t(x))/∂x_i
  = Σ_j (∂e(x′)/∂x′_j) (∂t_{s-1,j}(y_{s-1})/∂x_i)
  = Σ_j (∂e(x′)/∂x′_j) Σ_{j_{s-1}} (∂t_{s-1,j}(y_{s-1})/∂y_{s-1,j_{s-1}}) (∂t_{s-2,j_{s-1}}(y_{s-2})/∂x_i)
  = Σ_{j_{s-1}} [ Σ_j (∂e(x′)/∂x′_j) (∂t_{s-1,j}(y_{s-1})/∂y_{s-1,j_{s-1}}) ] (∂t_{s-2,j_{s-1}}(y_{s-2})/∂x_i)
  = Σ_{j_{s-1}} (∂e(x′)/∂y_{s-1,j_{s-1}}) (∂t_{s-2,j_{s-1}}(y_{s-2})/∂x_i).
Note that each ∂e(x′)/∂x′_j is given by the definition
of the geometric constraint, and also that each
∂t_{s-1,j}(y_{s-1})/∂y_{s-1,j_{s-1}} is a partial derivative in the gradient
of a single coordinate transformation t_{s-1}. Thus we
can obtain each ∂e(x′)/∂y_{s-1,j_{s-1}}. Also, by repeating this
process, we can compute, for each k,

∂e(t(x))/∂x_i = Σ_{j_k} (∂e(x′)/∂y_{k,j_k}) (∂t_{k-1,j_k}(y_{k-1})/∂x_i)

and finally achieve

∂e(t(x))/∂x_i = Σ_{j_1} (∂e(x′)/∂y_{1,j_1}) (∂t_{0,j_1}(x)/∂x_i)

where each ∂t_{0,j_1}(x)/∂x_i is a component of the gradient of
t_0. Therefore, ∂e(t(x))/∂x_i is now determined.
Furthermore, we can considerably reduce the number of the
computations of ∂e(x′)/∂y_{k,j_k} in practice. We can make the
following observations about the above computation:

For each variable x′_j, ∂e(x′)/∂x′_j can be non-zero only
if x′_j is actually needed to evaluate the designated constraint.

If x_i is originated in the coordinate system associated
with t_k (that is, x_i is either a local coordinate or a
parameter of the coordinate transformation), we have
y_{k,i} = x_i, which means that we have ∂t_{k,j}(y_k)/∂x_i directly.
Therefore, we can compute ∂e(x′)/∂x_i immediately.

These observations reveal that we need to transfer a partial
derivative ∂e(x′)/∂y_{k,j} to the next step only when x′_j represents
a really necessary coordinate that has not reached
its local coordinate system. Also, since we can handle each
necessary point independently, we can implement this process
with a linear recursive function that hands over only
three derivatives ∂e(x′)/∂y_{k,j} at each recursive call.
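As a small worked instance of this backward accumulation (our example, not taken from the paper): take s = 2, let t_0 be a translation by (t_x, t_y, t_z), let t_1 be a scale by (s_x, s_y, s_z), and let the constraint error be e(x′) = (x′_1 - c)^2 on the first world coordinate. In LaTeX notation:

x'_1 = s_x\,(x_1 + t_x), \qquad
\frac{\partial e}{\partial y_{1,1}}
  = \frac{\partial e}{\partial x'_1}\,\frac{\partial t_{1,1}(y_1)}{\partial y_{1,1}}
  = 2\,(x'_1 - c)\,s_x, \qquad
\frac{\partial e}{\partial x_1}
  = \frac{\partial e}{\partial y_{1,1}}\,\frac{\partial t_{0,1}(x)}{\partial x_1}
  = 2\,s_x\,\bigl(s_x(x_1 + t_x) - c\bigr),

which agrees with differentiating e directly, and only the single necessary derivative is handed down from the level of t_1 to the level of t_0.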
IMPLEMENTATION
We implemented the proposed method by developing a constraint
solver called Chorus3D, which is a 3D extension to
our previous 2D geometric constraint solver Chorus [13]. We
constructed Chorus3D as a C++ class library, and also developed
a native method interface to make it available to
Java programs.
Chorus3D allows programmers to add a new kind of arithmetic
constraints (e.g., Euclidean geometric constraints) by
constructing a new constraint class with a method that evaluates
their error functions. Also, programmers can introduce
a new kind of non-arithmetic (or pseudo) constraints
(for, e.g., general graph layout) by developing a new evaluation
module which computes an "aggregate" error function
for a given set of constraints.
Chorus3D currently provides linear equality, linear inequality
, edit (update a variable value), stay (fix a variable value),
Euclidean geometric constraints (for, e.g., parallelism, perpendicularity,
and distance equality), and graph layout constraints
based on the spring model [14]. Linear equality/
inequality constraints can refer to only 1D variables (including
elements of 3D point variables), while edit and stay constraints
can be associated with 1D and 3D point variables.
Euclidean geometric constraints typically refer to point variables
although they sometimes require 1D variables for angles
and distances. Each graph layout constraint represents
a graph edge, and refers to two point variables as its associated
graph nodes. As stated earlier, constraints on such
point variables are evaluated by using world coordinates of
the points. Also, a single constraint can refer to point variables
belonging to different coordinate systems.
The application programming interface of Chorus3D is a
natural extension to that of Chorus, which provides a certain
compatibility with a recent linear solver called Cassowary
[3]; in a similar way to Cassowary and Chorus, Chorus3D
allows programmers to process constraint systems by creating
variables and constraints as objects, and by adding/
removing constraint objects to/from the solver object. In
addition, Chorus3D handles coordinate transformations as
objects, and presents an interface for arranging them hierarchically.
EXAMPLES
In this section, we present three examples to demonstrate
how to incorporate geometric constraints into 3D graphics
by using the Chorus3D constraint solver. All the examples
are implemented in Java by using Java 3D as a graphics
programming interface as well as the native method interface
with Chorus3D. We also provide computation times taken
for constraint satisfaction in these examples.
Figure 1: A 3D geometric layout of a general graph structure.
6.1 Graph Layout
The first example is an application which lays out a set
of points with a general graph structure in a 3D space as
shown in Figure 1. This application also allows a user to
drag graph nodes with a mouse.[1]
The used graph layout
technique is based on a 3D extension to the spring model
[14]. This kind of 3D graph layout is practically useful to
information visualization, and has actually been adopted in
a certain system [19].
The constraint system of this graph layout consists of 26
point variables (i.e., 78 real-valued variables), 31 graph layout
constraints, and three linear equality constraints for fixing
one of the point variables at the origin. When executed
on an 866 MHz Pentium III processor running Linux 2.2.16,
Chorus3D obtained an initial solution in 456 milliseconds. It
performed constraint satisfaction typically within 250 milliseconds
to reflect the user's dragging a graph node.
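As an illustration of the kind of error function such a graph layout constraint contributes, the Java sketch below (ours; the actual graph layout module aggregates its constraints internally) penalizes the deviation of one edge's length from an ideal spring length, in the spirit of the spring model [14].

// Error of a single graph edge: e = (||p - q|| - idealLength)^2.
// Illustrative only; not the Chorus3D graph layout module.
public class SpringEdgeError {
    static double error(double[] p, double[] q, double idealLength) {
        double dx = p[0] - q[0], dy = p[1] - q[1], dz = p[2] - q[2];
        double len = Math.sqrt(dx * dx + dy * dy + dz * dz);
        double d = len - idealLength;
        return d * d;
    }

    public static void main(String[] args) {
        double[] p = { 0, 0, 0 };
        double[] q = { 3, 4, 0 };
        System.out.println(error(p, q, 5.0));  // 0.0: the edge already has the ideal length
        System.out.println(error(p, q, 2.0));  // 9.0: length 5 deviates by 3 from ideal 2
    }
}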
6.2 Constrained Dragging
The second example is an application which allows a user
to drag an object constrained to be on another spherical
object. Figure 2 depicts this application, where the smaller
solid spherical object is constrained to be on the surface of
the larger wireframe one. The application declares a strong
Euclidean geometric constraint which specifies a constant
distance between the centers of these objects. When the
user tries to drag the smaller object with a mouse, the application
imposes another medium Euclidean constraint which
collinearly locates the viewpoint, the 3D position of the
mouse cursor (which is considered to be on the screen), and
[1] Unlike constrained dragging in the next example, this mouse operation is simply implemented with Java 3D's PickMouseBehavior classes.
Figure 2: Dragging an object constrained to be on
a sphere.
Figure 3: Implementation of constrained dragging (the viewpoint, the mouse cursor on the screen, and the object on the sphere surface are related by a collinearity constraint and a distance constraint).
the center of the dragged object as shown in Figure 3. This
collinearity constraint reflects the motion of the mouse in
the position of the dragged object. Since the collinearity
constraint is weaker than the first Euclidean constraint, the
user cannot drag the smaller object to the outside of the
larger sphere.
The application initially declares one Euclidean geometric
constraint on two point variables, and solved it in 1 millisecond
on the same computer as the first example. When
the user tries to drag the smaller object, it adds another
Euclidean constraint as well as two edit constraints for the
viewpoint and mouse position. The solver maintained this
constraint system usually within 2 milliseconds.
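Geometrically, the strong distance constraint and the medium collinearity constraint together select (approximately) the point of the sphere that lies on, or closest to, the viewing ray through the mouse cursor. The self-contained Java sketch below (ours; it is a geometric illustration of the resulting behavior, not how the solver computes its least-squares compromise) computes that point directly.

// Point of a sphere (center c, radius r) lying on the ray from viewpoint v
// through mouse point m, or, if the ray misses the sphere, the sphere point
// closest to that line. Assumes the ray points toward the sphere.
public class SphereDragTarget {
    static double[] target(double[] v, double[] m, double[] c, double r) {
        double[] d = normalize(sub(m, v));                 // ray direction
        double[] vc = sub(v, c);
        double b = dot(d, vc);
        double disc = b * b - (dot(vc, vc) - r * r);
        if (disc >= 0) {                                   // ray intersects the sphere
            double u = -b - Math.sqrt(disc);               // nearer intersection
            return add(v, scale(d, u));
        }
        double[] p = add(v, scale(d, dot(sub(c, v), d)));  // line point closest to c
        return add(c, scale(normalize(sub(p, c)), r));     // project onto the sphere
    }

    static double[] sub(double[] a, double[] b) { return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] }; }
    static double[] add(double[] a, double[] b) { return new double[] { a[0] + b[0], a[1] + b[1], a[2] + b[2] }; }
    static double[] scale(double[] a, double s) { return new double[] { a[0] * s, a[1] * s, a[2] * s }; }
    static double dot(double[] a, double[] b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
    static double[] normalize(double[] a) { return scale(a, 1.0 / Math.sqrt(dot(a, a))); }

    public static void main(String[] args) {
        double[] v = { 0, 0, 10 };   // viewpoint
        double[] m = { 0, 0, 9 };    // mouse point on the screen
        double[] c = { 0, 0, 0 };    // sphere center
        double[] hit = target(v, m, c, 2.0);
        System.out.printf("(%.2f, %.2f, %.2f)%n", hit[0], hit[1], hit[2]);  // (0.00, 0.00, 2.00)
    }
}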
6.3 Inverse Kinematics
The final example applies inverse kinematics to a virtual
robot arm by using constraints.
Unlike the previous examples
, it takes advantage of coordinate transformations to
express its constraint system.
Figure 4 (a)-(f): A robot arm application which performs inverse kinematics.
As illustrated in Figure 4(a), the robot arm consists of four
parts called a base, a shoulder, an upper arm, and a forearm.
Constraint satisfaction for inverse kinematics is performed
to position its hand (the end of the forearm) at the target
object if possible, or otherwise to make it maximally close
to the target. Figures 4(b)-(f) show the movement of the
robot arm. In Figures 4(b)-(e), its hand is positioned at
the exact location of the target by using appropriate angles
of its joints. By contrast, in Figure 4(f), the hand cannot
reach the target, and therefore the arm is extended toward
the target instead.
Figure 5 describes the constraint program used in the robot
arm application.
After constructing a constraint solver
s, it creates six coordinate transformations shldrTTfm,
shldrRTfm, uarmTTfm, uarmRTfm, farmTTfm, and farmRTfm.
Here the rotation angle parameters of the rotation transformations
shldrRTfm, uarmRTfm, and farmRTfm will actually
work as variables that can be altered by the solver.
Next, it generates a point variable handPos to represent
the position of the hand, and then suggests the target position
to the hand by using a preferential edit constraint
editHandPos. Finally, executing the solver, it obtains the
desired angles shldrAngle, uarmAngle, and farmAngle of
the rotation transformations. These angles will be passed
to the Java 3D library to render the properly configured
robot arm.
This program generates a constraint system which contains
three translation and three rotation transformations, one explicit
point variable as well as six point variables and three
1D variables for coordinate transformations, and one edit
constraint. The solver found an initial solution to this system
in 18 milliseconds, and obtained each new solution for
a frame update typically within 10 milliseconds.
RELATED WORK AND DISCUSSION
There has been work on integrating constraints or similar
functions with 3D graphics languages to facilitate the specification
of graphical objects. For example, we can view the
event routing mechanism in VRML [4] as a limited form of
one-way propagation constraints. Also, there is an attempt
to extend VRML by introducing one-way propagation and
finite-domain combinatorial constraints [5]. However, they
cannot handle more powerful simultaneous nonlinear constraints
such as Euclidean geometric constraints.
Although many constraint solvers have been developed in
// constraint solver
s = new C3Solver();
// translation transformation for the shoulder: fixed to (0, .1, 0)
shldrTTfm = new C3TranslateTransform(new C3Domain3D(0, .1, 0));
s.add(shldrTTfm); // shldrTTfm is parented by the world coordinate system
// rotation transformation for the shoulder: axis fixed to (0, 1, 0); angle ranging over [-10000, 10000]
shldrRTfm = new C3RotateTransform(new C3Domain3D(0, 1, 0), new C3Domain(-10000, 10000));
s.add(shldrRTfm, shldrTTfm); // shldrRTfm is parented by shldrTTfm
// translation transformation for the upper arm: fixed to (0, .1, 0)
uarmTTfm = new C3TranslateTransform(new C3Domain3D(0, .1, 0));
s.add(uarmTTfm, shldrRTfm); // uarmTTfm is parented by shldrRTfm
// rotation transformation for the upper arm: axis fixed to (0, 0, 1); angle ranging over [-1.57, 1.57]
uarmRTfm = new C3RotateTransform(new C3Domain3D(0, 0, 1), new C3Domain(-1.57, 1.57));
s.add(uarmRTfm, uarmTTfm); // uarmRTfm is parented by uarmTTfm
// translation transformation for the forearm: fixed to (0, .5, 0)
farmTTfm = new C3TranslateTransform(new C3Domain3D(0, .5, 0));
s.add(farmTTfm, uarmRTfm); // farmTTfm is parented by uarmRTfm
// rotation transformation for the forearm: axis fixed to (0, 0, 1); angle ranging over [-3.14, 0]
farmRTfm = new C3RotateTransform(new C3Domain3D(0, 0, 1), new C3Domain(-3.14, 0));
s.add(farmRTfm, farmTTfm); // farmRTfm is parented by farmTTfm
// variable for the hand's position, associated with farmRTfm and fixed to (0, .5, 0)
handPos = new C3Variable3D(farmRTfm, new C3Domain3D(0, .5, 0));
// medium-strength edit constraint for the hand's position
editHandPos = new C3EditConstraint(handPos, C3.MEDIUM);
s.add(editHandPos);
// suggest the hand being located at the target's position
editHandPos.set(getTargetWorldCoordinates());
// solve the constraint system
s.solve();
// get solutions
double shldrAngle = shldrRTfm.rotationAngle().value();
double uarmAngle = uarmRTfm.rotationAngle().value();
double farmAngle = farmRTfm.rotationAngle().value();
Figure 5: Constraint program for the robot arm application.
the field of graphical user interfaces [3, 7, 11, 12, 13, 17, 18],
most of them do not provide special treatment for 3D graphics
. In general, the role of nonlinear geometric constraints
is more important in 3D applications than in 2D interfaces.
Most importantly, 3D graphics usually requires rotations of
objects which are rarely used in 2D interfaces. The main
reason is that we often equally treat all "horizontal" directions
in a 3D space even if we may clearly distinguish them
from "vertical" directions. Therefore, nonlinear constraint
solvers are appropriate for 3D applications. In addition, coordinate
transformations should be supported since they are
typically used to handle rotations of objects.
Gleicher proposed the differential approach [8, 9], which supports
3D geometric constraints and coordinate transformations
. In a sense, it shares a motivation with Chorus3D; in
addition to support for 3D graphics, it allows user-defined
kinds of geometric constraints. However, it is based on a different
solution method from Chorus3D; it realizes constraint
satisfaction by running virtual dynamic simulations. This
difference results in a quite different behavior of solutions as
well as an interface for controlling solutions. By contrast,
Chorus3D provides a much more compatible interface with
recent successful solvers such as Cassowary [3].
Much research on inverse kinematics has been conducted in
the fields of computer graphics and robotics [1, 20]. However
, inverse kinematics is typically implemented as specialized
software which only provides limited kinds of geometric
constraints.
Chorus3D has two limitations in its algorithm: one is on the
precision of solutions determined by preferential constraints;
the other is on the speed of the satisfaction of large constraint
systems. These limitations are mainly caused by the
treatment of multi-level preferences of constraints in addition
to required constraints (i.e., constraint hierarchies). Although
many numerical optimization techniques have been
proposed and implemented in the field of mathematical programming
[2, 6], most of them do not handle preferential
constraints. To alleviate the limitations of Chorus3D, we
are pursuing a more sophisticated method for processing
multi-level preferential constraints.
We implemented Chorus3D as a class library which can
be exploited in C++ and Java programs. However, more
high-level authoring tools will also be useful for declarative
approaches to 3D design. One possible direction is to extend
VRML [4] to support geometric constraints. Standard
VRML requires scripts in Java or JavaScript to realize complex
layouts and behaviors. By contrast, constraint-enabled
VRML will cover a wider range of applications without such
additional scripts.
CONCLUSIONS AND FUTURE WORK
In this paper, we presented Chorus3D, a geometric constraint
library for 3D graphical applications. It enables programmers
to use geometric constraints for various purposes
such as geometric layout, constrained dragging, and inverse
kinematics.
Its novel feature is to handle scene graphs
by processing coordinate transformations in geometric constraint
satisfaction.
Our future work includes the development of other kinds of
geometric constraints to further prove the usefulness of our
approach. In particular, we are planning to implement non-overlapping
constraints [13] in Chorus3D so that we can use
it for the collision resolution of graphical objects. Another
future direction is to improve Chorus3D in the scalability
and accuracy of constraint satisfaction.
REFERENCES
[1] Badler, N. I., Phillips, C. B., and Webber, B. L.
Simulating Humans: Computer Graphics, Animation,
and Control. Oxford University Press, Oxford, 1993.
[2] Bertsekas, D. P. Nonlinear Programming, 2nd ed.
Athena Scientific, 1999.
[3] Borning, A., Marriott, K., Stuckey, P., and Xiao, Y.
Solving linear arithmetic constraints for user interface
applications. In Proc. ACM UIST, 1997, 87-96.
[4] Carey, R., Bell, G., and Marrin, C. The Virtual
Reality Modeling Language (VRML97). ISO/IEC
14772-1:1997, The VRML Consortium Inc., 1997.
[5] Diehl, S., and Keller, J. VRML with constraints. In
Proc. Web3D-VRML, ACM, 2000, 81-86.
[6] Fletcher, R. Practical Methods of Optimization,
2nd ed. John Wiley & Sons, 1987.
[7] Freeman-Benson, B. N., Maloney, J., and Borning, A.
An incremental constraint solver. Commun. ACM 33,
1 (1990), 54-63.
[8] Gleicher, M. A graphical toolkit based on differential
constraints. In Proc. ACM UIST, 1993, 109-120.
[9] Gleicher, M. A differential approach to graphical
manipulation (Ph.D. thesis). Tech. Rep.
CMU-CS-94-217, Sch. Comput. Sci. Carnegie Mellon
Univ., 1994.
[10] Herrera, F., Lozano, M., and Verdegay, J. L. Tackling
real-coded genetic algorithms: Operators and tools for
behavioural analysis. Artif. Intell. Rev. 12, 4 (1998),
265-319.
[11] Heydon, A., and Nelson, G. The Juno-2
constraint-based drawing editor. Research Report
131a, Digital Systems Research Center, 1994.
[12] Hosobe, H. A scalable linear constraint solver for user
interface construction. In Principles and Practice of
Constraint Programming--CP2000 , vol. 1894 of
LNCS, Springer, 2000, 218-232.
[13] Hosobe, H. A modular geometric constraint solver for
user interface applications. In Proc. ACM UIST, 2001, 91-100.
[14] Kamada, T., and Kawai, S. An algorithm for drawing
general undirected graphs. Inf. Process. Lett. 31, 1
(1989), 7-15.
[15] Kitano, H., Ed. Genetic Algorithms. Sangyo-Tosho,
1993. In Japanese.
[16] Kramer, G. A. A geometric constraint engine. Artif.
Intell. 58, 1-3 (1992), 327-360.
[17] Marriott, K., Chok, S. S., and Finlay, A. A tableau
based constraint solving toolkit for interactive
graphical applications. In Principles and Practice of
Constraint Programming--CP98 , vol. 1520 of LNCS,
Springer, 1998, 340-354.
[18] Sannella, M. Skyblue: A multi-way local propagation
constraint solver for user interface construction. In
Proc. ACM UIST, 1994, 137-146.
[19] Takahashi, S. Visualizing constraints in visualization
rules. In Proc. CP2000 Workshop on Analysis and
Visualization of Constraint Programs and Solvers,
2000.
[20] Zhao, J., and Badler, N. I. Inverse kinematics
positioning using nonlinear programming for highly
articulated figures. ACM Trans. Gr. 13, 4 (1994),
313-336.
| layout;scene graphs;3D graphics;geometric layout;constraint satisfaction;3D graphical applications;geometric constraints;graphical objects;behaviors;coordinate transformation |
120 | KDDCS: A Load-Balanced In-Network Data-Centric Storage Scheme for Sensor Networks | We propose an In-Network Data-Centric Storage (INDCS) scheme for answering ad-hoc queries in sensor networks. Previously proposed In-Network Storage (INS) schemes suffered from Storage Hot-Spots that are formed if either the sensors' locations are not uniformly distributed over the coverage area, or the distribution of sensor readings is not uniform over the range of possible reading values. Our K-D tree based Data-Centric Storage (KDDCS) scheme maintains the invariant that the storage of events is distributed reasonably uniformly among the sensors. KDDCS is composed of a set of distributed algorithms whose running time is within a poly-log factor of the diameter of the network. The number of messages any sensor has to send, as well as the bits in those messages, is poly-logarithmic in the number of sensors. Load balancing in KDDCS is based on defining and distributively solving a theoretical problem that we call the Weighted Split Median problem . In addition to analytical bounds on KDDCS individual algorithms , we provide experimental evidence of our scheme's general efficiency, as well as its ability to avoid the formation of storage hot-spots of various sizes, unlike all previous INDCS schemes. | INTRODUCTION
Sensor networks provide us with the means of effectively monitoring
and interacting with the physical world. As an illustrative
example of the type of sensor network application that concerns
us here, consider an emergency/disaster scenario where sensors are
deployed in the area of the disaster [17]. It is the responsibility of
the sensor network to sense and store events of potential interest.
An event is composed of one or more attributes (e.g. temperature,
carbon monoxide level, etc.), the identity of the sensor that sensed
the event, and the time when the event was sensed. As first responders
move through the disaster area with hand-held devices, they
issue queries about recent events in the network. For example, the
first responder might ask for the location of all sensor nodes that
recorded high carbon monoxide levels in the last 15 minutes, or
he might ask whether any sensor node detected movement in the
last minute. Queries are picked up by sensors in the region of the
first responder. The sensor network is then responsible for answering
these queries. The first responders use these query answers to
make decisions on how to best manage the emergency.
The ad-hoc queries of the first responders will generally be multi-dimensional
range queries [9], that is, the queries concern sensor
readings that were sensed over a small time window in the near past
and that fall in a given range of the attribute values. In-Network
Storage (INS) is a storage technique that has been specifically presented
to efficiently process this type of queries. INS involves storing
events locally in the sensor nodes. Storage may be in-network
because it is more efficient than shipping all the data (i.e., raw sensor
readings) out of the network (for example to base stations), or
simply because no out-of-network storage is available. All INS
schemes already presented in literature were Data-Centric Storage
(DCS) schemes [15]. In any In-Network Data-Centric Storage (INDCS
) scheme, there exists a function from events to sensors that
maps each event to an owner sensor based on the value of the attributes
of that event. The owner sensor will be responsible for
storing this event. The owner may be different than the sensor that
originally generated the event. To date, the Distributed Index for
Multi-dimensional data (DIM) scheme [9] has been shown to exhibit
the best performance among all proposed INDCS schemes in
dealing with sensor networks whose query loads are basically composed
of ad-hoc queries.
In DIM [9], the events-to-sensors mapping is based on a K-D tree
[3], where the leaves ℛ form a partition of the coverage area, and
each element of ℛ contains either zero or one sensor. The formation
of the K-D tree consists of rounds. Initially, ℛ is a one-element
set containing the whole coverage area. In each odd/even round r,
each region R ∈ ℛ that contains more than one sensor is bisected
horizontally/vertically. Each time that a region is split, each sensor
in that region has a bit appended to its address specifying which
side of the split the sensor was on. Thus, the length of a sensor's
address (bit-code) is its depth in the underlying K-D tree. When a
sensor generates an event, it maps such event to a binary code based
on a repetitive fixed uniform splitting of the attributes' ranges in a
round robin fashion. For our purposes, it is sufficient for now to
consider the case where the event consists of only one attribute, say
temperature. Then, the high order bits of the temperature are used
to determine a root-to-leaf path in the K-D tree, and if there is a
sensor in the region of the leaf, then this sensor is the owner of this
event. Due to the regularity of regions in this K-D tree, the routing
of an event from the generating sensor to the owner sensor is particularly
easy using Greedy Perimeter Stateless Routing (GPSR) [6].
Full description of DIM is presented in Section 2.
Though it is the best DCS scheme so far, DIM suffers from several
problems. One problem is that events may well be mapped to
orphan regions that contain no sensors. Thus, DIM requires some
kludge to assign orphan regions to neighboring sensors.
Another major problem in DIM is that of storage hot-spots. Storage
hot-spots may occur if the sensors are not uniformly distributed.
A storage hot-spot occurs when relatively many events are assigned
to a relatively small number of the sensors. For example, if there
was only one sensor on one side of the first bisection, then half of
the events would be mapped to this sensor if the events were uniformly
distributed over the range of possible events. Due to their
storage constraints, the presence of a storage hot-spot leads to increasing
the dropping rate of events by overloaded sensors. Clearly,
this has a significant impact on the quality of data (QoD) generated
by the sensor network. Queries for events in a storage hot-spot may
be delayed due to contention at the storage sensors and the surrounding
sensors. More critically, the sensors in and near the hot-spot
may quickly run out of energy, due to the high insertion/query
load imposed on them. This results in a loss of the events generated
at these sensors, the events stored at these sensors, and possibly a
decrease in network connectivity. Increased death of sensors results
in decreasing the coverage area and causes the formation of coverage
gaps within such area. Both of which consequently decrease
QoD. Certainly, it is not desirable to have a storage scheme whose
performance and QoD guarantees rest on the assumption that the
sensors are uniformly distributed geographically.
Storage hot-spots may also occur in DIM if the distribution of
events is not uniform over the range of possible events. It is difficult
to imagine any reasonable scenario where the events are uniformly
distributed over the range of all possible events. Consider the situation
where the only attribute sensed is temperature. One would
expect that most temperature readings would be clustered within a
relatively small range rather than uniform over all possible temperatures
. Without any load balancing, those sensors responsible for
temperatures outside this range would store no events.
In this paper, we provide a load-balanced INDCS scheme based
on K-D trees, that we, not surprisingly, call K-D tree based DCS
(KDDCS). In our KDDCS scheme, the refinement of regions in the
formation of the K-D tree has the property that the numbers of sensors
on both sides of the partition are approximately equal. As a
result of this, our K-D tree will be balanced, there will be no orphan
regions, and, regardless of the geographic distribution of the
sensors, the ownership of events will uniformly distributed over
the sensors if the events are uniformly distributed over the range
of possible events. We present a modification of GPSR routing,
namely Logical Stateless Routing (LSR), for the routing of events
from their generating sensors to their owner sensors, that is competitive
with the GPSR routing used in DIM. In order to maintain
load balance in the likely situation that the events are not uniformly
distributed, we present a re-balancing algorithm that we call K-D
Tree Re-balancing (KDTR). Our re-balancing algorithm guarantees
load balance even if the event distribution is not uniform. KDTR
has essentially minimal overhead. We identify a problem, that we
call the weighted split median problem, that is at the heart of both
the construction of the initial K-D tree, and the re-balancing of the
K-D tree. In the weighted split median problem, each sensor has an
associated weight/multiplicity, and the sensors' goal is to distributively
determine a vertical line with the property that the aggregate
weight on each side of the line is approximately equal. We give a
distributed algorithm for the weighted split median problem, and
show how to use this algorithm to construct our initial K-D tree,
and to re-balance the tree throughout the network lifetime.
We are mindful of the time, message complexity, and node storage
requirements, in the design and implementation of all of our
algorithms. The time for all of our algorithms is within a poly-log
factor of the diameter of the network. Obviously, no algorithm can
have time complexity less than the diameter of the network. The
number of messages, and number of bits in those messages, that
any particular node is required to send by our algorithms is poly-logarithmic
in number of sensors. The amount of information that
each node must store to implement one of our algorithms is logarithmic
in the number of sensors.
Experimental evaluation shows that the main advantages of KDDCS
, when compared to the pure DIM, are:
Achieving a better data persistence by balancing the storage
responsibility among sensor nodes.
Increasing the QoD by distributing the storage hot-spot events
among a larger number of sensors.
Increasing the energy savings by achieving a well balanced
energy consumption overhead among sensor nodes.
The rest of the paper is organized as follows. Section 2 presents
an overview of the differences between DIM and KDDCS. Section
3 describes the weighted split median problem, and our distributed
solution. Section 4 describes the components of KDDCS. Section 5
presents our K-D tree re-balancing algorithm. Experimental results
are discussed in Section 6. Section 7 presents the related work.
OVERVIEW OF DIM VS KDDCS
In this section, we will briefly describe the components of both
schemes, DIM and KDDCS, and highlight the differences between
the two schemes using a simple example.
We assume that the sensors are arbitrarily deployed in the convex
bounded region R. We assume also that each sensor is able to
determine its geographic location (i.e., its x and y coordinates), as
well as, the boundaries of the service area R. Each node is assumed
to have a unique NodeID, like a MAC address. Sensor nodes are
assumed to have the capacity for wireless communication, basic
processing and storage, and they are associated with the standard
energy limitations.
The main components of any DCS scheme are: the sensor to
address mapping that gives a logical address to each sensor, and
the event to owner-sensor mapping that determines which sensor
will store the event. The components of DIM and KDDCS are:
Repetitive splitting of the geographic region to form the underlying
K-D tree, and the logical sensor addresses.
Repetitive splitting of the attribute ranges to form the bit-code
for an event.
The routing scheme to route an event from the generating
sensor to the owner sensor.
We now explain how DIM implements these components.
Let us start with the formation of the K-D tree in DIM. DIM
starts the network operation with a static node to bit-code mapping
phase. In such phase, each sensor locally determines its binary
address by uniformly splitting the overall service area in a round
Figure 1: Initial network configuration
Figure 2: DIM K-D tree
robin fashion, horizontally then vertically, and left shifting its bit-code
with every split by 0 (or 1) bit when falling above (or below)
the horizontal split line (similarly, by a 0 bit if falling on the left
of the vertical split line, or a 1 bit otherwise). Considering the
region as partitioned into zones, the process ends when every sensor
lies by itself in a zone, such that the sensor address is the zone bit
code. Thus, the length of the binary address of each sensor (in bits)
represents its depth in the underlying K-D tree. Note that from
a sensor address, one can determine the physical location of the
sensor. In case any orphan zones exist (zones physically containing
no sensors in their geographic area), the ownership of each of these
zones is delegated to one of its neighbor sensors. As an example,
consider the simple input shown in Figure 1. The K-D tree formed
by DIM is shown in Figure 2. In this figure, the orphan zone (01)
is assumed to be delegated to node 001, which is the least loaded
among its neighbors.
We now turn to the construction of an event bit-code in DIM.
The generation of the event bit-code proceeds in rounds. As we
proceed, there is a range R_j associated with each attribute j of the
event. Initially, the range R_j is the full range of possible values for
attribute j. We now describe how a round i ≥ 0 works. Round i
determines the (i+1)-th high order bit in the code. Round i depends
on attribute j = i mod k of the event, where k is the number of
attributes in the event. Assume the current value of R_j is [a, c], and
let b = (a + c)/2 be the midpoint of the range R_j. If the value of
attribute j is in the lower half of the range R_j, that is in [a, b], then
the i-th bit is 0, and R_j is set to be the lower half of R_j. If the value
of attribute j is in the upper half of the range R_j, that is in [b, c],
then the i-th bit is 1, and R_j is set to be the upper half of R_j.
To show the events to bit-code mapping in DIM, consider that
the events in our example (shown in Figure 2) are composed of
two attributes, temperature and pressure, with ranges (30, 70) and
(0, 2), respectively. Let an event with values (55, 0.6) be generated
by Node N3(11). The 4 high-order bits of the bit-code for this
event are 1001. This is because temperature is in the top half of
the range [30, 70], pressure is in the bottom half of the range [0, 2],
then temperature is in the bottom half of the range [50, 70], and
pressure is in the top half of the range [0, 1]. Thus, the event should
be routed toward the geometric location specified by code 1001.
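A direct transcription of this splitting procedure into Java (our illustration; the paper does not give code for DIM) reproduces the code 1001 for the event (55, 0.6).

// Computes the DIM event bit-code by round-robin halving of attribute ranges.
// Illustrative only; ties at a midpoint are sent to the upper half here.
public class DimEventCode {
    static String bitCode(double[] event, double[] lo, double[] hi, int bits) {
        StringBuilder code = new StringBuilder();
        int k = event.length;
        for (int i = 0; i < bits; i++) {
            int j = i % k;                       // round-robin over attributes
            double mid = (lo[j] + hi[j]) / 2;
            if (event[j] < mid) {                // lower half of R_j -> bit 0
                code.append('0');
                hi[j] = mid;
            } else {                             // upper half of R_j -> bit 1
                code.append('1');
                lo[j] = mid;
            }
        }
        return code.toString();
    }

    public static void main(String[] args) {
        double[] event = { 55, 0.6 };            // (temperature, pressure)
        double[] lo = { 30, 0 };                 // lower ends of the attribute ranges
        double[] hi = { 70, 2 };                 // upper ends of the attribute ranges
        System.out.println(bitCode(event, lo, hi, 4));   // prints 1001
    }
}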
In DIM, an event is routed using Greedy Perimeter Stateless
Routing (GPSR) [6] to the geographic zone with an address matching
the high order bits of the event bit-code. In our example, the
sensor 10 will store this event since this is the sensor that matches
Figure 3: KDDCS K-D tree
the high order bits of the bit-code 1001. If there is no sensor in this
region, then, the event is stored in a neighboring region.
We now highlight the differences between our proposed KDDCS
scheme, and DIM. The first difference is how the splitting is accomplished
during the formation of the K-D tree. In KDDCS, the split
line is chosen so that there are equal numbers of sensors on each
side of the split line. Recall that, in DIM, the split line was the geometric
bisector of the region. Thus, in KDDCS, the address of a
sensor is a logical address and does not directly specify the location
of the sensor. Also, note that the K-D tree in KDDCS will be balanced
, while this will not be the case in DIM if the sensors are not
uniformly distributed. This difference is illustrated by the K-D tree
formed by KDDCS shown in Figure 3 for the same simple input
shown in Figure 1. The second difference is that in determining the
owner sensor for an event, the range split point b need not be the
midpoint of the range R_j. The value of b is selected to balance the
number of events in the ranges [a, b] and [b, c]. Thus, in KDDCS,
the storage of events will be roughly uniform over the sensors. The
third difference is that, since addresses are not geographic, KDDCS
needs a routing scheme that is more sophisticated than GPSR.
THE WEIGHTED SPLIT MEDIAN PROBLEM
Before presenting our KDDCS scheme, we first define the weighted
split median problem in the context of sensor networks and present
an efficient distributed algorithm to solve the problem. Each sensor
s_i initially knows w_i associated values v_1, . . . , v_{w_i}. Let W =
Σ_{i=1}^{n} w_i be the number of values. The goal for the sensors is to
come to agreement on a split value V with the property that approximately
half of the values are larger than V and half of the values
are smaller than V .
We present a distributed algorithm to solve this problem. The
time complexity of our algorithm is O(log n) times the diameter of
the communication network in general, and O(1) times the diameter
if n is known a priori within a constant factor. Each node is
required to send only O(log n) sensor ID's. The top level steps of
this algorithm are:
1. Elect a leader sensor s, and form a breadth first search (BFS)
tree T of the communication network that is rooted at s.
2. The number of sensors n, and the aggregate number of values
W, are reported to s.
3. The leader s collects a logarithmically-sized uniform random
sample L of the values. The expected number of times
that a value from sensor s_i is included in this sample is
(w_i log n)/W.
4. The value of V is then the median of the reported values in
L, which s reports to all of the sensors.
We need to explain how these steps are accomplished, and why the
algorithm is correct.
We start with the first step. We assume that each sensor has a
lower bound k on the number of sensors in R. If a sensor has no
idea of the number of other sensors, it may take k = 2.
Then, each sensor decides independently, with probability (ln k)/k,
to become a candidate for the leader. Each candidate sensor s_c initiates
the construction of a BFS tree of the communication graph
rooted at s_c by sending a message Construct(s_c) to its neighbors.
Assume a sensor s_i gets a message Construct(s_c) from sensor s_j.
If this is the first Construct(s_c) message that it has received, and
s_c's ID is larger than the ID of any previous candidates in prior
Construct messages, then:

s_i makes s_j its tentative parent in the BFS tree T, and
forwards the Construct(s_c) message to its neighbors.
If the number of candidates was positive, then, after time proportional
to the diameter of the communication network, there will
be a BFS tree T rooted at the candidate with the largest ID. Each
sensor may estimate an upper bound for the diameter of the communication
graph to be the diameter of R divided by the broadcast
radius of a sensor. After this time, the sensors know that they have
reached an agreement on T , or that there were no candidates. If
there were no candidates, each sensor can double its estimate of
k, and repeat this process. After O(log n) rounds, it will be the
case that k = Θ(n). Once k = Θ(n), then, with high probability
(that is, with probability 1 - 1/poly(n)), the number of candidates
is Θ(log n). Thus, the expected time complexity to accomplish
the first step is O(n log n). Assuming that each ID has O(log n)
bits, the expected number of bits that each sensor has to send is
O(log^2 n), since there are likely only O(log n) candidates on
the first and only round in which there is a candidate. A log n factor
can be removed if each sensor initially knows an estimate of n
that is accurate to within a multiplicative constant factor.
The rest of the steps will be accomplished by waves of root-to-leaves
and leaves-to-root messages in T . The second step is easily
accomplished by a leaves-to-root wave of messages reporting on the
number of sensors and number of values in each subtree. Let T_i be
the subtree of T rooted at sensor s_i, and W_i the aggregate number
of values in T_i. The value W_i that s_i reports to its parent is w_i
plus the aggregate values reported to s_i by its children in T. The
sensor count that s_i reports to its parent is one plus the sensor
counts reported to s_i by its children in T.
The third step is also accomplished by a root-to-leaves wave and
then a leaves-to-root wave of messages. Assume a sensor s_i wants
to generate a uniform random sample L_i of the values stored
in the sensors in T_i. The value of L_i for the leader is Θ(log n).
Let s_{i_1}, . . . , s_{i_d} be the children of s_i in T. Node s_i generates the
results of L_i Bernoulli trials, where each trial has d + 1 outcomes
corresponding to s_i and its d children. The probability that the
outcome of a trial is s_i is w_i/W_i, and the probability that the outcome
is the child s_{i_j} is W_{i_j}/W_i. Then, s_i informs each child s_{i_j} how often it
was selected, which becomes the value of L_{i_j}. s_i then waits until it
receives samples back from all of its children. s_i then unions these
samples, plus a sample of values of the desired size from itself, and
then passes that sample back to its parent. Thus, each sensor has to
send O(log n) ID's.
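The per-node work in this step can be pictured with the following Java sketch (ours): given its own value count and the aggregate counts reported by its children, a node splits its requested sample size over itself and its children by independent categorical trials with the probabilities above.

import java.util.Random;

// Allocates a requested sample size over a node and its children,
// proportionally to their value counts. Illustrative sketch only.
public class SampleAllocation {
    // counts[0] is the node's own count w_i; counts[1..d] are the aggregate
    // counts reported by its d children; the counts sum to W_i.
    static int[] allocate(int[] counts, int trials, Random rnd) {
        long total = 0;
        for (int c : counts) total += c;
        int[] alloc = new int[counts.length];
        for (int t = 0; t < trials; t++) {
            long pick = (long) (rnd.nextDouble() * total);  // uniform in [0, W_i)
            for (int j = 0; j < counts.length; j++) {
                if (pick < counts[j]) { alloc[j]++; break; }
                pick -= counts[j];
            }
        }
        return alloc;
    }

    public static void main(String[] args) {
        // a node holding 10 values whose two children report 30 and 60 values,
        // splitting a requested sample of size 20
        int[] alloc = allocate(new int[] { 10, 30, 60 }, 20, new Random());
        // printed counts are roughly proportional to 10 : 30 : 60
        System.out.println(java.util.Arrays.toString(alloc));
    }
}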
The leader s then sets V to be the median of the values of the
sample L, then, in a root-to-leaves message wave, informs the other
sensors of the value of V .
Figure 4: Logical address assignment algorithm
We now argue that, with high probability, the computed median
of the values is close to the true median. Consider a value V̂ such
that only a fraction α < 1/2 of the values are less than V̂. One
can think of each sampled value as being a Bernoulli trial with outcomes
less and more depending on whether the sampled value is
less than V̂. The number of less outcomes is binomially distributed
with mean αL. In order for the computed median to be less than
V̂, one needs the number of less outcomes to be at least L/2, or
equivalently (1/2 - α)L more than the mean αL. But the probability
that a binomially distributed variable exceeds its mean μ by a factor
of 1 + δ is at most e^(-δ²μ/3). Thus, by picking the multiplicative constant
in the sample size to be sufficiently large (as a function of α),
one can guarantee that, with high probability, the number of values
less than the computed median V cannot be much more than L/2.
A similar argument shows that the number of values more than the computed
median V cannot be much more than L/2.
If the leader finds that n is small in step 2, it may simply ask all
sensors to report on their identities and locations, and then compute
V directly.
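For reference, a centralized version of the weighted split median is straightforward; the Java sketch below (ours) sorts the coordinate values and returns the smallest value V at which at least half of the total weight has been seen. The contribution of the distributed algorithm above is to approximate this V while each sensor sends only a poly-logarithmic number of ID's.

import java.util.Arrays;
import java.util.Comparator;

// Centralized weighted split median: values[i] is a sensor coordinate and
// weights[i] its multiplicity w_i. Illustrative sketch only.
public class WeightedSplitMedian {
    static double splitValue(double[] values, int[] weights) {
        Integer[] order = new Integer[values.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(i -> values[i]));

        long total = 0;
        for (int w : weights) total += w;

        long seen = 0;
        for (int i : order) {
            seen += weights[i];
            if (2 * seen >= total) return values[i];   // half of the weight reached
        }
        return values[order[order.length - 1]];
    }

    public static void main(String[] args) {
        double[] xs = { 1.0, 2.0, 5.0, 9.0 };
        int[] ws = { 1, 1, 3, 1 };
        System.out.println(splitValue(xs, ws));   // prints 5.0
    }
}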
Now that we solved the weighted split median problem, we present
the components of the KDDCS scheme in the next section.
KDDCS
We now present our KDDCS scheme in detail. We explain how the initial K-D tree is constructed, how events are mapped to sensors, and how events are routed to their owner sensors.
4.1 Distributed Logical Address Assignment Algorithm
The main idea of the algorithm is that the split lines used to construct the K-D tree are selected so that each of the two resulting regions contains an equal number of sensors. The split line can be determined using our weighted split median algorithm with each sensor having unit weight, and the value for each sensor being either its x coordinate or its y coordinate. The recursive steps of the algorithm are shown in Figure 4. We now describe in somewhat greater detail how a recursive step works.
The algorithm starts by partitioning the complete region R horizontally. Thus, the distributed weighted split median algorithm (presented in section 3) is applied for R using the y-coordinates of the sensors as the values sent toward the BFS root. Upon determining the weighted split median of R, sensors having a lower y-coordinate than the median value (we refer to these sensors as those falling in the lower region of R) set their logical address to 0. On the other hand, those sensors falling in the upper region of R assign themselves a logical address of 1. At the end of the first recursive step, the terrain can be viewed as split into two equally logically loaded partitions (in terms of the number of sensors falling in each partition).
At the next step, the weighted split median algorithm is applied locally in each of the sub-regions (lower/upper), while using the sensors' x-coordinates, thus partitioning the sub-regions vertically rather than horizontally. Similarly, sensors' logical addresses are updated by left-shifting them with a 0 bit for those sensors falling in the lower regions (in other words, sensor nodes falling on the left of the weighted median line), or with a 1 bit for sensor nodes falling in the upper regions (i.e., sensor nodes falling on the right of the weighted median line).
The algorithm continues to be applied distributively by the different
subtrees until each sensor obtains a unique logical address,
using x and y coordinates of sensors, in a round robin fashion, as
the criterion of the split. The algorithm is applied in parallel on
the different subtrees whose root nodes fall at the same tree level.
At the i-th recursive step, the algorithm is applied at all intermediate nodes falling at level i − 1 of the tree. Based on the definition of the
weighted split median problem, the algorithm results in forming a
balanced binary tree, such that sensors represent leaf nodes of this
tree (intermediate nodes of the tree are logical nodes, not physical
sensors). The algorithm terminates in log n recursive steps. At the
end of the algorithm, the size of the logical address given to each
sensor will be log n bits.
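The recursion can be illustrated with a small centralized simulation. The sketch below assumes sensors are given as (x, y) points with unit weights and simply sorts to find the split median; in the real network this split is of course computed by the distributed weighted split median algorithm, and all names here are illustrative.

```python
def assign_addresses(sensors, depth=0, prefix=""):
    """sensors: list of (x, y) tuples; returns {(x, y): bit-string logical address}.
    Splits on the y coordinate at even depths and on x at odd depths, mirroring the
    horizontal-first, round-robin splitting described above (unit weights, the
    coordinate itself as the value)."""
    if len(sensors) == 1:
        return {sensors[0]: prefix}
    axis = 1 if depth % 2 == 0 else 0            # y first, then x, round robin
    ordered = sorted(sensors, key=lambda s: s[axis])
    half = len(ordered) // 2                     # split median: equal sensor counts
    lower, upper = ordered[:half], ordered[half:]
    addresses = {}
    addresses.update(assign_addresses(lower, depth + 1, prefix + "0"))
    addresses.update(assign_addresses(upper, depth + 1, prefix + "1"))
    return addresses

points = [(2, 7), (5, 1), (9, 4), (4, 8), (7, 6), (1, 3), (8, 9), (3, 2)]
for sensor, addr in sorted(assign_addresses(points).items(), key=lambda kv: kv[1]):
    print(addr, sensor)
```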
Recall that the time complexity of our weighted split median algorithm is O(d log n), where d is the diameter of the region. Thus, as the depth of our K-D tree is O(log n), we get that the time complexity for building the tree is O(d log² n). If the sensors are uniformly distributed, then, as the construction algorithm recurses, the diameters of the regions will be geometrically decreasing. Thus, in the case of uniformly distributed sensors, one would expect the tree construction to take time O(d log n). As our weighted split median algorithm requires each sensor to send O(log n) ID's, and our K-D tree has depth O(log n), we can conclude that during the construction of our K-D tree, the number of ID's sent by any node is O(log² n).
4.2
Event to Bit-code Mapping
In this section, we explain how the event to bit-code mapping
function is determined. Recall that the main idea is to set the split
points of the ranges so that the storage of events is roughly uniform
among sensor nodes. To construct this mapping requires a probability
distribution on the events. In some situations, this distribution
might be known. For example, if the network has been operational
for some period of time, a sampling of prior events might be used
to estimate a distribution. In cases where it is not known, say when
a network is first deployed, we can temporarily assume a uniform
distribution.
In both cases, we use the balanced binary tree as the base tree
to overlay the attribute-specific K-D tree on (Recall that a K-D tree
is formed by k attributes). This is basically done by assigning a
range for each of the k attributes to every intermediate node in the
tree. Note that the non-leaf nodes in the K-D tree are logical nodes
that do not correspond to any particular sensor. One may think of
non-leaf nodes as regions. Any split point p of a node x at tree level l, where l mod k = i, represents a value of attribute i. Such a split point partitions the range of attribute i falling under the responsibility of node x into two subranges such that the subrange lower than p is assigned to the left child of x, while the other subrange is assigned to x's right child. Note that the other k − 1 ranges of node x, corresponding to the remaining k − 1 attributes, are simply inherited by both children of x.
Knowing the data distribution, the split points of the tree should
be predefined in a way to cope with any expected irregularity in
the load distribution among the K-D tree leaf nodes. For example,
given an initial temperature range (30, 70) and knowing that 50%
of the events will fall in the temperature range (65, 70), the root
split point should be defined as 65 (assuming that the temperature
is the first attribute in the event). Therefore, based on the selected
root split point, the left child subtree of the root will be responsible
for storing events falling in the temperature range (30, 65),
while the right child subtree will store events falling in the range
(65, 70). Figure 3 gives an example of non-uniform initialization
of split points.
We finish by describing what information is stored in each sensor
node. Each sensor node corresponds to a leaf in the K-D tree. Each
sensor knows its logical address in the tree. Further, each leaf in
the K-D tree knows all the pertinent information about each of its
ancestors in the tree. The pertinent information about each node is:
The geographic region covered.
The split line separating its two children.
The attribute range, and attribute split point, associated with
this region.
From this information, each leaf/sensor can determine the range of
events that will be stored at this sensor. Note that each sensor only
stores O(log n) information about the K-D tree.
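As an illustration of how a sensor uses this stored information, the following sketch walks a K-D tree of split points from the root down to the owner's logical address. The dictionary representation of the tree is our own simplification, not the paper's data structure.

```python
# Each internal node of the K-D tree: {"attr": i, "split": value, "low": child, "high": child};
# a leaf is simply the owner sensor's bit-string logical address.
def owner_address(event, node):
    """Descend the tree, comparing the event's attribute against each node's split
    point, until a leaf (a logical address) is reached."""
    while isinstance(node, dict):
        side = "low" if event[node["attr"]] < node["split"] else "high"
        node = node[side]
    return node

# Two-attribute example (temperature, pressure) with the non-uniform root split at 65
# from the text; the lower levels here are invented for illustration.
tree = {"attr": 0, "split": 65,
        "low":  {"attr": 1, "split": 0.5, "low": "00", "high": "01"},
        "high": {"attr": 1, "split": 0.5, "low": "10", "high": "11"}}
print(owner_address((40.0, 0.7), tree))   # -> '01'
```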
4.3
Incremental Event Hashing and Routing
Strictly speaking, the events-to-sensors mapping in DIM actually
produces a geographic location. GPSR routing can then be used to
route that event towards that geographic location. If the destination
is contained in a leaf region with one sensor, then that sensor stores
the event. If the leaf region is an orphan, then one of the sensors in
the neighboring regions will store this event.
In our scheme, the events-to-sensors mapping provides a logical
address. Essentially, all that the sensor generating the event can
determine from this logical address is a general direction of the
owner sensor. Thus, our routing protocol, which we call Logical
Stateless Routing (LSR), is in some sense less direct.
LSR operates in O(log n) rounds. We explain how a round works. Assume that a source sensor with a logical address s wants to route an event e to a sensor with logical address t. However, s does not know the identity of the sensor t. Recall that s knows the pertinent information about its ancestors in the K-D tree. In particular, s knows the range split values of its ancestors. Thus, s can compute the least common ancestor (LCA) of s and t in the K-D tree. Assume that the first bit of disagreement between s and t is the ℓ-th bit. So, the least common ancestor (LCA) of s and t in the K-D tree has depth ℓ. Let R be the region corresponding to the LCA of s and t, L the split line corresponding to this region, and R_0 and R_1 the two subregions of R formed by L. Without loss of generality, assume that s ∈ R_0 and t ∈ R_1. From its own address, and the address of t, the sensor s can conclude that t is in the region R_1. Recall that s knows the location of the split line L. The sensor s computes a location x in the region R_1. For concreteness here, let us assume that x is some point in R_1 that lies on the line intersecting s and perpendicular to L (although there might be some advantages to selecting x to be the geometric center of the region R_1). LSR then directs a message toward the location x using GPSR. The message contains an additional field noting that this is an ℓ-th round message. The ℓ-th round terminates when this message first reaches a sensor s′ whose address agrees with the address of t in the first ℓ + 1 bits. The sensor s′ will be the first sensor reached in R_1. Round ℓ + 1 then starts with s′ being the new source sensor.
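A rough sketch of the address arithmetic performed in one LSR round is given below. It assumes each sensor can map a logical prefix to its bounding box (the helper region_of_prefix stands in for the ancestor information each sensor stores), and it picks the geometric center of R_1 as the forwarding target, one of the options mentioned above; GPSR itself is not modeled.

```python
def first_disagreement(s, t):
    """Depth of the first bit where logical addresses s and t differ."""
    for i, (a, b) in enumerate(zip(s, t)):
        if a != b:
            return i
    return len(s)

def lsr_round(s_addr, t_addr, region_of_prefix):
    """One LSR round at the sensor with address s_addr, routing toward t_addr.
    region_of_prefix(prefix) -> (xmin, ymin, xmax, ymax), the bounding box of that
    logical subregion (derivable from the ancestor information each sensor keeps).
    Returns the prefix the message must reach and a point x inside that subregion."""
    ell = first_disagreement(s_addr, t_addr)
    target_prefix = t_addr[:ell + 1]                  # R_1: the sibling subregion holding t
    xmin, ymin, xmax, ymax = region_of_prefix(target_prefix)
    x = ((xmin + xmax) / 2.0, (ymin + ymax) / 2.0)    # e.g. the geometric center of R_1
    return target_prefix, x                           # hand x to GPSR; the round ends when
                                                      # a sensor matching target_prefix is hit

# Toy geometry: the terrain is [0,16]x[0,16]; each address bit halves it, y first then x.
def toy_region(prefix, box=(0.0, 0.0, 16.0, 16.0)):
    xmin, ymin, xmax, ymax = box
    for depth, bit in enumerate(prefix):
        if depth % 2 == 0:                            # even depth: horizontal split (y)
            ymid = (ymin + ymax) / 2.0
            ymin, ymax = (ymin, ymid) if bit == "0" else (ymid, ymax)
        else:                                         # odd depth: vertical split (x)
            xmid = (xmin + xmax) / 2.0
            xmin, xmax = (xmin, xmid) if bit == "0" else (xmid, xmax)
    return (xmin, ymin, xmax, ymax)

print(lsr_round("0101", "0011", toy_region))          # route toward subregion '00'
```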
We explain how range queries are routed by means of an example
. This example also essentially illustrates how events are stored.
Figure 5 gives an example of a multi-dimensional range query and
shows how to route it to its final destination. In this example, a
multi-dimensional range query arises at node N7 (111) asking for the number of events falling in the temperature range (30, 32) and pressure range (0.4, 1) that were generated throughout the last 2 minutes. Node N7 knows that the range split point for the root was temperature 40, and thus, this query needs to be routed toward the left subtree of the root, or geometrically toward the top of the region, using GPSR. The first node in this region that this query reaches is, say, N3. Node N3 knows that the first relevant split point is pressure = 0.5. Thus, the query is partitioned into two sub-queries, ((30, 32), (0.4, 0.5)) and ((30, 32), (0.5, 1)). When processing the first subquery, node N3 forwards it to the left using GPSR. N3 can then tell that the second subquery should be routed to the other side of its parent in the K-D tree since the range split for its parent is temperature 34. The logical routing of this query is shown on the right in Figure 5, and a possible physical routing of this query is shown on the left in Figure 5.
Figure 5: Example of routing a query on KDDCS
As LSR does not initially know the geometric location of the
owner sensor, the route to the owner sensor cannot possibly be as
direct as it is in DIM. But, we argue that the length of the route in
LSR should be at most twice the length of the route in DIM. Assume
for the moment that all messages are routed by GPSR along
the direct geometric line between the source sensor and the destination
location. Let us assume, without loss of generality, that LSR is
routing horizontally in the odd rounds. Then, the routes used in the
odd rounds do not cross any vertical line more than once. Hence,
the sum of the route distances used by LSR in the odd rounds is
at most the diameter of the region. Similarly, the sum of the route
distances used by LSR in the even rounds is at most the diameter of
the region. Thus, the sum of the route distances for LSR, over all
rounds, is at most twice the diameter. The geometric distance between
the source-destination pair in DIM is obviously at most the
diameter. So we can conclude that the length of the route found by
LSR is at most twice the length of the route found by DIM, assuming
that GPSR is perfect. In fact, the only assumption that we need
about GPSR to reach this conclusion is that the length of the path
found by GPSR is roughly a constant multiple times the geometric
distance between the source and destination. Even this factor
of two can probably be improved significantly in expectation if the
locations of the sensors are roughly uniform. A simple heuristic would be to make the location of the target x equal to the location that the destination sensor t would have if the sensors in R_1 were uniformly distributed. The location of x can easily be calculated by the source sensor s given information local to s.
KDTR K-D TREE RE-BALANCING ALGORITHM
Based on the KDDCS components presented so far, KDDCS avoids the formation of storage hot-spots resulting from skewed sensor deployments, and from skewed event distributions if the distribution of events is known a priori. However, storage hot-spots may still form if the initially presumed event distribution was not correct, or if the event distribution evolves over time. We present a K-D tree re-balancing algorithm, KDTR, to re-balance the load.
In the next subsections, we first explain how to determine the
roots of the subtrees that will re-balance, and then show how a re-balancing
operation on a subtree works. We assume that this re-balancing
is performed periodically with a fixed period.
5.1
Selection of Subtrees to be Re-Balanced
The main idea is to find the highest unbalanced nodes in the K-D tree. A node is unbalanced if the ratio of the number of events
in one of the child subtrees over the number of events stored in
the other child subtree exceeds some threshold h. This process of
identifying nodes to re-balance proceeds in O(log n) rounds from
the leaves to the root of the K-D tree.
We now describe how round i ≥ 1 works. Intuitively, round i occurs in parallel on all subtrees rooted at nodes of height i + 1 in the K-D tree. Let x be a node of height i + 1. Let the region associated with x be R, the split line be L, and the two subregions of R be R_0 and R_1. At the start of this round, each sensor in R_0 and R_1 knows the number of stored events C_0 and C_1 in R_0 and R_1, respectively. The count C_0 is then flooded to the sensors in R_1, and the count C_1 is flooded to the sensors in R_0. After this flooding, each sensor in R knows the number of events stored in R, and also knows whether the ratio max(C_0/C_1, C_1/C_0) exceeds h.
The time complexity per round is linear in the diameter of a region considered in that round. Thus, the total time complexity is O(D log n), where D is the diameter of the network, as there are O(log n) rounds. The number of messages sent per node i in a round is O(d_i), where d_i is the degree of node i in the communication network. Thus, the total number of messages sent by a node i is O(d_i log n).
Re-balancing is then performed in parallel on all unbalanced nodes that have no unbalanced ancestors. Note that every leaf knows whether an ancestor will re-balance and, if so, the identity of the unique ancestor that will re-balance. All the leaves of a node that will re-balance will be aware of this at the same time.
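A centralized sketch of this bottom-up selection is shown below, assuming each node of the logical tree carries the event count of its subtree; in the real protocol these counts are obtained by the flooding waves just described. The structure and names are ours.

```python
def find_rebalance_roots(node, h):
    """node: {"count": events_in_subtree, "children": [left, right]} (leaves have no
    "children" key). Returns the unbalanced nodes that have no unbalanced ancestor,
    which are exactly the subtree roots KDTR re-balances in parallel."""
    children = node.get("children", [])
    if len(children) == 2:
        c0, c1 = children[0]["count"], children[1]["count"]
        ratio = max(c0, c1) / max(1, min(c0, c1))     # guard against empty subtrees
        if ratio > h:
            return [node]                             # re-balance here; skip descendants
    roots = []
    for child in children:
        roots += find_rebalance_roots(child, h)
    return roots

# Example: within the right subtree the ratio is 27/3 = 9 > h, so it is selected.
tree = {"count": 40, "children": [
    {"count": 10, "children": [{"count": 4}, {"count": 6}]},
    {"count": 30, "children": [{"count": 27}, {"count": 3}]}]}
print([n["count"] for n in find_rebalance_roots(tree, h=3)])   # -> [30]
```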
5.2
Tree Re-balancing Algorithm
Let x be an internal node of the K-D tree that needs to be re-balanced. Let the region associated with x be R. Let the attribute associated with node x be the j-th attribute. So, we need to find a new attribute split L for the j-th attribute for node x. To accomplish this, we apply the weighted split median procedure, where the weight w_i associated with sensor i is the number of events stored at sensor i, and the values are the j-th attributes of the w_i events stored at that sensor. Thus, the computed attribute split L has the property that, in expectation, half of the events stored in R have their j-th attribute larger than L, and half of the events stored in R have their j-th attribute smaller than L.
Let R_0 and R_1 be the two subregions of R. Eventually, we want to recursively apply this process in parallel to the regions R_0 and R_1. But before recursing, we need to route some events from one of R_0 or R_1 to the other. The direction of the routing depends on whether the attribute split value became larger or smaller. Let us assume, without loss of generality, that events need to be routed from R_0 to R_1. Consider an event e stored at a sensor s in R_0 that needs to be routed to R_1. The sensor s picks a destination logical address t, uniformly at random, from the possible addresses in the region R_1. The event e is then routed to t using the routing scheme described in section 4.3. The final owner for e in R_1 cannot be determined until our process is recursively applied to R_1, but this process cannot be recursively applied until the events that should be stored in R_1 are contained in R_1. The fact that the destination addresses in R_1 were picked uniformly at random ensures load balance.
This process can now be recursively applied to R_0 and R_1.
Figure 6: KDDCS original K-D tree
We now discuss the complexity of this procedure. We break the complexity into two parts: the cost of performing the weighted split median operation, and the cost of migrating the events. One application of the weighted split median has time complexity O(D log n), where D is the diameter of the region, and O(log² n) messages sent per node. Thus, we get time complexity O(D log² n) and O(log³ n) messages sent per node for all of the applications of weighted split median. Every periodic re-balance requires each event to travel at most twice the diameter of the network (assuming that GPSR routes on a direct line). The total number of events that can be forced to migrate as a result of k new events being stored is O(k log k). Thus, the amortized number of migrations per event is logarithmic, O(log k), in the number of insertions. This amount of re-balancing per insertion is required for any standard dynamic data structure (e.g., 2-3 trees, AVL trees, etc.).
Figures 6 and 7 show a detailed example illustrating how KDTR works. Continuing the example we presented in Section 4.2, we monitor how KDTR maintains the balance of the K-D tree over the course of successive insertions. Starting with an equal number of 3 events stored at each sensor, a storage hot-spot arises at node N7 after 6 event insertions. By checking the ratio of N7's storage to that of its sibling, KDTR identifies the subtree rooted at node 11 as an unbalanced subtree. As none of node 11's ancestors is unbalanced at this point, KDTR selects 11 to be re-balanced. However, the storage load remains skewed toward subtree 1, thus, after another 6 insertions, KDTR re-balances the subtree rooted at 1. After 12 more insertions aimed at the right subtree of the root, KDTR re-balances the root of the tree, basically changing the attribute-based split points of almost all internal nodes in order to maintain the balance of the tree. Note that, as the average load of the sensors falling outside the hot-spot area increases, the frequency of re-balancing decreases.
We digress slightly to explain a method that one might use to trigger re-balancing, as opposed to fixed time period re-balancing. Each sensor s_i knows the number of events that were stored in each region corresponding to an ancestor of s_i in the K-D tree when this region was last re-balanced. Let C_j be the number of events at the last re-balancing of the region R_j corresponding to the node of depth j on the path from the root to s_i in the K-D tree. Assume that the region R_j has elected a leader s_j. Then, the number of events that have to be stored in R_j, since the last re-balancing, to cause another re-balancing in R_j is something like hC_j, where h is the unbalancing ratio that we are willing to tolerate. Then, each insertion to s_i is reported by s_i to s_j with probability something like (log n)/(hC_j). Thus, after seeing Θ(log n) such notifications, the leader s_j can be confident that there have been very close to hC_j insertions into the region R_j, and a re-balancing might be warranted. Note that the role of leader requires only receiving O(log n) messages.
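A small simulation of this probabilistic trigger is sketched below with illustrative parameter values; the exact constants and the base of the logarithm are left unspecified in the text, so the choices here are only indicative.

```python
import math
import random

def maybe_notify(n, h, c_j):
    """Report this insertion to the leader s_j with probability ~ (log n) / (h * C_j)."""
    p = min(1.0, math.log(n) / (h * c_j))
    return random.random() < p

def leader_should_rebalance(notifications, n):
    """After on the order of log n notifications, roughly h * C_j insertions happened."""
    return notifications >= math.ceil(math.log(n))

# Simulation: n = 1024 sensors, tolerated ratio h = 3, C_j = 200 events at last re-balance,
# so a re-balance should be considered after roughly 600 further insertions.
random.seed(0)
n, h, c_j = 1024, 3, 200
notes = 0
for insertion in range(1, 10001):
    if maybe_notify(n, h, c_j):
        notes += 1
    if leader_should_rebalance(notes, n):
        print("re-balance considered after", insertion, "insertions")
        break
```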
Figure 7: KDTR example
EXPERIMENTAL RESULTS
In order to evaluate our KDDCS scheme, we compared its performance with that of the DIM scheme, which has been shown to be the best among current INDCS schemes [9].
In our simulation, we assumed having sensors of limited buffer
and constrained energy. We simulated networks of sizes ranging
from 50 to 500 sensors, each having an initial energy of 50 units,
a radio range of 40m, and a storage capacity of 10 units. For simplicity
, we assumed that the size of a message is equal to the size
of a storage unit. We also assumed that the size of a storage unit
is equal to the size of an event. When sent from a sensor to its
neighbor, a message consumes 1 energy unit from the sender energy
and 0.5 energy unit from the receiver energy. The service area
was computed such that each node has on average 20 nodes within
its nominal radio range.
As each sensor has a limited storage capacity, it is assumed to
follow a FIFO storage approach to handle its cache. Thus, a sensor
replaces the oldest event in its memory by the newly incoming
event to be stored in case it is already full when receiving this new
event.
We modeled a network of temperature sensors. The range of possible
reading values was [30, 70]. We modeled storage hot-spots by
using a random uniform distribution to represent sensors' locations,
while using a skewed distribution of events among the attributes
ranges. Note that the regular sensor deployment assumption does
not affect our ability to assess the effectiveness of KDDCS as the
storage hot-spot can result from either skewed sensor deployments,
or skewed data distributions, or both. The storage hot-spot size is
characterized by the skewness dimensions, which are the percentage
of the storage hot-spot events to the total number of events
generated by the sensor network and the percentage of the readings' range in which the hot-spot events fall to the total possible range of temperature readings. We assumed that a single storage hot-spot is imposed on the sensor network. To follow the behavior of KDDCS toward storage hot-spots of various sizes, we simulated, for each network size, a series of hot-spots where a percentage of 10% to 80% of the events fell into a percentage of 5% to 10% of the readings' range. Note that we always use the term x%-y% hot-spot to describe a storage hot-spot where x% of the total generated events fall into y% of the readings' range.
Figure 8: Number of dropped events for networks with a 80%-10% hot-spot
Figure 9: Number of dropped events for networks with a 80%-5% hot-spot
We used a uniform split points initialization to set up the attribute range responsibilities of all internal nodes of the K-D tree. For the re-balancing threshold, we used a value of 3 to determine that a specific subtree is unbalanced. Node failures were handled in the same way as in DIM. When a node fails, its stored events are considered lost. Further events directed to the range responsibility of such a node are directed to one of its close neighbors.
We ran the simulation for each network size and storage hot-spot
size pair. Each simulation run consisted of two phases: the insertion
phase and the query phase. During the insertion phase, each
sensor generates (i.e., reads) 5 events, according to the predefined hot-spot size and distribution, and forwards each of these events to its owner sensor. In the query phase, each sensor generates queries of sizes ranging from 10% to 90% of the [30, 70] range. The query phase is meant to measure the damage, in terms of QoD and energy losses, caused by the storage hot-spot.
The results of the simulations are shown in Figures 8 to 17. In these figures, we compare the performance of our KDDCS scheme versus that of the DIM scheme with respect to various performance measures. Note that we only show some of our findings due to space constraints.
R1. Data Persistence: Figures 8 and 9 present the total number of events dropped by all network nodes in networks with 80%-10% and 80%-5% hot-spots, respectively.
Figure 10: Query size of a 50% query for networks with a 80%-10% hot-spot
Figure 11: Query size of a 80% query for networks with a 80%-5% hot-spot
By analyzing the difference between KDDCS and DIM, we can see that the number of dropped events in the former is around 40% to 60% of that in the latter. This can be explained by the fact that KDDCS achieves better load balancing of storage among the sensors, which decreases the number of sensors reaching their maximum storage compared to the pure DIM. This directly results in decreasing the total number of dropped events and achieving better data persistence.
Another important remark based on the two figures is that decreasing the size of the hot-spot, by making the same number of events fall into a smaller attribute range, does not greatly affect the overall performance of KDDCS compared to that of DIM.
R2. Quality of Data: Figures 10 and 11 show the average query sizes of 50% and 80% of the attribute ranges for networks with 80%-10% and 80%-5% hot-spots, respectively. It is clear that KDDCS remarkably improves the QoD provided by the sensor network. This is mainly due to dropping less information (as pointed out in R1), thus increasing the number of events returned by each query. The gap between DIM and KDDCS, in terms of resulting query sizes, is large in both graphs, which indicates that KDDCS outperforms DIM for different storage hot-spot sizes.
This result has an important implication for the data accuracy of the sensor readings output from a network experiencing a hot-spot. The success of KDDCS in avoiding hot-spots improves the network's ability to keep a higher portion of the hot-spot data. This improves the correctness of any aggregate functions on the network readings, for example, an average of the temperature or pressure values where a high percentage of the data falls within a small range of the total attributes' range. We consider this a good achievement compared to the pure DIM scheme.
R3. Load Balancing: Figures 12 and 13 show the average node
storage level for networks with 70%-10% and 60%-5% hot-spots,
respectively.
Figure 12: Average node storage level for networks with a 70%-10% hot-spot (numbers rounded to ceiling integer)
Figure 13: Average node storage level for networks with a 60%-5% hot-spot (numbers rounded to ceiling integer)
By a node storage level, we mean the number of
events stored in the node's cache. The figures show that KDDCS
has a higher average storage level than DIM, especially for less
skewed hot-spots. This can be interpreted as follows. When a storage
hot-spot arises in DIM, the hot-spot load is directed to a small
number of sensors. These nodes rapidly reach their storage maximum
, while almost all other sensor nodes are nearly empty. Therefore, the load distribution is highly skewed among nodes, leading to a low average storage level value. However, in KDDCS, the number of nodes effectively storing events increases. Subsequently, the average storage load value increases. This reflects the better storage balancing in the network. It is worth mentioning that each of the values in both figures is rounded to the ceiling integer. Thus, in both cases, the average in DIM does not exceed one event per sensor for all network sizes.
R4. Energy Consumption Balancing: Figures 14 and 15 show
the average node energy level at the end of the simulation for networks
with 70%-10% and 50%-5% hot-spots, respectively. The
figures show that this average generally decreases with the increase
of the network size for both schemes.
Figure 14: Average sensors' energy levels for networks with a 70%-10% hot-spot
Figure 15: Average sensors' energy levels for networks with a 50%-5% hot-spot
Figure 16: Number of event movements for networks with a x%-10% hot-spot
The interesting result that
these figures show is that both KDDCS and DIM result in fairly
close average energy consumption among the sensors. However,
as we mentioned in R3 and based on the way DIM works, most of
the energy consumed in DIM is effectively consumed by a small
number of nodes, namely those falling in the hot-spot region. On
the other hand, the number of nodes consuming energy increases in
KDDCS due to the better load balancing KDDCS achieves, while
the average energy consumed by each sensor node decreases. Thus, although the overall energy consumption is the same in both KDDCS and DIM, this result is a positive one in terms of increasing the overall network lifetime, as well as avoiding the early death of sensor nodes, which in turn helps avoid network partitioning.
R5. Event Movements: Figures 16 and 17 show the number of migrated events for networks with x%-10% and x%-5% hot-spots, respectively, where x varies from 40 to 80. For both sets of hot-spot sizes, the number of event movements linearly increases with the network size.
Figure 17: Number of event movements for networks with a x%-5% hot-spot
The important result to be noted in both figures is that the total number of movements is not highly dependent on the hot-spot size. This is mainly because KDDCS avoids storage hot-spots in their early stages instead of waiting for a large storage hot-spot to be formed and then trying to decompose it. Therefore, most of the event movements are done at the start of the formation of the storage hot-spot. Consequently, for highly skewed data distributions (i.e., large hot-spot sizes), the number of event movements does not change much with the exact storage hot-spot size.
RELATED WORK
Many approaches have been presented in the literature defining how to store the data generated by a sensor network. In the early days of sensor network research, the main storage approach consisted of sending all the data to be stored in base stations, which lie within, or outside, the network. However, this approach may be more appropriate for answering continuous queries, which are queries running on servers and mostly processing events generated by all the sensor nodes over a large period of time [4, 10, 18, 14, 12, 11].
In order to improve the lifetime of the sensor network, as well
as the QoD of ad-hoc queries, INS techniques have been proposed.
All INS schemes presented so far were based on the idea of DCS
[15]. These INDCS schemes differ from each other based on the
events-to-sensors mapping used. The mapping was done using hash
tables in DHT [15] and GHT [13], or using K-D trees in DIM [9].
The formation of storage hot-spots due to irregularity, in terms of sensor deployment or event distribution, represents a vital issue in current INDCS techniques [5]. Some possible solutions for irregular sensor deployments were highlighted by [5], such as routing based on virtual coordinates, or using heuristics to locally adapt to irregular sensor densities. Recently, some load balancing heuristics for the irregular event distribution problem were presented in [2, 8]. Such techniques were limited in their capability to deal with storage hot-spots of large sizes, as they basically act as storage hot-spot detection and decomposition schemes, rather than storage hot-spot avoidance schemes like KDDCS. To the best of our knowledge, no techniques have been provided to cope with both types of irregularities at the same time. A complementary work to our paper is that on exploiting similarities in processing queries issued by neighboring sensors in a DCS scheme [16].
Query hot-spots are another important problem that is orthogonal to the storage hot-spots problem. The problem arises when a large percentage of queries ask for data stored in a few sensors. We identified the problem in an earlier paper [1] and presented two algorithms, Zone Partitioning (ZP) and Zone Partial Replication (ZPR), to locally detect and decompose query hot-spots in DIM. We believe that KDDCS is able to cope with query hot-spots provided minor changes are added to the scheme. We aim to address this problem in the KDDCS testbed that we plan to develop.
Recently, Krishnamurthy et al. [7] presented a novel DCS scheme, called RESTORE, that is characterized by real-time event correlation. It would be interesting to compare the performance of both KDDCS and RESTORE in terms of load balancing.
CONCLUSIONS
Sensor databases are becoming embedded in every aspect of our life, from merchandise tracking and healthcare to disaster response. In the particular application of disaster management, it has been argued that it is more energy efficient to store the sensed data locally in the sensor nodes rather than shipping it out of the network, even if out-of-network storage is available.
The formation of Storage Hot-Spots is a major problem with the
current INDCS techniques in sensor networks. In this paper, we
presented a new load-balanced INDCS scheme, namely KDDCS,
that avoids the formation of storage hot-spots arising in the sensor
network due to irregular sensor deployment and/or irregular events
distribution. Further, we proposed a new routing algorithm called
Logical Stateless Routing, for routing events from the generating
sensors to the storage sensors, that is competitive with the popular
GPSR routing. Our experimental evaluation has confirmed that our
proposed KDDCS both increases the quality of data and the energy
savings by distributing events of the storage hot-spots among
a larger number of sensors.
Acknowledgments
We would like to thank Mohamed Sharaf for his useful feedback.
We would also like to thank the anonymous referees for their helpful
comments.
REFERENCES
[1] M. Aly, P. K. Chrysanthis, and K. Pruhs. Decomposing data-centric
storage query hot-spots in sensor networks. In Proc. of
MOBIQUITOUS, 2006.
[2] M. Aly, N. Morsillo, P. K. Chrysanthis, and K. Pruhs. Zone Sharing:
A hot-spots decomposition scheme for data-centric storage in sensor
networks. In Proc. of DMSN, 2005.
[3] J. L. Bentley. Multidimensional binary search trees used for
associative searching. In CACM, 18(9), 1975.
[4] P. Bonnet, J. Gehrke, and P. Seshadri. Towards sensor database
systems. In Proc. of MDM, 2001.
[5] D. Ganesan, S. Ratnasamy, H. Wang, and D. Estrin. Coping with
irregular spatio-temporal sampling in sensor networks. In Proc. of
HotNets-II, 2003.
[6] B. Karp and H. T. Kung. GPSR: Greedy perimeter stateless routing
for wireless sensor networks. In Proc. of ACM Mobicom, 2000.
[7] S. Krishnamurthy, T. He, G. Zhou, J. A. Stankovic, and S. H. Son.
Restore: A real-time event correlation and storage service for sensor
networks. In Proc. of INSS, 2006.
[8] X. Li, F. Bian, R. Govindan, and W. Hong. Rebalancing distributed
data storage in sensor networks. Technical Report No. 05-852, CSD,
USC, 2005.
[9] X. Li, Y. J. Kim, R. Govindan, and W. Hong. Multi-dimensional range
queries in sensor networks. In Proc. of ACM SenSys, 2003.
[10] S. Madden, M. Franklin, J. Hellerstein, and W. Hong. TAG: a tiny
aggregation service for ad-hoc sensor networks. In Proc. of OSDI,
2002.
[11] S.-J. Park, R. Vedantham, R. Sivakumar, and I. F. Akyildiz. A
scalable approach for reliable downstream data delivery in wireless
sensor networks. In Proc. of MobiHoc, 2004.
[12] T. Pham, E. J. Kim, and W. M. Moh. On data aggregation quality and
energy efficiency of wireless sensor network protocols. In Proc. of
BROADNETS, 2004.
[13] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govindan, and
S. Shenker. GHT: A geographic hash table for data-centric storage. In
Proc. of WSNA, 2002.
[14] M. A. Sharaf, J. Beaver, A. Labrinidis, and P. K. Chrysanthis. TiNA:
A scheme for temporal coherency-aware in-network aggregation. In
Proc. of MobiDE, 2003.
[15] S. Shenker, S. Ratnasamy, B. Karp, R. Govindan, and D. Estrin.
Data-centric storage in sensornets. In Proc. of HotNets-I, 2002.
[16] P. Xia, P. K. Chrysanthis, and A. Labrinidis. Similarity-aware query
processing in sensor networks. In Proc. of WPDRTS, 2006.
[17] T. Yan, T. He, and J. A. Stankovic. Differentiated surveillance for
sensor networks. In Proc. of SenSys, 2003.
[18] Y. Yao and J. Gehrke. Query processing for sensor networks. In Proc.
of CIDR, 2003.
| quality of data (QoD);KDDCS;routing algorithm;Power-Aware;energy saving;Sensor Network;sensor network;Distributed Algorithms;weighted split median problem;DIM;data persistence;storage hot-spots;ad-hoc queries |
121 | Language-specific Models in Multilingual Topic Tracking | Topic tracking is complicated when the stories in the stream occur in multiple languages. Typically, researchers have trained only English topic models because the training stories have been provided in English. In tracking, non-English test stories are then machine translated into English to compare them with the topic models. We propose a native language hypothesis stating that comparisons would be more effective in the original language of the story. We first test and support the hypothesis for story link detection. For topic tracking the hypothesis implies that it should be preferable to build separate language-specific topic models for each language in the stream. We compare different methods of incrementally building such native language topic models. | INTRODUCTION
Topic detection and tracking (TDT) is a research area concerned
with organizing a multilingual stream of news broadcasts as it arrives
over time. TDT investigations sponsored by the U.S. government
include five different tasks: story link detection, clustering
(topic detection), topic tracking, new event (first story) detection
, and story segmentation. The present research focuses on
topic tracking, which is similar to filtering in information retrieval
. Topics are defined by a small number of (training) stories,
typically one to four, and the task is to find all the stories on those
topics in the incoming stream.
TDT evaluations have included stories in multiple languages since
1999. TDT2 contained stories in English and Mandarin. TDT3
and TDT4 included English, Mandarin, and Arabic. Machine-translations
into English for all non-English stories were provided
, allowing participants to ignore issues of story translation.
All TDT tasks have at their core a comparison of two text models.
In story link detection, the simplest case, the comparison is between
pairs of stories, to decide whether given pairs of stories are
on the same topic or not. In topic tracking, the comparison is between
a story and a topic, which is often represented as a centroid
of story vectors, or as a language model covering several stories.
Our focus in this research was to explore the best ways to compare
stories and topics when stories are in multiple languages. We
began with the hypothesis that if two stories originated in the
same language, it would be best to compare them in that language,
rather than translating them both into another language for comparison
. This simple assertion, which we call the native language
hypothesis, is easily tested in the TDT story link detection task.
The picture gets more complex in a task like topic tracking, which
begins with a small number of training stories (in English) to define
each topic. New stories from a stream must be placed into
these topics. The streamed stories originate in different languages,
but are also available in English translation. The translations have
been performed automatically by machine translation algorithms,
and are inferior to manual translations. At the beginning of the
stream, native language comparisons cannot be performed because
there are no native language topic models (other than English
). However, later in the stream, once non-English documents
have been seen, one can base subsequent tracking on native-language
comparisons, by adaptively training models for additional
languages. There are many ways this adaptation could be performed
, and we suspect that it is crucial for the first few non-English
stories to be placed into topics correctly, to avoid building
non-English models from off-topic stories.
Previous research in multilingual TDT has not attempted to compare
the building of multiple language-specific models with single
-language topic models, or to obtain native-language models
through adaptation. The focus of most multilingual work in TDT, for example [2][12][13], has been to compare the efficacy of machine translation of test stories into a base language, with other
means of translation. Although these researchers normalize scores
for the source language, all story comparisons are done within the
base language. This is also true in multilingual filtering, which is
a similar task
[14].
The present research is an exploration of the native language hypothesis
for multilingual topic tracking. We first present results on
story link detection, to support the native language hypothesis in a
simple, understandable task. Then we present experiments that
test the hypothesis in the topic tracking task. Finally we consider
several different ways to adapt topic models to allow native language
comparisons downstream.
Although these experiments were carried out in service of TDT,
the results should equally apply to other domains which require
the comparison of documents in different languages, particularly
filtering, text classification and clustering.
EXPERIMENTAL SETUP
Experiments are replicated with two different data sets, TDT3 and
TDT4, and two very different similarity functions - cosine similarity
, and another based on relevance modeling, described in the
following two sections. Cosine similarity can be seen as a basic
default approach, which performs adequately, and relevance modeling
is a state of the art approach which yields top-rated performance
. Confirming the native-language hypothesis in both systems
would show its generality.
In the rest of this section, we describe the TDT data sets, then we
describe how story link detection and topic tracking are carried
out in cosine similarity and relevance modeling systems. Next, we
describe the multilingual aspects of the systems.
2.1
TDT3 Data
TDT data consist of a stream of news in multiple languages and
from different media - audio from television, radio, and web news
broadcasts, and text from newswires. Two forms of transcription
are available for the audio stream. The first form comes from
automatic speech recognition and includes transcription errors
made by such systems. The second form is a manual transcription,
which has few if any errors. The audio stream can also be divided
into stories automatically or manually (so-called reference
boundaries). For all the research reported here, we used manual
transcriptions and reference boundaries.
The characteristics of the TDT3 data sets for story link detection
and topic tracking are summarized in Tables 1-3.
Table 1: Number of stories in TDT3 Corpus
          English   Arabic   Mandarin   Total
TDT3      37,526    15,928   13,657     67,111
Table 2: Characteristics of TDT3 story link detection data sets
Number of topics: 8
Number of link pairs     Same topic   Different topic
English-English          605          3999
Arabic-Arabic            669          3998
Mandarin-Mandarin        440          4000
English-Arabic           676          4000
English-Mandarin         569          4000
Arabic-Mandarin          583          3998
Total                    3542         23,995
Table 3: Characteristics of TDT3 topic tracking data sets
                        N_t = 2                  N_t = 4
Number of topics        36                       30
Num. test stories       On-topic    All          On-topic    All
English                 2042        883,887      2042        796,373
Arabic                  572         372,889      572         336,563
Mandarin                405         329,481      369         301,568
Total                   3019        1,593,782    2983        1,434,504
2.2
Story Representation and Similarity
2.2.1
Cosine similarity
To compare two stories for link detection, or a story with a topic
model for tracking, each story is represented as a vector of terms
with tfidf term weights:
a_i = tf · log((N + 0.5) / df) / log(N + 1)    (1)
where tf is the number of occurrences of the term in the story, N is
the total number of documents in the collection, and df is the
number of documents containing the term. Collection statistics N
and df are computed incrementally, based on the documents already
in the stream within a deferral period after the test story
arrives. The deferral period was 10 for link detection and 1 for
topic tracking. For link detection, story vectors were pruned to the
1000 terms with the highest term weights.
The similarity of two (weighted, pruned) vectors a = (a_1, ..., a_n) and b = (b_1, ..., b_m) is the normalized inner product between the two vectors:

Sim_cos(a, b) = ( Σ_i a_i b_i ) / sqrt( (Σ_i a_i²) (Σ_i b_i²) )    (2)
If the similarity of two stories exceeds a yes/no threshold, the
stories are considered to be about the same topic.
For topic tracking, a topic model is a centroid, an average of the
vectors for the N_t training stories. Topic models are pruned to 100
terms based on the term weights. Story vectors pruned to 100
terms are compared to centroids using equation (2). If the similarity
exceeds a yes/no threshold, the story is considered on-topic.
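For concreteness, here is a small sketch of the cosine-based tracking comparison, using the tf-idf weighting of equation (1) as reconstructed above and the centroid topic model; the helper names and the toy collection statistics are ours.

```python
import math
from collections import Counter

def tfidf_vector(tokens, N, df):
    """Weight each term by tf * log((N + 0.5) / df) / log(N + 1), per equation (1)."""
    tf = Counter(tokens)
    return {w: c * math.log((N + 0.5) / df.get(w, 1)) / math.log(N + 1)
            for w, c in tf.items()}

def cosine(u, v):
    """Normalized inner product of two sparse vectors, per equation (2)."""
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def centroid(vectors, top_terms=100):
    """Average the training-story vectors and keep the 100 highest-weighted terms."""
    total = Counter()
    for v in vectors:
        total.update(v)
    avg = {w: s / len(vectors) for w, s in total.items()}
    return dict(sorted(avg.items(), key=lambda kv: -kv[1])[:top_terms])

# Toy usage: two training stories define a topic; a test story is scored against it
# and declared on-topic if the score exceeds the yes/no threshold.
N, df = 1000, {"election": 50, "vote": 80, "storm": 40, "flood": 30}
topic = centroid([tfidf_vector("election vote vote".split(), N, df),
                  tfidf_vector("election ballot vote".split(), N, df)])
story = tfidf_vector("vote election results".split(), N, df)
print(round(cosine(topic, story), 3))
```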
2.2.2
Relevance modeling
Relevance modeling is a statistical technique for estimating language models from extremely small samples, such as queries [9]. If Q is a small sample of text, and C is a large collection of documents, the language model for Q is estimated as:

P(w|Q) = Σ_{d∈C} P(w|M_d) P(M_d|Q)    (3)
A relevance model, then, is a mixture of the language models M_d of every document d in the collection, where the document models are weighted by the posterior probability of producing the query, P(M_d|Q). The posterior probability is computed as:

P(M_d|Q) = P(d) Π_{q∈Q} P(q|M_d) / Σ_{d'∈C} P(d') Π_{q∈Q} P(q|M_{d'})    (4)
Equation (4) assigns the highest weights to documents that are
most likely to have generated Q, and can be interpreted as nearest-neighbor
smoothing, or a massive query expansion technique.
To apply relevance modeling to story link detection, we estimate
the similarity between two stories A and B by pruning the stories
to short queries, estimating relevance models for the queries, and
measuring the similarity between the two relevance models. Each
story is replaced by a query consisting of the ten words in the story with the lowest probability of occurring by chance in randomly drawing |A| words from the collection C:
P_chance(w) = ( |A| choose A_w ) (C_w / |C|)^{A_w} (1 − C_w / |C|)^{|A| − A_w}    (5)

where |A| is the length of the story A, A_w is the number of times word w occurs in A, |C| is the size of the collection, and C_w is the number of times word w occurs in C.
Story relevance models are estimated using equation (4). Similarity between relevance models is measured using the symmetrized clarity-adjusted divergence [11]:

Sim_RM(A, B) = Σ_w P(w|Q_A) log( P(w|Q_B) / P(w|GE) ) + Σ_w P(w|Q_B) log( P(w|Q_A) / P(w|GE) )    (6)

where P(w|Q_A) is the relevance model estimated for story A, and
P(w|GE) is the background (General English, Arabic, or Mandarin
) probability of w, computed from the entire collection of stories
in the language within the same deferral period used for cosine
similarity.
To apply relevance modeling to topic tracking, the asymmetric clarity-adjusted divergence is used:

Sim_track(T, S) = Σ_w P(w|T) log( P(w|S) / P(w|GE) )    (7)
where P(w|T) is a relevance model of the topic T. Because of
computational constraints, smoothed maximum likelihood estimates
rather than relevance models are used for the story model
P(w|S). The topic model, based on Equation (3), is:
P(w|T) = (1/|S_t|) Σ_{d∈S_t} P(w|M_d)    (8)

where S_t is the set of training stories. The topic model is pruned to
100 terms. More detail about applying relevance models to TDT
can be found in
[2].
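The tracking score of equation (7) with the topic model of equation (8) can be sketched as follows. For simplicity the document models are plain maximum-likelihood models and the story model is smoothed with the background model using an arbitrary mixing weight; the real system estimates these models differently, so this is only an illustrative approximation.

```python
import math
from collections import Counter

def lm(tokens):
    """Maximum-likelihood unigram model of a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def smooth(model, background, lam=0.8):
    """Mix a story model with the background model P(w|GE) (lambda is illustrative)."""
    vocab = set(model) | set(background)
    return {w: lam * model.get(w, 0.0) + (1 - lam) * background.get(w, 1e-9)
            for w in vocab}

def topic_model(training_stories):
    """Equation (8): average of the document models of the training stories."""
    models = [lm(s) for s in training_stories]
    vocab = set().union(*models)
    return {w: sum(m.get(w, 0.0) for m in models) / len(models) for w in vocab}

def track_score(topic, story_tokens, background):
    """Equation (7): sum_w P(w|T) * log( P(w|S) / P(w|GE) )."""
    story = smooth(lm(story_tokens), background)
    return sum(p * math.log(story.get(w, 1e-9) / background.get(w, 1e-9))
               for w, p in topic.items())

background = lm("the flood damaged the coastal town after the storm surge".split())
topic = topic_model(["storm surge floods town".split(), "coastal flood damage".split()])
print(round(track_score(topic, "the storm caused a flood".split(), background), 3))
```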
2.3
Evaluation
TDT tasks are evaluated as detection tasks. For each test trial, the
system attempts to make a yes/no decision. In story link detection,
the decision is whether the two members of a story pair belong to
the same topic. In topic tracking, the decision is whether a story in
the stream belongs to a particular topic. In all tasks, performance
is summarized in two ways: a detection cost function (C_Det) and a decision error tradeoff (DET) curve. Both are based on the rates of two kinds of errors a detection system can make: misses, in which the system gives a no answer where the correct answer is yes, and false alarms, in which the system gives a yes answer where the correct answer is no.
The DET curve plots the miss rate (P_Miss) as a function of the false alarm rate (P_Fa), as the yes/no decision threshold is swept through its range. P_Miss and P_Fa are computed for each topic, and then
averaged across topics to yield topic-weighted curves. An example
can be seen in Figure 1 below. Better performance is indicated by
curves more to the lower left of the graph.
The detection cost function is computed for a particular threshold
as follows:
C_Det = C_Miss · P_Miss · P_Target + C_Fa · P_Fa · (1 − P_Target)    (9)

where:
P_Miss = #Misses / #Targets
P_Fa = #False Alarms / #NonTargets

C_Miss and C_Fa are the costs of a missed detection and a false alarm, respectively, and are specified for the application, usually at 10 and 1, penalizing misses more than false alarms. P_Target is the a priori probability of finding a target, an item where the answer should be yes, set by convention to 0.02.
The cost function is normalized:

(C_Det)_Norm = C_Det / min(C_Miss · P_Target, C_Fa · (1 − P_Target))    (10)
and averaged over topics. Each point along the detection error tradeoff curve has a value of (C_Det)_Norm. The minimum value found on the curve is known as the min(C_Det)_Norm. It can be interpreted as the value of (C_Det)_Norm at the best possible threshold. This measure allows us to separate performance on the task from the choice of yes/no threshold. Lower cost scores indicate better performance. More information about these measures can be found in [5].
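A compact sketch of the cost computation of equations (9)-(10), including the sweep over thresholds that yields min(C_Det)_Norm for a single topic, is given below; the scores and labels are made up for illustration.

```python
C_MISS, C_FA, P_TARGET = 10.0, 1.0, 0.02

def norm_cost(p_miss, p_fa):
    """Equations (9) and (10) for one threshold setting."""
    c_det = C_MISS * p_miss * P_TARGET + C_FA * p_fa * (1 - P_TARGET)
    return c_det / min(C_MISS * P_TARGET, C_FA * (1 - P_TARGET))

def min_norm_cost(scores, labels):
    """Sweep the yes/no threshold over the observed scores (labels: True = target)."""
    targets = sum(labels)
    non_targets = len(labels) - targets
    best = float("inf")
    for threshold in sorted(set(scores)):
        misses = sum(1 for s, on in zip(scores, labels) if on and s < threshold)
        false_alarms = sum(1 for s, on in zip(scores, labels) if not on and s >= threshold)
        best = min(best, norm_cost(misses / targets, false_alarms / non_targets))
    return best

scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2]   # one topic's system scores
labels = [True, True, False, True, False, False, False]
print(round(min_norm_cost(scores, labels), 4))
```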
2.4
Language-specific Comparisons
English stories were lower-cased and stemmed using the kstem
stemmer
[6]. Stop words were removed. For native Arabic
comparisons, stories were converted from Unicode UTF-8 to Windows (CP1256) encoding, then normalized and stemmed with a
light stemmer
[7]. Stop words were removed. For native Mandarin
comparisons, overlapping character bigrams were compared.
STORY LINK DETECTION
In this section we present experimental results for story link detection
, comparing a native condition with an English baseline. In
the English baseline, all comparisons are in English, using machine
translation (MT) for Arabic and Mandarin stories. Corpus
statistics are computed incrementally for all the English and
translated-into-English stories. In the Native condition, two stories
originating in the same language are compared in that language
. Corpus statistics are computed incrementally for the stories
in the language of the comparison. Cross language pairs in the
native condition are compared in English using MT, as in the
baseline.
Figure 1: DET curve for TDT3 link detection based on English
versions of stories, or native language versions, for cosine and
relevance model similarity
Table 4: Min(C_det)_Norm for TDT3 story link detection
Similarity        English   Native
Cosine            .3440     .2586
Relevance Model   .2625     .1900
Figure 1 shows the DET curves for the TDT3 story link detection
task, and Table 4 shows the minimum cost. The figure and table
show that native language comparisons (dotted) consistently outperform
comparisons based on machine-translated English (solid).
This difference holds both for the basic cosine similarity system
(first row) (black curves), and for the relevance modeling system
(second row) (gray curves). These results support the general
conclusion that when two stories originate in the same language, it
is better to carry out similarity comparisons in that language,
rather than translating them into a different language.
TOPIC TRACKING
In tracking, the system decides whether stories in a stream belong
to predefined topics. Similarity is measured between a topic
model and a story, rather than between two stories. The native
language hypothesis for tracking predicts better performance if
incoming stories are compared in their original language with
topic models in that language, and worse performance if translated
stories are compared with English topic models.
The hypothesis can only be tested indirectly, because Arabic and
Mandarin training stories were not available for all tracking topics
. In this first set of experiments, we chose to obtain native language
training stories from the stream of test stories using topic
adaptation, that is, gradual modification of topic models to incorporate
test stories that fit the topic particularly well.
Adaptation begins with the topic tracking scenario described above in section 2.2, using a single model per topic based on a small set of training stories in English. Each time a story is compared to a topic model to determine whether it should be classed as on-topic, it is also compared to a fixed adaptation threshold θ_ad = 0.5 (not to be confused with the yes/no threshold mentioned in section 2.2.1). If the similarity score is greater than θ_ad, the story is added to the topic set, and the topic model is recomputed. For clarity, we use the phrase topic set to refer to the set of stories from which the topic model is built, which grows under adaptation. The training set includes only the original N_t training stories for each topic. For cosine similarity, adaptation consists of computing a new centroid for the topic set and pruning to 100 terms. For relevance modeling, a new topic model is computed according to Equation (8). At most 100 stories are placed in each topic set.
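The adaptation loop for the cosine case can be sketched as follows; the vectors are plain term-weight dictionaries and the helper names are ours, not the system's.

```python
import math

THETA_AD = 0.5   # the fixed adaptation threshold from the text

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values()) * sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def centroid(vectors, top_terms=100):
    acc = {}
    for v in vectors:
        for w, x in v.items():
            acc[w] = acc.get(w, 0.0) + x / len(vectors)
    return dict(sorted(acc.items(), key=lambda kv: -kv[1])[:top_terms])

def adapt(topic_set, topic_model, story_vec, max_stories=100):
    """Add the story to the topic set and recompute the centroid whenever its
    similarity to the current model exceeds THETA_AD; cap the set at 100 stories."""
    if len(topic_set) < max_stories and cosine(topic_model, story_vec) > THETA_AD:
        topic_set.append(story_vec)
        topic_model = centroid(topic_set)
    return topic_set, topic_model

# Toy run: the first streamed story is absorbed into the topic, the off-topic one is not.
topic_set = [{"election": 1.0, "vote": 0.8}, {"election": 0.9, "ballot": 0.5}]
model = centroid(topic_set)
for story in [{"vote": 1.0, "election": 0.7}, {"storm": 1.0, "flood": 0.9}]:
    topic_set, model = adapt(topic_set, model, story)
print(sorted(model))
```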
We have just described global adaptation, in which stories are added to global topic models in English. Stories that originated in Arabic or Mandarin are compared and added in their machine-translated version.
Native adaptation differs from global adaptation in making separate topic models for each source language. To decide whether a test story should be added to a native topic set, the test story is compared in its native language with the native model, and added to the native topic set for that language if its similarity score exceeds θ_ad. The English version of the story is also compared to the global topic model, and if its similarity score exceeds θ_ad, it is added to the global topic set. (Global models continue to adapt for other languages which may not yet have a native model, or for smoothing, discussed later.)
At the start there are global topic models and native English topic models based on the training stories, but no native Arabic or Mandarin topic models. When there is not yet a native topic model in the story's original language, the translated story is compared to the global topic model. If the similarity exceeds θ_ad, the native topic model is initialized with the untranslated story. Yes/no decisions for topic tracking can then be based on the untranslated story's similarity to the native topic model if one exists. If there is no native topic model yet for that language and topic, the translated story is compared to the global topic model.
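The decision flow for a streamed story under native adaptation might look like the following sketch, where a model is simply the list of stories it was built from and the similarity function is a stand-in for the cosine or relevance-model score; the field and function names are ours.

```python
THETA_AD = 0.5

def process_story(story, topic, similarity):
    """Score a streamed story against the appropriate topic model and adapt.
    topic: {"global": model, "native": {language: model}}; a model here is simply the
    list of story texts it was built from (a stand-in for a centroid or relevance model).
    story: {"lang": ..., "native_text": ..., "english_text": ...}."""
    lang = story["lang"]
    native_model = topic["native"].get(lang)

    if native_model is not None:                       # compare in the original language
        score = similarity(native_model, story["native_text"])
        if score > THETA_AD:
            native_model.append(story["native_text"])
    else:                                              # no native model yet: use English
        score = similarity(topic["global"], story["english_text"])
        if score > THETA_AD:                           # first accepted story seeds the
            topic["native"][lang] = [story["native_text"]]   # native model for this language

    # The global (English) model keeps adapting in parallel.
    if similarity(topic["global"], story["english_text"]) > THETA_AD:
        topic["global"].append(story["english_text"])
    return score

def overlap(model, text):
    """Toy similarity: fraction of the story's words seen in the model's stories."""
    words = set(text.split())
    pool = set(w for s in model for w in s.split())
    return len(words & pool) / len(words) if words else 0.0

topic = {"global": ["election vote results"], "native": {"english": ["election vote results"]}}
story = {"lang": "mandarin", "native_text": "xuanju toupiao jieguo",
         "english_text": "election vote outcome"}
print(round(process_story(story, topic, overlap), 2), sorted(topic["native"]))
```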
We have described three experimental conditions: global adapted,
native adapted, and a baseline. The baseline, described in Section
2.2, can also be called global unadapted. The baseline uses a
single English model per topic based on the small set of training
stories. A fourth possible condition, native unadapted is problematic
and not included here. There is no straightforward way to
initialize native language topic models without adaptation when
training stories are provided only in English.
Figure 2: DET curves for TDT3 tracking, cosine similarity
(above) and relevance models (below), Nt=4 training stories,
global unadapted baseline, global adapted, and native adapted
Table 5: Min(C_det)_Norm for TDT3 topic tracking.

                     Nt=2                             Nt=4
          Baseline   Adapted    Adapted    Baseline   Adapted    Adapted
                     Global     Native                Global     Native
Cosine    .1501      .1197      .1340      .1238      .1074      .1028
RM        .1283      .0892      .0966      .1060      .0818      .0934
The TDT3 tracking results on three conditions, replicated with the
two different similarity measures (cosine similarity and relevance
modeling) and two different training set sizes (Nt=2 and 4) can be
seen in Table 5. DET curves for Nt=4 are shown in Figure 2, for
cosine similarity (above) and relevance modeling (RM) (below).
Table 5 shows a robust adaptation effect for cosine and relevance
model experiments, and for 2 or 4 training stories. Native and
global adaptation are always better (lower cost) than baseline
unadapted tracking. In addition, relevance modeling produces
better results than cosine similarity. However, results do not show
the predicted advantage for native adapted topic models over
global adapted topic models. Only cosine similarity, Nt=4, seems
to show the expected difference (shaded cells), but the difference
is very small. The DET curve in Figure 2 shows no sign of a native
language effect.
Table 6 shows minimum cost figures computed separately for
English, Mandarin, and Arabic test sets. Only English shows a
pattern similar to the composite results of Table 5 (see the shaded
cells). For cosine similarity, there is not much difference between
global and native English topic models. For relevance modeling,
native English topic models are slightly worse than global models.
Arabic and Mandarin appear to show a native language advantage
for all cosine similarity conditions and most relevance
model conditions. However, DET curves comparing global and
native adapted models separately for English, Arabic, and Mandarin
(Figure 3) show no real native language advantage.
Table 6: Min(C_det)_Norm for TDT3 topic tracking; breakdown by
original story language

                     Nt=2                             Nt=4
          Baseline   Adapted    Adapted    Baseline   Adapted    Adapted
                     Global     Native                Global     Native
English
Cosine    .1177      .0930      .0977      .0903      .0736      .0713
RM        .1006      .0681      .0754      .0737      .0573      .0628
Arabic
Cosine    .2023      .1654      .1486      .1794      .1558      .1348
RM        .1884      .1356      .1404      .1581      .1206      .1377
Mandarin
Cosine    .2156      .1794      .1714      .1657      .1557      .1422
RM        .1829      .1272      .0991      .1286      .0935      .0847
Figure 3: DET curves for TDT3 tracking, cosine similarity,
Nt=4 training stories, global adapted vs. native adapted
breakdown for English, Arabic, and Mandarin
In trying to account for the discrepancy between the findings on
link detection and tracking, we suspected that the root of the
problem was the quality of native models for Arabic and Mandarin.
For English, adaptation began with 2 or 4 on-topic models.
However, Mandarin and Arabic models did not begin with on-topic
stories; they could begin with off-topic models, which
should hurt tracking performance. A related issue is data sparseness.
When a native topic model is first formed, it is based on one
story, which is a poorer basis for tracking than Nt stories. In the
next three sections we pursue different aspects of these suspicions.
In section
5 we perform a best-case experiment, initializing native
topic sets with on-topic stories, and smoothing native scores with
global scores to address the sparseness problem. If these conditions
do not show a native language advantage, we would reject
the native language hypothesis. In section
6 we explore the role of
the adaptation threshold. In section
7 we compare some additional
methods of initializing native language topic models.
ON-TOPIC NATIVE CENTROIDS
In this section, we consider a best-case scenario, where we take
the first Nt stories in each language relevant to each topic to initialize
adaptation of native topic models. While this is cheating,
and not a way to obtain native training documents in a realistic
tracking scenario, it demonstrates what performance can be attained
if native training documents are available. More realistic
approaches to adapting native topic models are considered in
subsequent sections.
The baseline and global adapted conditions were carried out as in
Section
4, and the native adapted condition was similar except in
the way adaptation of native topics began. If there were not yet Nt
native stories in the topic set for the current test story in its native
language, the story was added to the topic set if it was relevant.
Once a native topic model had Nt stories, we switched to the usual
non-cheating mode of adaptation, based on similarity score and
adaptation threshold.
To address the data sparseness problem, we also smoothed the
native similarity scores with the global similarity scores:
Sim_smooth(T, S) = λ · Sim_native(T, S) + (1 - λ) · Sim_global(T, S)    (11)
The parameter λ was not tuned, but set to a fixed value of 0.5.
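In code, Equation (11) is just a linear interpolation of the two scores (the
mixing weight, written lam here, is the fixed 0.5 value mentioned above).

    def smooth_score(native_score, global_score, lam=0.5):
        """Equation (11): interpolate the native and global similarity scores."""
        return lam * native_score + (1.0 - lam) * global_score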
The results can be seen in Table 7. Shaded cell pairs indicate confirmation
of the native language hypothesis, where language-specific
topic models outperform global models.
Table 7: Min(C_det)_Norm for TDT3 topic tracking, using Nt on-topic
native training stories and smoothing native scores

                     Nt=2                             Nt=4
          Baseline   Adapted    Adapted    Baseline   Adapted    Adapted
                     Global     Native                Global     Native
Cosine    .1501      .1197      .0932      .1238      .1074      .0758
Rel.      .1283      .0892      .0702      .1060      .0818      .0611
Figure 4: DET curve for TDT3 tracking, initializing native
adaptation with relevant training stories during adaptation,
cosine similarity, Nt=4
Figure 4 shows the DET curves for the cosine, Nt=4 case. When the
native models are initialized with on-topic stories, the advantage
to native models is clearly seen in the tracking performance.
Figure 5: DET curve for TDT3 tracking initializing native
adaptation with relevant training stories during adaptation
and smoothing, vs. global adaptation, cosine similarity, Nt=4,
separate analyses for English, Arabic, and Mandarin.
DET curves showing results computed separately for the three
languages can be seen in Figure 5, for the cosine, Nt=4 case. It can
be clearly seen that English tracking remains about the same but
the Arabic and Mandarin native tracking show a large native language
advantage.
ADAPTATION THRESHOLD
The adaptation threshold was set to 0.5 in the experiments described
above without any tuning. The increase in global tracking
performance after adaptation shows that the value is at least acceptable.
However, an analysis of the details of native adaptation
showed that many Arabic and Mandarin topics were not adapting.
A summary of some of this analysis can be seen in Table 8.
Table 8: Number of topics receiving new stories during native
adaptation, breakdown by language

                         Total     Topics receiving more stories
Similarity    Nt         Topics    English    Arabic    Mandarin
Cosine        2          36        24         8         11
              4          30        26         7         9
Relevance     2          36        36         8         7
Model         4          30        30         8         5
Fewer than a third of the topics received adapted stories. This
means that for most topics, native tracking was based on the
global models. In order to determine whether this was due to the
adaptation threshold, we performed an experiment varying the
adaptation threshold from .3 to .65 in steps of .05. The results can
be seen in Figure 6, which shows the minimum cost,
min(C_Det)_Norm, across the range of adaptation threshold values.
Although we see that the original threshold, .5, was not always the
optimal value, it is also clear that the pattern we saw at .5 (and in
Figure 6) does not change as the threshold is varied; that is, tracking
with native topic models is not better than tracking with
global models. An improperly tuned adaptation threshold was
therefore not the reason that the native language hypothesis was
not confirmed for tracking. We suspect that different adaptation
thresholds may be needed for the different languages, but it would
be better to handle this problem by language-specific normalization
of similarity scores.
Figure 6: Effect of adaptation threshold on min(C_Det)_Norm on
TDT3 tracking with adaptation (minimum cost vs. threshold from
0.3 to 0.7, for the global and native conditions at Nt=2 and Nt=4,
with separate panels for cosine similarity and relevance models).
IMPROVING NATIVE TOPIC MODELS
In the previous two sections we showed that when native topic
models are initialized with language-specific training stories that
are truly on-topic, then topic tracking is indeed better with native
models than with global models. However, in the context of the TDT
test situation, the way we obtained our language-specific training
stories was cheating.
In this section we experiment with 2 different "legal" ways to initialize
better native language models: (1) Use both global and
native models, and smooth native similarity scores with global
similarity scores. (2) Initialize native models with dictionary or
other translations of the English training stories into the other
language.
Smoothing was carried out in the native adapted condition according
to Equation (11), setting λ = 0.5, without tuning. The comparison
with unadapted and globally adapted tracking can be seen
in Table 9. The smoothing improves the native topic model performance
relative to unsmoothed native topic models (cf. Table
5), and brings the native model performance to roughly the same
level as the global. In other words, smoothing improves performance,
but we still do not have strong support for the native language
hypothesis. This is apparent in Figure 7. Native adapted
tracking is not better than global adapted tracking.
Table 9: Min(C_det)_Norm for TDT3 topic tracking, smoothing
native scores with global scores

                     Nt=2                             Nt=4
          Baseline   Adapted    Native     Baseline   Adapted    Native
                     Global     Smooth                Global     Smooth
Cosine    .1501      .1197      .1125      .1238      .1074      .1010
RM        .1283      .0892      .0872      .1060      .0818      .0840
Figure 7: DET curve for TDT3 tracking with smoothing,
cosine similarity, Nt=4 training stories
The final method of initializing topic models for different languages
would be to translate the English training stories into the
other languages required. We did not have machine translation
from English into Arabic or Mandarin available for these experiments.
However, we have had success with dictionary translations
for Arabic. In
[2] we found that dictionary translations from Arabic
into English resulted in comparable performance to the machine
translations on tracking, and better performance on link
detection. Such translated stories would not be "native language"
training stories, but might be a better starting point for language-specific
adaptation anyway.
Training story translations into Arabic used an English/Arabic
probabilistic dictionary derived from the Linguistic Data Consortium's
UN Arabic/English parallel corpus, developed for our
cross-language information retrieval work
[7]. Each English word
has many different Arabic translations, each with a translation
probability p(a|e). The Arabic words, but not the English words,
have been stemmed according to a light stemming algorithm. To
translate an English story, English stop words were removed, and
each English word occurrence was replaced by all of its dictionary
translations, weighted by their translation probabilities. Weights
were summed across all the occurrences of each Arabic word, and
the resulting Arabic term vector was truncated to retain only terms
above a threshold weight. We translated training stories only into
Arabic, because we did not have a method to produce good quality
English to Mandarin translation.
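A sketch of this weighted dictionary translation, assuming the dictionary
maps each English word to (arabic_stem, p(a|e)) pairs; the stop-word list
and the cutoff value min_weight are placeholders, since the paper does not
give the threshold.

    from collections import Counter

    def translate_story(english_tokens, dictionary, stopwords, min_weight=0.01):
        """Replace each English word by all its Arabic translations, weighted
        by translation probability; sum weights per Arabic term and keep only
        terms above the weight threshold."""
        vec = Counter()
        for word in english_tokens:
            if word in stopwords:
                continue
            for arabic_term, p in dictionary.get(word, []):
                vec[arabic_term] += p
        return {t: w for t, w in vec.items() if w >= min_weight}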
The results for Arabic can be seen in Table 10. For translation, it
makes sense to include an unadapted native condition, labeled
translated in the table.
Table 10: Min(C_det)_Norm for Arabic TDT3 topic tracking,
initializing native topic models with dictionary-translated
training stories

Arabic, Nt=2
          Unadapted                 Adapted
          Baseline   Translated     Global    Native
Cosine    .2023      .2219          .1694     .2209
RM        .1884      .1625          .1356     .1613
Arabic, Nt=4
Cosine    .1794      .1640          .1558     .1655
RM        .1581      .1316          .1206     .1325
Figure 8: DET curve for TDT3 tracking initializing native
topics with dictionary-translated training stories, cosine
similarity, Nt=4, Arabic only
The results are mixed. First of all, this case is unusual in that
adaptation does not improve translated models. Further analysis
revealed that very little adaptation was taking place. Because of
this lack of native adaptation, global adaptation consistently outperformed
native adaptation here. However, in the unadapted
conditions, translated training stories outperformed the global
models for Arabic in three of the four cases - cosine Nt=4 and
relevance models for Nt=2 and Nt=4 (the shaded baseline-translated
pairs in Table 10). The DET curve for the cosine Nt=4 case
can be seen in Figure 8. The native unadapted curve is better
(lower) than the global unadapted curve.
The translated stories were very different from the test stories, so
their similarity scores almost always fell below the adaptation
threshold. We believe the need to normalize scores between native
stories and dictionary translations is part of the problem, but we
also need to investigate the compatibility of the dictionary translations
with the native Arabic stories.
CONCLUSIONS
We have confirmed the native language hypothesis for story link
detection. For topic tracking, the picture is more complicated.
When native language training stories are available, good native
language topic models can be built for tracking stories in their
original language. Smoothing the native models with global models
improves performance slightly. However, if training stories are
not available in the different languages, it is difficult to form native
models, by adaptation or by translation of training stories, that
perform better than the adapted global models.
Why should language-specific comparisons be more accurate than
comparisons based on machine translation? Machine translations
are not always good translations. If the translation distorts the
meaning of the original story, it is unlikely to be similar to the
topic model, particularly if proper names are incorrect, or spelled
differently in the machine translations than they are in the English
training stories, a common problem in English translations from
Mandarin or Arabic. Secondly, even if the translations are correct,
the choice of words, and hence the language models, are likely to
be different across languages. The second problem could be handled
by normalizing for source language, as in [12]. But
normalization cannot compensate for poor translation.
We were surprised that translating the training stories into Arabic
to make Arabic topic models did not improve tracking, but again,
our dictionary based translations of the topic models were different
from native Arabic stories. We intend to try the same experiment
with manual translations of the training stories into Arabic
and Mandarin. We are also planning to investigate the best way
to normalize scores for different languages. When TDT4 relevance
judgments are available we intend to replicate some of
these experiments on TDT4 data.
ACKNOWLEDGMENTS
This work was supported in part by the Center for Intelligent Information
Retrieval and in part by SPAWARSYSCEN-SD grant
number N66001-02-1-8903. Any opinions, findings and conclusions
or recommendations expressed in this material are those of the
author(s) and do not necessarily reflect those of the sponsor.
REFERENCES
[1] Allan, J. Introduction to topic detection and tracking. In Topic detection and tracking: Event-based information organization, J. Allan (ed.). Boston, MA: Kluwer Academic Publishers, 1-16, 2002.
[2] Allan, J., Bolivar, A., Connell, M., Cronen-Townsend, S., Feng, A., Feng, F., Kumaran, G., Larkey, L., Lavrenko, V., Raghavan, H. UMass TDT 2003 research summary. In Proceedings of TDT 2003 evaluation, unpublished, 2003.
[3] Chen, H.-H. and Ku, L. W. An NLP & IR approach to topic detection. In Topic detection and tracking: Event-based information organization, J. Allan (ed.). Boston, MA: Kluwer, 243-264, 2002.
[4] Chen, Y.-J. and Chen, H.-H. NLP and IR approaches to monolingual and multilingual link detection. Presented at Proceedings of the 19th International Conference on Computational Linguistics, Taipei, Taiwan, 2002.
[5] Fiscus, J. G. and Doddington, G. R. Topic detection and tracking evaluation overview. In Topic detection and tracking: Event-based information organization, J. Allan (ed.). Boston, MA: Kluwer, 17-32, 2002.
[6] Krovetz, R. Viewing morphology as an inference process. In Proceedings of SIGIR '93, 191-203, 1993.
[7] Larkey, L. S. and Connell, M. E. Structured queries, language modeling, and relevance modeling in cross-language information retrieval. To appear in Information Processing and Management, Special Issue on Cross Language Information Retrieval, 2003.
[8] Larkey, L. S., Ballesteros, L., and Connell, M. E. Improving stemming for Arabic information retrieval: Light stemming and co-occurrence analysis. In Proceedings of SIGIR 2002, 275-282, 2002.
[9] Lavrenko, V. and Croft, W. B. Relevance-based language models. In Proceedings of SIGIR 2001. New Orleans: ACM, 120-127, 2001.
[10] Lavrenko, V. and Croft, W. B. Relevance models in information retrieval. In Language modeling for information retrieval, W. B. Croft and J. Lafferty (eds.). Boston: Kluwer, 11-56, 2003.
[11] Lavrenko, V., Allan, J., DeGuzman, E., LaFlamme, D., Pollard, V., and Thomas, S. Relevance models for topic detection and tracking. In Proceedings of the Conference on Human Language Technology, 104-110, 2002.
[12] Leek, T., Schwartz, R. M., and Sista, S. Probabilistic approaches to topic detection and tracking. In Topic detection and tracking: Event-based information organization, J. Allan (ed.). Boston, MA: Kluwer, 67-83, 2002.
[13] Levow, G.-A. and Oard, D. W. Signal boosting for translingual topic tracking: Document expansion and n-best translation. In Topic detection and tracking: Event-based information organization, J. Allan (ed.). Boston, MA: Kluwer, 175-195, 2002.
[14] Oard, D. W. Adaptive vector space text filtering for monolingual and cross-language applications. PhD dissertation, University of Maryland, College Park, 1996. http://www.glue.umd.edu/~dlrg/filter/papers/thesis.ps.gz
| topic models;classification;crosslingual;native topic models;similarity;story link;topic tracking;native language hypothesis;multilingual topic tracking;multilingual;Arabic;TDT;machine translation
122 | Lazy Preservation: Reconstructing Websites by Crawling the Crawlers | Backup of websites is often not considered until after a catastrophic event has occurred to either the website or its webmaster. We introduce "lazy preservation" digital preservation performed as a result of the normal operation of web crawlers and caches. Lazy preservation is especially suitable for third parties; for example, a teacher reconstructing a missing website used in previous classes. We evaluate the effectiveness of lazy preservation by reconstructing 24 websites of varying sizes and composition using Warrick, a web-repository crawler. Because of varying levels of completeness in any one repository, our reconstructions sampled from four different web repositories: Google (44%), MSN (30%), Internet Archive (19%) and Yahoo (7%). We also measured the time required for web resources to be discovered and cached (10-103 days) as well as how long they remained in cache after deletion (7-61 days). | INTRODUCTION
"My old web hosting company lost my site in its
entirety (duh!) when a hard drive died on them.
Needless to say that I was peeved, but I do notice
that it is available to browse on the wayback
machine... Does anyone have any ideas if I can
download my full site?" - A request for help at
archive.org [25]
Websites may be lost for a number of reasons: hard drive
crashes, file system failures, viruses, hacking, etc. A lost
website may be restored if care was taken to create a backup
beforehand, but sometimes webmasters are negligent in backing
up their websites, and in cases such as fire, flooding, or
death of the website owner, backups are frequently unavailable
. In these cases, webmasters and third parties may turn
to the Internet Archive (IA) "Wayback Machine" for help.
According to a representative from IA, they have performed
over 200 website recoveries in the past year for various individuals
. Although IA is often helpful, it is strictly a best-effort
approach that performs sporadic, incomplete and slow
crawls of the Web (IA is at least 6 months out-of-date [16]).
Another source of missing web content is in the caches of
search engines (SEs) like Google, MSN and Yahoo that scour
the Web looking for content to index. Unfortunately, the
SEs do not preserve canonical copies of all the web resources
they cache, and it is assumed that the SEs do not keep web
pages long after they have been removed from a web server.
We define lazy preservation as the collective digital preservation
performed by web archives and search engines on behalf
of the Web at large. It exists as a preservation service
on top of distributed, incomplete, and potentially unreliable
web repositories. Lazy preservation requires no individual
effort or cost for Web publishers, but it also provides no
quality of service guarantees. We explore the effectiveness
of lazy preservation by downloading 24 websites of various
sizes and subject matter and reconstructing them using a
web-repository crawler named Warrick (named after a fictional
forensic scientist with a penchant for gambling), which recovers missing
resources from four web repositories (IA, Google, MSN
and Yahoo). We compare the downloaded versions of the
sites with the reconstructed versions to measure how successful
we were at reconstructing the websites.
We also measure the time it takes for SEs to crawl and
cache web pages that we have created on .com and .edu websites
. In June 2005, we created four synthetic web collections
consisting of HTML, PDF and images. For 90 days we systematically
removed web pages and measured how long they
remained cached by the SEs.
BACKGROUND AND RELATED WORK
The ephemeral nature of the Web has been widely acknowledged.
To combat the disappearance of web resources,
Brewster Kahle's Internet Archive has been archiving the Web
since 1996 [4].
Table 1: Web repository-supported data types

Type             G    Y    M    IA
HTML             C    C    C    C
Plain text       M    M    M    C
GIF, PNG, JPG    M    M    R    C
JavaScript       M    M         C
MS Excel         M    S    M    C
MS PowerPoint    M    M    M    C
MS Word          M    M    M    C
PDF              M    M    M    C
PostScript       M    S         C

C = Canonical version is stored
M = Modified version is stored (image thumbnails or HTML conversions)
R = Stored but not retrievable with direct URL
S = Indexed but stored version is not accessible
National libraries are also actively engaged
in archiving culturally important websites [8]. Systems
like LOCKSS [24] have been developed to ensure libraries
have long-term access to publishers' web content,
and commercial systems like Spurl.net and HanzoWeb.com
have been developed to allow users to archive selected web
resources that they deem important.
Other researchers have developed tools for archiving individual
websites and web pages. InfoMonitor [7] archives a
website's file system and stores the archive remotely. TTA-pache
[9] is used to archive requested pages from a particular
web server, and iPROXY [23] is used as a proxy server to
archive requested pages from a variety of web servers. In
many cases these services can be of some value for recovering
a lost website, but they are largely useless when backups
are inaccessible or destroyed or when a third party wants to
reconstruct a website. They also require the webmaster to
perform some amount of work in setting up, configuring and
monitoring the systems.
In regards to commercial search engines, the literature
has mostly focused on measuring the amount of content
they have indexed (e.g., [15, 18]), relevance of responses to
users' queries (e.g., [5, 14]), and ranking of pages (e.g., [28]).
Lewandowski et al.
[17] studied how frequently Google,
MSN and Yahoo updated their cached versions of web pages,
but we are unaware of any research that attempts to measure
how quickly new resources are added to and removed
from commercial SE caches, or research that explores the
use of SE caches for reconstructing websites.
WEB CRAWLING AND CACHING
There are many SEs and web archives that index and
store Web content. For them to be useful for website reconstruction
, they must at a minimum provide a way to map
a given URL to a stored resource. To limit the implementation
complexity, we have focused on what we consider to
be the four most popular web repositories that meet our
minimum criteria. Recent measurements show that Google,
MSN and Yahoo index significantly different portions of the
Web and have an intersection of less than 45% [15]. Adding
additional web repositories like ask.com, gigablast.com,
incywincy.com and any other web repository that allows direct
URL retrieval would likely increase our ability to reconstruct
websites.
Figure 1: Timeline of SE resource acquisition and release (key
points t0, td, ta, tr, tm, tp; TTL_ws on the web server and TTL_c in
the SE cache; vulnerable, replicated, endangered and unrecoverable
periods)
Although SEs often publish index size estimates, it is difficult
to estimate the number of resources in each SE cache.
An HTML web page may consist of numerous web resources
(e.g., images, applets, etc.) that may not be counted in the
estimates, and not all indexed resources are stored in the SE
caches. Google, MSN and Yahoo will not cache an HTML
page if it contains a NOARCHIVE meta-tag, and the HTTP
Cache-control directives `no-cache' and `no-store' may also
prevent caching of resources [1].
Only IA stores web resources indefinitely. The SEs have
proprietary cache replacement and removal policies which
can only be inferred from observed behavior. All four web
repositories perform sporadic and incomplete crawls of websites
making their aggregate performance important for website
reconstruction.
Table 1 shows the most popular types of resources held
by the four web repositories. This table is based on our
observations when reconstructing websites with a variety of
content. IA keeps a canonical version of all web resources,
but SEs only keep canonical versions of HTML pages. When
adding PDF, PostScript and Microsoft Office (Word, Excel,
PowerPoint) resources to their cache, the SEs create HTML
versions of the resources which are stripped of all images.
SEs also keep only a thumbnail version of the images they
cache due to copyright law. MSN uses Picsearch for their
image crawling; unfortunately, Picsearch and MSN do not
support direct URL queries for accessing these images, so
they cannot be used for recovering website images.
3.2 Lifetime of a Web Resource
Figure 1 illustrates the life span of a web resource from
when it is first made available on a web server to when
it is finally purged from a SE cache. A web resource's
time-to-live on the web server (TTL_ws) is defined as the number
of days from when the resource is first made accessible on
the server (t0) to when it is removed (tr).
A new resource is vulnerable until it is discovered by a
SE (td) and made available in the SE cache (ta). The resource
is replicated when it is accessible on the web server
and in cache. Once the resource is removed from the web
server (tr), it becomes endangered since it is only accessible
in cache. When a subsequent crawl reveals the resource is
no longer available on the web server (tm), it will then be
purged from cache (tp) and become unrecoverable. The period
between ta and tp defines a resource's time-to-live in the SE
cache (TTL_c). A resource is recoverable if it is currently
cached (i.e., is replicated or endangered). A recoverable resource
can only be recovered during the TTL_c period with
a probability of P_r, the observed number of days that a resource
is retrievable from the cache divided by TTL_c.
It should be noted that the TTL_ws and TTL_c values of a
resource may not necessarily overlap. A SE that is trying
to maximize the freshness of its index will try to minimize
the difference between TTL_ws and TTL_c. A SE that is slow
in updating its index, perhaps because it obtains crawling
data from a third party, may experience late caching where
tr < ta.
For a website to be lazily preserved, we would like its resources
to be cached soon after their appearance on a website
(have minimal vulnerability). SEs may also share this goal
if they want to index newly discovered content as quickly as
possible. Inducing a SE to crawl a website at a specific time
is not currently possible. Webmasters may employ various
techniques to ensure their websites are crawler-friendly [13,
27] and well connected to the Web. They may even submit
their website URLs to SEs or use proprietary mechanisms
like Google's Sitemap Protocol [12], but no technique will
guarantee immediate indexing and caching of a website.
We would also like resources to remain cached long after
they have been deleted from the web server (remain endangered
) so they can be recovered for many days after their
disappearance. SEs on the other hand may want to minimize
the endangered period in order to purge missing content
from their index. Just as we have no control as to when
a SE crawler will visit, we also have no control over cache
eviction policies.
3.3 Web Collection Design
In order to obtain measurements for TTL_c and other values
in Figure 1, we created four synthetic web collections and
placed them on websites for which we could obtain crawling
data. We deployed the collections in June 2005 at four
different locations: 1) www.owenbrau.com,
2) www.cs.odu.edu/~fmccown/lazy/, 3) www.cs.odu.edu/~jsmit/, and
4) www.cs.odu.edu/~mln/lazyp/. The .com website was new
and had never been indexed by Google, Yahoo or MSN.
The 3 .edu websites had existed for over a year and had
been previously crawled by all three SEs. In order for the
web collections to be found by the SEs, we placed links to
the root of each web collection from the .edu websites, and
we submitted owenbrau's base URL to Google, MSN and
Yahoo 1 month prior to the experiment. For 90 days we
systematically removed resources from each collection. We
examined the server web logs to determine when resources
were crawled, and we queried Google, MSN and Yahoo daily
to determine when the resources were cached.
We organized each web collection into a series of 30 update
bins (directories) which contained a number of HTML
pages referencing the same three inline images (GIF, JPG
and PNG) and a number of PDF files. An index.html file
(with a single inline image) in the root of the web collection
pointed to each of the bins. An index.html file in each bin
pointed to the HTML pages and PDF files so a web crawler
could easily find all the resources. All these files were static
and did not change throughout the 90 day period except
the index.html files in each bin which were modified when
links to deleted web pages were removed. In all, there were
381 HTML files, 350 PDF files, and 223 images in each web
collection. More detail about the organization of the web
collections and what the pages and images looked like can
be found in [20, 26].
The PDF and HTML pages were made to look like typical
web pages with around 120 words per page. The text for
each page was randomly generated from a standard English
dictionary. By using random words we avoided creating duplicate
pages that a SE may reject [6]. Unfortunately, using
random words may cause pages to be flagged as spam [10].
Each HTML and PDF page contained a unique identifier
(UID) at the top of each page (e.g., `mlnODULPT2 dgrp18
pg18-2-pdf') that included 4 identifiers: the web collection
(e.g., `mlnODULPT2' means the `mln' collection), bin number
(e.g., `dgrp18' means bin 18), page number and resource
type (e.g., `pg18-2-pdf' means page number 2 from bin 18
and PDF resource). The UID contains spaces to allow for
more efficient querying of the SE caches.
The TTL_ws for each resource in the web collection is a
function of its bin number b and page number p:

    TTL_ws = b(⌊90/b⌋ - p + 1)    (1)
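As a quick check of Equation (1), reading the reconstructed bracketed term
as a floor (our assumption), each bin loses one page every b days over the
90-day test period.

    import math

    def ttl_ws(b: int, p: int, test_days: int = 90) -> int:
        """Time-to-live (days) on the web server for page p of bin b, Eq. (1)."""
        return b * (math.floor(test_days / b) - p + 1)

    # e.g. bin 18: page 1 lives 90 days, page 2 lives 72, ..., page 5 lives 18
    print([ttl_ws(18, p) for p in range(1, 6)])   # [90, 72, 54, 36, 18]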
3.4 Daily SE Queries
In designing our daily SE queries, care was taken to perform
a limited number of daily queries to not overburden
the SEs. We could have queried the SEs using the URL for
each resource, but this might have led to our resources being
cached prematurely; it is possible that if a SE is queried
for a URL it did not index that it would add the URL to a
list of URLs to be crawled at a later date. This is how IA's
advanced search interface handles missing URLs from users'
queries.
To determine which HTML and PDF resources had been
cached, we queried using subsets of the resources' UIDs
and looked for cached URLs in the results pages. For example
, to find PDF resources from the mln collection, we
queried each SE to return the top 100 PDF results from
the site www.cs.odu.edu that contain the exact phrase `mlnODULPT2
dgrp18'. (MSN only allows limiting the results page to 50.)
It is necessary to divulge the site in
the query or multiple results from the site will not be returned
. Although this tells the SE on which site the resource
is located, it does not divulge the URL of the resource. To
query for cached images, we queried for the globally unique
filename given to each image.
3.5 Crawling and Caching Observations
Although the web server logs registered visits from a variety
of crawlers, we report only on crawls from Google,
Inktomi (Yahoo) and MSN.
3
Alexa Internet (who provides
crawls to IA) only accessed our collection once (induced
through our use of the Alexa toolbar). A separate IA robot
accessed less than 1% of the collections, likely due to
several submissions we made to their Wayback Machine's
advanced search interface early in the experiment. Further
analysis of the log data can seen in a companion paper [26].
We report only detailed measurements on HTML resources
(PDF resources were similar).
Images were crawled and
cached far less frequently; Google and Picsearch (the MSN
Images provider) were the only ones to crawl a significant
number of images. The 3 .edu collections had 29% of their
images crawled, and owenbrau had 14% of its images crawled.
Only 4 unique images appeared in Google Images, all from
Table 2: Caching of HTML resources from 4 web collections (350 HTML
resources in each collection)

Web          % URLs crawled   % URLs cached    t_ca            TTL_c / P_r                        Endangered
collection   G    M    Y      G    M    Y      G     M    Y    G           M           Y          G    M    Y
fmccown      91   41   56     91   16   36     13    65   47   90 / 0.78   20 / 0.87   35 / 0.57  51   9    24
jsmit        92   31   92     92   14   65     12    66   47   86 / 0.82   20 / 0.91   36 / 0.55  47   7    25
mln          94   33   84     94   14   49     10    65   54   87 / 0.83   21 / 0.90   24 / 0.46  47   8    19
owenbrau     18   0    0      20   0    0      103   N/A  N/A  40 / 0.98   N/A         N/A        61   N/A  N/A
Ave          74   26   58     74   11   37     35    66   50   76 / 0.86   20 / 0.89   32 / 0.53  51   8    23
Figure 2: Crawling (top) and caching (bottom) of
HTML resources from the mln web collection
the mln collection. Google likely used an image duplication
detection algorithm to prevent duplicate images from
different URLs from being cached. Only one image (from
fmccown) appeared in MSN Images. None of the cached
images fell out of cache during our experiment.
Table 2 summarizes the performance of each SE to crawl
and cache 350 HTML resources from each of the four web
collections. This table does not include index.html resources
which had an infinite
T T L
ws
. We believe there was an error
in the MSN query script which caused fewer resources to
be found in the MSN cache, but the percentage of crawled
URLs provides an upper bound on the number of cached
resources; this has little to no effect on the other measurements
reported.
The three SEs showed equal desire to crawl HTML and
PDF resources. Inktomi (Yahoo) crawled 2 times as many
resources as MSN, and Google crawled almost 3 times as
many resources as MSN. Google was the only SE to crawl
and cache any resources from the new owenbrau website.
From a preservation perspective, Google outperformed
MSN and Yahoo in nearly every category. Google cached
the highest percentage of HTML resources (76%) and took
only 12 days on average to cache new resources from the
.edu web collections. On average, Google cached HTML resources
for the longest period of time (76 days), consistently
provided access to the cached resources (86%), and was the
slowest to remove cached resources that were deleted from
the web server (51 days).
Although Yahoo cached more
HTML resources and kept the resources cached for a longer
period than MSN, the probability of accessing a resource on
any given day was only 53% compared to 89% for MSN.
Figure 2 provides an interesting look at the crawling and
caching behavior of Google, Yahoo and MSN. These graphs
illustrate the crawling and caching of HTML resources from
the mln collection; the other two edu collections exhibited
similar behavior. The resources are sorted by TTL_ws with
the longest-living resources appearing on the bottom. The
index.html files which were never removed from the web collection
have an infinite TTL (`inf'). The red diagonal line
indicates the decay of the web collection; on any particular
day, only resources below the red line were accessible
from the web server. On the top row of Figure 2, blue dots
indicate resources that were crawled on a particular day.
When resources were requested that had been deleted, the
web server responded with a 404 (not found) code represented
by green dots above the red line. The bottom row of
Figure 2 shows the cached HTML resources (blue) resulting
from the crawls. Some pages in Yahoo were indexed but not
cached (green).
As Figure 2 illustrates, both Google and MSN were quick
to make resources available in their cache soon after they
were crawled, and they were quick to purge resources from
their cache when a crawl revealed the resources were no
longer available on the web server.
A surprising finding
is that many of the HTML resources that were previously
purged from Google's cache reappeared on day 102 and remained
cached for the remainder of our experiment. The
other two edu collections exhibited similar behavior for HTML
resources. HTML and PDF resources from owenbrau appeared
in the Google cache on day 102 for the first time;
these resources had been deleted from the web server 10-20
days before day 102. Manual inspection weeks after the experiment
had concluded revealed that the pages remained
in Google's cache and fell out months later.
Yahoo was very sporadic in caching resources; there was
often a lag time of 30 days between the crawl of a resource
and its appearance in cache. Many of the crawled resources
never appeared in Yahoo's cache. Although Inktomi crawled
nearly every available HTML resource on day 10, only half
of those resources ever became available in the Yahoo cache.
We have observed through subsequent interaction with Yahoo
that links to cached content may appear and disappear
when performing the same query just a few seconds apart.
This likely accounts for the observed cache inconsistency.
We have observed from our measurements that nearly all
new HTML and PDF resources that we placed on known
websites were crawled and cached by Google several days after
they were discovered.
Figure 3: Lost website (left), reconstructed website (center), and
reconstruction diagram (right): identical 50%, changed 33%,
missing 17%, added 20%
Resources on a new website were
not cached for months.
Yahoo and MSN were 4-5 times
slower than Google to acquire new resources, and Yahoo incurs
a long transfer delay from Inktomi's crawls into their
cache.
We have also observed that cached resources are
often purged from all three caches as soon as a crawl reveals
the resources are missing, but in the case of Google,
many HTML resources have reappeared weeks after being
removed. Images tend to be largely ignored.
Search engines may crawl and cache other websites differently
depending on a variety of factors including perceived
level of importance (e.g., PageRank) and modification rates.
Crawling policies may also be changed over time. This experiment
merely provides a glimpse into the current caching
behavior of the top three SEs that has not been documented
before. Our findings suggest that SEs vary greatly in the
level of access they provide to cached resources, and that
websites are likely to be reconstructed more successfully if
they are reconstructed quickly after being lost. Reconstructions
should also be performed several days in a row to ensure
maximum access to web repository holdings. In some
cases, it may even be beneficial to attempt recovering resources
even a month after they have been lost.
RECONSTRUCTING WEBSITES
We define a reconstructed website to be the collection
of recovered resources that share the same URIs as the resources
from a lost website or from some previous version of
the lost website [19]. The recovered resources may be equivalent
to, or very different from, the lost resources. For websites
that are composed of static files, recovered resources
would be equivalent to the files that were lost. For sites
produced dynamically using CGI, PHP, etc., the recovered
resources would match the client's view of the resources and
would be useful to the webmaster in rebuilding the server-side
components. The server-side components are currently
not recoverable using lazy preservation (see Section 5).
To quantify the difference between a reconstructed website
and a lost website, we classify the recovered resources from
the website graphs. A website can be represented as a graph
G = (V, E) where each resource r_i (HTML, PDF, image,
etc.), identified by a URI, is a node v_i, and there exists
a directed edge from v_i to v_j when there is a hyperlink or
reference from r_i to r_j. The left side of Figure 3 shows a web
graph for some website W if we began to crawl it starting
at A. Suppose W was lost and reconstructed forming the
website W' represented in the center of Figure 3.
For each resource r_i in W we may examine its corresponding
resource r'_i in W' that shares the same URI and categorize
r'_i as identical (r'_i is byte-for-byte identical to r_i),
changed (r'_i is not identical to r_i), or missing (r_i could not
be found in any web repository). We would categorize those resources
in W' that did not share a URI with any resource in W
as added (r'_i was not a part of the current website but was
recovered due to a reference from r'_j).
Figure 3 shows that resources A, G and E were reconstructed
and are identical to their lost versions. An older
version of B was found (B') that pointed to G, a resource
that does not currently exist in W. Since B' does not reference
D, we did not know to recover it (it is possible that
G is actually D renamed). An older version of C was found,
and although it still references F, F could not be found in
any web repository.
A measure of change between the lost website W and the
reconstructed website W' can be described using the following
difference vector:

    difference(W, W') = (R_changed / |W|, R_missing / |W|, R_added / |W'|)    (2)
For Figure 3, the difference vector is (2/6, 1/6, 1/5) =
(0.333, 0.167, 0.2). The best case scenario would be (0,0,0),
the complete reconstruction of a website. A completely unrecoverable
website would have a difference vector of (0,1,0).
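A small helper that evaluates Equation (2) from the raw counts, checked
against the Figure 3 example (the function name is ours).

    def difference_vector(n_changed, n_missing, n_added, size_lost, size_recon):
        """Eq. (2): proportions of changed and missing resources relative to
        the lost site W, and of added resources relative to the
        reconstruction W'."""
        return (n_changed / size_lost,
                n_missing / size_lost,
                n_added / size_recon)

    # Figure 3: 2 changed and 1 missing out of 6 lost resources, and 1 added
    # resource out of 5 recovered ones.
    print(difference_vector(2, 1, 1, 6, 5))   # (0.333..., 0.166..., 0.2)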
The difference vector for a reconstructed website can be
illustrated as a reconstruction diagram as shown on the
right side of Figure 3. The changed, identical and missing
resources form the core of the reconstructed website. The
dark gray portion of the core grows as the percentage of
changed resources increases. The hole in the center of the
core grows as the percentage of missing resources increases.
The added resources appear as crust around the core. This
representation will be used later in Table 3 when we report
on the websites we reconstructed in our experiments.
4.2 Warrick Operation
Warrick, our web-repository crawler, is able to reconstruct
a website when given a base URL pointing to where the site
used to exist. The web repositories are crawled by issuing
queries in the form of URLs to access their stored holdings
. For example, Google's cached version of http://foo.
edu/page1.html can be accessed like so: http://search.
google.com/search?q=cache:http://foo.edu/page1.html.
If Google has not cached the page, an error page will be generated
. Otherwise the cached page can be stripped of any
Google-added HTML, and the page can be parsed for links
to other resources from the foo.edu domain (and other domains
if necessary). Most repositories require two or more
queries to obtain a resource.
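A rough sketch of one such lookup: fetching Google's cached copy of a URL
with the query form quoted above. Error handling is simplified, the
HTML-stripping step is omitted, and the repositories' interfaces may well
have changed since the paper was written.

    import urllib.request
    import urllib.error

    def fetch_google_cache(url):
        """Request Google's cached copy of `url` using the cache: query form.
        Returns the raw page on success, or None when no cached copy is
        reported. (Warrick additionally strips the SE-added markup and
        compares cache dates across repositories.)"""
        query = "http://search.google.com/search?q=cache:" + url
        try:
            with urllib.request.urlopen(query) as response:
                return response.read().decode("utf-8", errors="replace")
        except urllib.error.HTTPError:
            return None   # an error page means the resource is not cached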
For each URL, the file extension (if present) is examined
to determine if the URL is an image (.png, .gif, .jpg, etc.)
or other resource type. All three SEs use a different method
for retrieving images than for other resource types. IA has
the same interface regardless of the type. We would have
better accuracy at determining if a given URL referenced
an image or not if we knew the URL's resource MIME type,
but this information is not available to us.
IA is the first web repository queried by Warrick because
it keeps a canonical version of all web resources.
When
querying for an image URL, if IA does not have the image
then Google and Yahoo are queried one at a time until one of
them returns an image. Google and Yahoo do not publicize
the cached date of their images, so it is not possible to pick
the most recently cached image.
Table 3: Results of website reconstructions

                                      MIME type groupings (orig/recovered)                  Difference vector           Almost
Website                         PR    Total          HTML          Images        Other      (Changed, Missing, Added)   identical
1. www.eskimo.com/~scs/         6     719/691 96%    696/669 96%   22/21 95%     1/1 100%   (0.011, 0.039, 0.001)       50%
2. www.digitalpreservation.gov  8     414/378 91%    346/329 95%   42/25 60%     26/24 92%  (0.097, 0.087, 0.000)       44%
3. www.harding.edu/hr/          4     73/47 64%      19/19 100%    25/2 8%       29/26 90%  (0.438, 0.356, 0.145)       83%
4. www.techlocker.com           4     1216/406 33%   687/149 22%   529/257 49%   0/0        (0.267, 0.666, 0.175)       99%
If a non-image resource is being retrieved, again IA is
queried first. If IA has the resource and the resource does
not have a MIME type of `text/html', then the SEs are not
queried since they only store canonical versions of HTML
resources. If the resource does have a `text/html' MIME
type (or IA did not have a copy), then all three SEs are
queried, the cache dates of the resources are compared (if
available), and the most recent resource is chosen.
Warrick will search HTML resources for URLs to other
resources and add them to the crawl frontier (a queue). Resources
are recovered in breadth-first order, and reconstruction
continues until the frontier is empty. All recovered resources
are stored on the local filesystem, and a log is kept of
recovered and missing resources. Warrick limits its requests
per day to the web repositories based on their published API
values (Google, 1000; Yahoo, 5000; MSN, 10,000) or lacking
an API, our best guess (IA, 1000). If any repository's limit
is exceeded, Warrick will checkpoint and sleep for 24 hours.
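Putting the pieces of this section together, a breadth-first web-repository
crawl in the spirit of Warrick might look like the sketch below; the
repository callables, the link extractor and the omissions noted in the
comments are simplifications of ours, not Warrick's actual implementation.

    from collections import deque

    def reconstruct(base_url, repositories, link_extractor):
        """Breadth-first web-repository crawl.
        `repositories` is an ordered list of callables url -> (content, mime,
        cache_date) or None; `link_extractor` pulls same-site URLs out of
        recovered HTML. Rate limiting, the IA-first short-circuiting and the
        image-specific handling described above are omitted."""
        frontier, seen = deque([base_url]), {base_url}
        recovered, missing = {}, []
        while frontier:
            url = frontier.popleft()
            best = None
            for repo in repositories:            # e.g. IA first, then the SEs
                hit = repo(url)
                if hit and (best is None or hit[2] > best[2]):
                    best = hit                   # keep the most recent copy
            if best is None:
                missing.append(url)
                continue
            recovered[url] = best
            if best[1] == "text/html":           # only HTML yields new links
                for link in link_extractor(best[0], base_url):
                    if link not in seen:
                        seen.add(link)
                        frontier.append(link)
        return recovered, missing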
4.3 Reconstruction Experiment and Results
To gauge the effectiveness of lazy preservation for website
reconstruction, we compared snapshots of 24 live websites
with their reconstructions. We chose sites that were
either personally known to us or randomly sampled from
dmoz.org. The websites (some were actually subsites) were
predominantly English, covered a range of topics, and were
from a number of top-level domains. We chose 8 small (
<150
URIs), 8 medium (150-499 URIs) and 8 large (
500 URIs)
websites, and we avoided websites that used robots.txt and
Flash exclusively as the main interface.
In August 2005 we downloaded all 24 websites by starting
at the base URL and following all links and references
that were in and beneath the starting directory, with no limit
to the path depth. For simplicity, we restricted the download
to port 80 and did not follow links to other hosts within
the same domain name. So if the base URL for the website
was http://www.foo.edu/bar/, only URLs matching http:
//www.foo.edu/bar/* were downloaded. Warrick uses the
same default setting for reconstructing websites.
Immediately after downloading the websites, we reconstructed
five different versions for each of the 24 websites:
four using each web repository separately, and one using
all web repositories together. The different reconstructions
helped to show how effective individual web repositories
could reconstruct a website versus the aggregate of all four
web repositories.
We present 4 of the 24 results of the aggregate reconstructions
in Table 3, ordered by percent of recovered URIs.
The complete results can be seen in [20].
The `PR' column
is Google's PageRank (0-10 with 10 being the most
important) for the root page of each website at the time of
the experiments. (MSN and Yahoo do not publicly disclose
their `importance' metric.) For each website, the total number
of resources in the website is shown along with the total
number of resources that were recovered and the percentage.
Resources are also totalled by MIME type. The difference
vector for the website accounts for recovered files that were
added.
The `Almost identical' column of Table 3 shows the percentage
of text-based resources (e.g., HTML, PDF, PostScript
, Word, PowerPoint, Excel) that were almost identical
to the originals. The last column shows the reconstruction
figure for each website if these almost identical resources are
moved from the `Changed' category to `Identical' category.
We considered two text-based resources to be almost identical
if they shared at least 75% of their shingles of size 10.
Shingling (as proposed by Broder et al. [3]) is a popular
method for quantifying similarity of text documents when
word-order is important [2, 11, 21]. We did not use any
image similarity metrics.
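A sketch of the shingle comparison; the paper does not spell out the overlap
denominator, so measuring overlap against the smaller shingle set is our
assumption.

    def shingles(text: str, size: int = 10) -> set:
        """Set of word n-grams (shingles) of the given size."""
        words = text.split()
        return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

    def almost_identical(a: str, b: str, size: int = 10, threshold: float = 0.75) -> bool:
        """True if the two texts share at least `threshold` of their shingles."""
        sa, sb = shingles(a, size), shingles(b, size)
        if not sa or not sb:
            return a == b
        return len(sa & sb) / min(len(sa), len(sb)) >= threshold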
We were able to recover more than 90% of the original resources
from a quarter of the 24 websites. For three quarters
of the websites we recovered more than half of the resources.
On average we were able to recover 68% of the website resources
(median=72%). Of those resources recovered, 30%
of them on average were not byte-for-byte identical. A majority
(72%) of the `changed' text-based files were almost
identical to the originals (having 75% of their shingles in
common). 67% of the 24 websites had obtained additional
files when reconstructed which accounted for 7% of the total
number of files reconstructed per website.
When all website resources are aggregated together and
examined, dynamic pages (those that contained a `?'
in
the URL) were significantly less likely to be recovered than
resources that did not have a query string (11% vs. 73%).
URLs with a path depth greater than three were also less
likely to be recovered (52% vs. 61%). A chi-square analysis
confirms the significance of these findings (p
< .001). We
were unable to find any correlation between percentage of
recovered resources with PageRank or website size.
The success of recovering resources based on their MIME
type is plotted in Figure 4.
Figure 4: Recovery success by MIME type (average number of
resources in the original websites and percentage recovered by the
aggregate reconstruction and by IA, Google, MSN and Yahoo!
individually, for the html, images, pdf, other and ms MIME type
groups)
The percentage of resources
that were recovered from the five different website reconstructions
we performed (one using all four web repositories
, and four using each web repository individually) is
shown along with the average number of resources making
up the 24 downloaded (or original) websites. A majority
(92%) of the resources making up the original websites are
HTML and images. We were much more successful at recovering
HTML resources than images; we recovered 100%
of the HTML resources for 9 of the websites (38%) using
all four web repositories. It is likely we recovered fewer images
because MSN cannot be used to recover images, and as
our caching experiment revealed, images are also much less
likely to be cached than other resource types.
Figure 4 also emphasizes the importance of using all four
web repositories when reconstructing a website. By just using
IA or just using Google, many resources will not be
recovered.
This is further illustrated by Figure 5 which
shows the percentage of each web repository's contribution
in the aggregate reconstructions (sites are ordered by number
of URIs). Although Google was the largest overall contributor
to the website reconstructions (providing 44% of
the resources) they provided none of the resources for site
17 and provided less than 30% of the resources for 9 of
the reconstructions. MSN contributed on average 30% of
the resources; IA was third with 19%, and Yahoo was last
with a 7% contribution rate.
Yahoo's poor contribution
rate is likely due to their spotty cache access as exhibited
in our caching experiment (Figure 2) and because last-modified
datestamps are frequently older than last-cached
datestamps (Warrick chooses resources with the most recent
datestamps).
The amount of time and the number of queries required
to reconstruct all 24 websites (using all 4 repositories) is
shown in Figure 6. Here we see almost a 1:1 ratio of queries
to seconds. Although the size of the original websites gets
larger along the x-axis, the number of files reconstructed
and the number of resources held in each web repository
determine how many queries are performed. In none of our
reconstructions did we exceed the daily query limit of any
of the web repositories.
FUTURE WORK
We have made Warrick available on the Web
(http://www.cs.odu.edu/~fmccown/warrick/), and it has been used
to reconstruct several websites that have been lost due to fire,
hard-drive crashes, death of the website owner, hacking, and
discontinued charitable website hosting [19].
Figure 5: Web repositories contributing to each website
reconstruction (percent contribution of Google, MSN, IA and
Yahoo for each of the 24 reconstructed websites)
Figure 6: Number of queries performed and time taken (seconds)
to reconstruct websites
Although the reconstructions have not been complete, individuals
are very thankful to have recovered any resources at
all when faced with total loss.
There are numerous improvements we are making to Warrick
including an API for easier inclusion of new web repositories
and new methods for discovering more resources within
a web repository [19]. We are planning on reconstructing a
larger sample from the Web to discover the website characteristics
that allow for more effective "lazy recovery". Discovering
such characteristics will allow us to create guidelines
for webmasters to ensure better lazy preservation of
their sites. Our next experiment will take into account rate
of change and reconstruction differences over time.
We are also interested in recovering the server-side components
(CGI programs, databases, etc.) of a lost website.
We are investigating methods to inject server-side components
into indexable content using erasure codes (popular
with RAID systems [22]) so they can be recovered from web
repositories when only a subset of pages can be found.
A web-repository crawler could be used in the future to
safeguard websites that are at risk of being lost. When a
website is detected as being lost, a reconstruction could be
initiated to preserve what is left of the site. Additionally,
websites in countries that are targeted by political censorship
could be reconstructed at safe locations.
CONCLUSIONS
Lazy preservation is a best-effort, wide-coverage digital
preservation service that may be used as a last resort when
73
website backups are unavailable. It is not a substitute for
digital preservation infrastructure and policy. Web repositories
may not crawl orphan pages, protected pages (e.g.,
robots.txt, password, IP), very large pages, pages deep in
a web collection or links influenced by JavaScript, Flash or
session IDs. If a web repository will not or cannot crawl and
cache a resource, it cannot be recovered.
We have measured the ability of Google, MSN and Yahoo
to cache four synthetic web collections over a period of
four months. We measured web resources to be vulnerable
for as little as 10 days and in the worst case, as long as
our 90 day test period. More encouragingly, many HTML
resources were recoverable for 851 days on average after
being deleted from the web server. Google proved to be the
most consistent at caching our synthetic web collections.
We have also used our web-repository crawler to reconstruct
a variety of actual websites with varying success.
HTML resources were the most numerous (52%) type of resource
in our collection of 24 websites and were the most successfully
recoverable resource type (89% recoverable). Images
were the second most numerous (40%) resource type,
but they were less successfully recovered (53%). Dynamic
pages and resources with path depths greater than three
were less likely to be recovered. Google was the most frequent
source for the reconstructions (44%), but MSN was a
close second (30%), followed by IA (19%) and Yahoo (7%).
The probability of reconstruction success was not correlated
with Google's PageRank or the size of the website.
REFERENCES
[1] H. Berghel. Responsible web caching. Communications of the ACM, 45(9):15-20, 2002.
[2] K. Bharat and A. Broder. Mirror, mirror on the web: a study of host pairs with replicated content. In Proceedings of WWW '99, pages 1579-1590, 1999.
[3] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic clustering of the Web. Computer Networks & ISDN Systems, 29(8-13):1157-1166, 1997.
[4] M. Burner. Crawling towards eternity: Building an archive of the world wide web. Web Techniques Magazine, 2(5), 1997.
[5] F. Can, R. Nuray, and A. B. Sevdik. Automatic performance evaluation of web search engines. Info. Processing & Management, 40(3):495-514, 2004.
[6] J. Cho, N. Shivakumar, and H. Garcia-Molina. Finding replicated web collections. In Proceedings of SIGMOD '00, pages 355-366, 2000.
[7] B. F. Cooper and H. Garcia-Molina. Infomonitor: Unobtrusively archiving a World Wide Web server. International Journal on Digital Libraries, 5(2):106-119, April 2005.
[8] M. Day. Collecting and preserving the World Wide Web. 2003. http://library.wellcome.ac.uk/assets/WTL039229.pdf.
[9] C. E. Dyreson, H. Lin, and Y. Wang. Managing versions of web documents in a transaction-time web server. In Proceedings of WWW '04, pages 422-432, 2004.
[10] D. Fetterly, M. Manasse, and M. Najork. Spam, damn spam, and statistics: using statistical analysis to locate spam web pages. In Proceedings of WebDB '04, pages 1-6, 2004.
[11] D. Fetterly, M. Manasse, M. Najork, and J. Wiener. A large-scale study of the evolution of web pages. In Proceedings of WWW '03, pages 669-678, 2003.
[12] Google Sitemap Protocol, 2005. http://www.google.com/webmasters/sitemaps/docs/en/protocol.html.
[13] Google webmaster help center: Webmaster guidelines, 2006. http://www.google.com/support/webmasters/bin/answer.py?answer=35769.
[14] M. Gordon and P. Pathak. Finding information on the World Wide Web: the retrieval effectiveness of search engines. Inf. Process. Manage., 35(2):141-180, 1999.
[15] A. Gulli and A. Signorini. The indexable web is more than 11.5 billion pages. In Proceedings of WWW '05, pages 902-903, May 2005.
[16] Internet Archive FAQ: How can I get my site included in the Archive?, 2006. http://www.archive.org/about/faqs.php.
[17] D. Lewandowski, H. Wahlig, and G. Meyer-Beautor. The freshness of Web search engine databases. Journal of Information Science, 32(2):131-148, Apr 2006.
[18] F. McCown, X. Liu, M. L. Nelson, and M. Zubair. Search engine coverage of the OAI-PMH corpus. IEEE Internet Computing, 10(2):66-73, Mar/Apr 2006.
[19] F. McCown and M. L. Nelson. Evaluation of crawling policies for a web-repository crawler. In Proceedings of HYPERTEXT '06, pages 145-156, 2006.
[20] F. McCown, J. A. Smith, M. L. Nelson, and J. Bollen. Reconstructing websites for the lazy webmaster. Technical report, Old Dominion University, 2005. http://arxiv.org/abs/cs.IR/0512069.
[21] A. Ntoulas, J. Cho, and C. Olston. What's new on the Web? The evolution of the Web from a search engine perspective. In Proceedings of WWW '04, pages 1-12, 2004.
[22] J. S. Plank. A tutorial on Reed-Solomon coding for fault-tolerance in RAID-like systems. Software: Practice and Experience, 27(9):995-1012, 1997.
[23] H. C. Rao, Y. Chen, and M. Chen. A proxy-based personal web archiving service. SIGOPS Operating Systems Review, 35(1):61-72, 2001.
[24] V. Reich and D. S. Rosenthal. LOCKSS: A permanent web publishing and access system. D-Lib Magazine, 7(6), 2001.
[25] A. Ross. Internet Archive forums: Web forum posting. Oct 2004. http://www.archive.org/iathreads/post-view.php?id=23121.
[26] J. A. Smith, F. McCown, and M. L. Nelson. Observed web robot behavior on decaying web subsites. D-Lib Magazine, 12(2), Feb 2006.
[27] M. Weideman and M. Mgidana. Website navigation architectures and their effect on website visibility: a literature survey. In Proceedings of SAICSIT '04, pages 292-296, 2004.
[28] J. Zhang and A. Dimitroff. The impact of webpage content characteristics on webpage visibility in search engine results (part I). Information Processing & Management, 41(3):665-690, 2005.
| Search engines (SEs);cached resources;web repositories;recovery;reconstruction;crawling;caching;lazy preservation;search engine;digital preservation |
123 | Learning Concepts from Large Scale Imbalanced Data Sets Using Support Cluster Machines | This paper considers the problem of using Support Vector Machines (SVMs) to learn concepts from large scale imbalanced data sets. The objective of this paper is twofold. Firstly, we investigate the effects of large scale and imbalance on SVMs. We highlight the role of linear non-separability in this problem. Secondly, we develop a both practical and theoretical guaranteed meta-algorithm to handle the trouble of scale and imbalance. The approach is named Support Cluster Machines (SCMs). It incorporates the informative and the representative under-sampling mechanisms to speedup the training procedure. The SCMs differs from the previous similar ideas in two ways, (a) the theoretical foundation has been provided, and (b) the clustering is performed in the feature space rather than in the input space. The theoretical analysis not only provides justification , but also guides the technical choices of the proposed approach. Finally, experiments on both the synthetic and the TRECVID data are carried out. The results support the previous analysis and show that the SCMs are efficient and effective while dealing with large scale imbalanced data sets. | INTRODUCTION
In the context of concept modelling, this paper considers
the problem of how to make full use of the large scale annotated
data sets. In particular, we study the behaviors of
Support Vector Machines (SVMs) on large scale imbalanced
data sets, not only because of its solid theoretical foundations but also because of its empirical success in various applications.
1.1 Motivation
Bridging the semantic gap has become the most challenging problem of Multimedia Information Retrieval
(MIR). Currently, there are mainly two types of methods
to bridge the gap [8]. The first one is relevance feedback
which attempts to capture the user's precise needs through
iterative feedback and query refinement. Another promising
direction is concept modelling. As noted by Hauptmann
[14], this splits the semantic gap between low level features
and user information needs into two, hopefully smaller gaps:
(a) mapping the low-level features into the intermediate semantic
concepts and (b) mapping these concepts into user
needs. The automated image annotation methods for CBIR and the high level feature extraction methods in CBVR are all efforts to model the first mapping. Of these methods, supervised learning is one of the most successful ones. An
early difficulty of supervised learning is the lack of annotated
training data. Currently, however, this no longer seems to be a problem, thanks both to the techniques developed to leverage the surrounding text of web images and to large scale collaborative annotation. Actually, there is an ongoing effort
named Large Scale Concept Ontology for Multimedia
Understanding (LSCOM), which intends to annotate 1000
concepts in broadcast news video [13]. The initial fruits of
this effort have been harvested in the practice of TRECVID
hosted by National Institute of Standards and Technology
(NIST) [1]. In TRECVID 2005, 39 concepts are annotated
by multiple participants through web collaboration, and ten
of them are used in the evaluation.
The availability of large amounts of annotated data is undoubtedly beneficial to supervised learning. However, it also brings a novel challenge, that is, how to make full use of the data while training the classifiers. On the one hand, the annotated data sets are usually of rather large scale. The development set of TRECVID 2005 includes 74523 keyframes.
The data set of LSCOM with over 1000 annotated concepts
might be even larger. With all the data, the training of
SVMs will be rather slow. On the other hand, each concept will be the minority class under the one-against-all strategy. Only a small portion of the data belongs to the concept, while all the others do not (in our case, the minority class always refers to the positive class). The ratio of the positive
examples and the negative ones is typically below 1 : 100
in TRECVID data. These novel challenges have spurred
great interest in the communities of data mining and machine
learning [2, 6, 21, 22, 29]. Our first motivation is to investigate the effects of large scale and imbalance on SVMs. This is critical for correct technical choices and development. The second objective of this paper is to provide a practical as well as theoretically guaranteed approach to addressing the problem.
1.2 Our Results
The major contribution of this paper can be summarized
as follows:
1. We investigate the effects of large scale and imbalance on SVMs and highlight the role of linear non-separability of the data sets. We find that SVMs have no difficulties with linearly separable large scale imbalanced data.
2. We establish the relations between the SVMs trained on the centroids of the clusters and the SVMs obtained on the original data set. We show that the difference between their optimal solutions is bounded by the perturbation of the kernel matrix. We also prove the optimal criteria for approximating the original optimal solution.
3. We develop a meta-algorithm named Support Cluster Machines (SCMs). A fast kernel k-means approach is employed to partition the data in the feature space rather than in the input space.
Experiments on both the synthetic data and the TRECVID
data are carried out. The results support the previous analysis
and show that the SCMs are efficient and effective while
dealing with large scale imbalanced data sets.
1.3 Organization
The structure of this paper is as follows. In Section 2 we
give a brief review of SVMs and kernel k-means. We discuss
the effects of the large scale imbalanced data on SVMs
in Section 3. We develop the theoretical foundations and
present the detailed SCMs approach in Section 4. In Section
5 we carry out experiments on both the synthetic and
the TRECVID data sets. Finally, we conclude the paper in
Section 6.
PRELIMINARIES
Here, we present a sketch introduction to the soft-margin
SVMs for the convenience of the deduction in Section 4. For
a binary classification problem, given a training data set $\mathcal{D}$ of size $n$,
$$\mathcal{D} = \{(x_i, y_i) \mid x_i \in \mathbb{R}^N,\ y_i \in \{1,-1\}\},$$
where $x_i$ indicates the training vector of the $i$th sample and $y_i$ indicates its target value, and $i = 1,\ldots,n$. The classification hyperplane is defined as
$$\langle w, \phi(x)\rangle + b = 0,$$
where $\phi(\cdot)$ is a mapping from $\mathbb{R}^N$ to a (usually) higher dimensional Hilbert space $\mathcal{H}$, and $\langle\cdot,\cdot\rangle$ denotes the dot product in $\mathcal{H}$. Thus, the decision function $f(x)$ is
$$f(x) = \mathrm{sign}(\langle w, \phi(x)\rangle + b).$$
The SVMs aims to find the hyperplane with the maximum margin between the two classes, i.e., the optimal hyperplane. This can be obtained by solving the following quadratic optimization problem
$$\min_{w,b,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i \quad \text{subject to}\quad y_i(\langle w,\phi(x_i)\rangle + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ i = 1,\ldots,n. \qquad (1)$$
With the help of Lagrange multipliers, the dual of the above problem is
$$\min_{\alpha}\ G(\alpha) = \frac{1}{2}\alpha^T Q\alpha - e^T\alpha \quad \text{subject to}\quad 0 \le \alpha_i \le C,\ i = 1,\ldots,n,\quad \alpha^T y = 0, \qquad (2)$$
where $\alpha$ is a vector with components $\alpha_i$ that are the Lagrange multipliers, $C$ is the upper bound, $e$ is a vector of all ones, and $Q$ is an $n\times n$ positive semi-definite matrix with $Q_{ij} = y_i y_j\langle\phi(x_i),\phi(x_j)\rangle$. Since the mapping $\phi(\cdot)$ only appears in the dot product, we need not know its explicit form. Instead, we define a kernel $K(\cdot,\cdot)$ to calculate the dot product, i.e., $K(x_i,x_j) = \langle\phi(x_i),\phi(x_j)\rangle$. The matrix $K$ with components $K(x_i,x_j)$ is named the Gram matrix (or kernel matrix). With the kernel $K(\cdot,\cdot)$, we can implicitly map the training data from the input space to a feature space $\mathcal{H}$.
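As an aside, the dual problem (2) with a precomputed Gram matrix can be handed directly to an off-the-shelf solver. The short sketch below uses scikit-learn's SVC with kernel="precomputed" and an RBF kernel of our own choosing; the data, gamma and C values are arbitrary illustrations, not settings from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=200) > 0, 1, -1)

gamma = 0.5
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)                  # Gram matrix K_ij = <phi(x_i), phi(x_j)>

clf = SVC(C=1.0, kernel="precomputed")   # internally solves the dual with Q_ij = y_i y_j K_ij
clf.fit(K, y)
print(len(clf.support_), "support vectors")
```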
2.2 Kernel k-means and Graph Partitioning

Given a set of vectors $x_1,\ldots,x_n$, the standard k-means algorithm aims to find clusters $\pi_1,\ldots,\pi_k$ that minimize the objective function
$$J(\{\pi_c\}_{c=1}^k) = \sum_{c=1}^{k}\sum_{x_i\in\pi_c}\|x_i - m_c\|^2, \qquad (3)$$
where $\{\pi_c\}_{c=1}^k$ denotes the partitioning of the data set and $m_c = \frac{\sum_{x_i\in\pi_c}x_i}{|\pi_c|}$ is the centroid of the cluster $\pi_c$. Similar to the idea of nonlinear SVMs, k-means can also be performed in the feature space with the help of a nonlinear mapping $\phi(\cdot)$, which results in the so-called kernel k-means
$$J(\{\pi_c\}_{c=1}^k) = \sum_{c=1}^{k}\sum_{x_i\in\pi_c}\|\phi(x_i) - m_c\|^2, \qquad (4)$$
where $m_c = \frac{\sum_{x_i\in\pi_c}\phi(x_i)}{|\pi_c|}$. If we expand the Euclidean distance $\|\phi(x_i) - m_c\|^2$ in the objective function, we can find that the image of $x_i$ only appears in the form of dot products. Thus, given a kernel matrix $K$ with the same meaning as in SVMs, we can compute the distance between points and centroids without knowing an explicit representation of $\phi(x_i)$.
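The following sketch spells out that observation: a Lloyd-style kernel k-means that touches only the Gram matrix, using the expansion $\|\phi(x_i) - m_c\|^2 = K_{ii} - \frac{2}{|\pi_c|}\sum_{j\in\pi_c}K_{ij} + \frac{1}{|\pi_c|^2}\sum_{j,l\in\pi_c}K_{jl}$. The random initialisation and iteration cap are simplifications of ours; the paper itself relies on the multilevel method of [9] discussed later.

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=20, seed=0):
    """Toy kernel k-means that uses only the Gram matrix K (no explicit phi)."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)            # crude random initialisation
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if len(idx) == 0:
                dist[:, c] = np.inf                # empty cluster: never assigned
                continue
            # ||phi(x_i) - m_c||^2 computed purely from kernel entries
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```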
Recently, an appealing alternative, i.e., graph clustering, has attracted great interest. It treats clustering as a graph partitioning problem. Given a graph $G = (\mathcal{V}, \mathcal{E}, A)$, which consists of a set of vertices $\mathcal{V}$ and a set of edges $\mathcal{E}$ such that an edge between two vertices represents their similarity, the affinity matrix $A$ is $|\mathcal{V}|\times|\mathcal{V}|$ and its entries represent the weights of the edges. Let $\mathrm{links}(\mathcal{V}_1, \mathcal{V}_2)$ be the sum of the edge weights between the nodes in $\mathcal{V}_1$ and $\mathcal{V}_2$, that is,
$$\mathrm{links}(\mathcal{V}_1, \mathcal{V}_2) = \sum_{i\in\mathcal{V}_1,\, j\in\mathcal{V}_2} A_{ij}.$$
Ratio association is a type of graph partitioning objective which aims to maximize within-cluster association relative to the size of the cluster:
$$RAssoc(G) = \max_{\mathcal{V}_1,\ldots,\mathcal{V}_k}\ \sum_{c=1}^{k}\frac{\mathrm{links}(\mathcal{V}_c,\mathcal{V}_c)}{|\mathcal{V}_c|}. \qquad (5)$$
The following theorem establishes the relation between kernel k-means and graph clustering [10]. With this result, we can develop some techniques to handle the difficulty of storing the large kernel matrix for kernel k-means.

Theorem 1. Given a data set, we can construct a weighted graph $G = (\mathcal{V}, \mathcal{E}, A)$ by treating each sample as a node and linking an edge between each pair of samples. If we define the edge weight $A_{ij} = K(x_i, x_j)$, that is, $A = K$, the minimization of (4) is equivalent to the maximization of (5).
THE EFFECTS OF LARGE SCALE IMBALANCED DATA ON SVMS
There are two obstacles caused by large scale. The first
one is the kernel evaluation, which has been intensively discussed
in the previous work. The computational cost scales
quadratically with the data size. Furthermore, it is impossible
to store the whole kernel matrix in the memory for
common computers. The decomposition algorithms (e.g.,
SMO) have been developed to solve the problem [20, 22].
The SMO-like algorithms actually transform the space load
to the time cost, i.e., numerous iterations until convergence.
To reduce or avoid the kernel reevaluations, various efficient
caching techniques are also proposed [16]. Another obstacle caused by large scale is the increased classification difficulty, that is, data overlapping becomes more probable. We cannot prove that it is inevitable, but it typically happens. Assume we draw n numbers between 1 and 100 from a uniform distribution; our chances of drawing a number close to 100 improve with increasing values of n, even though the expected mean of the draws is invariant [2]. The checkerboard experiment in [29] is an intuitive example. This is especially true for real world data, either because of weak features (we mean features that are less discriminative) or because of noise. With large scale data, the samples in the overlapping area might be so many that the samples violating the KKT conditions become abundant. This means the SMO algorithm might need more iterations to converge.
Generally, the existing algorithmic approaches have not been able to tackle very large data sets, whereas under-sampling methods, e.g., active learning, are feasible. With unlabelled data, active learning selects a well-chosen subset of data to label so as to reduce the labor of manual annotation [24]. With large scale labelled data, active learning can also be used to reduce the scale of the training data [21]. The key issue of active learning is how to choose the most "valuable" samples. Informative sampling is a popular criterion; that is, the samples closest to the boundary or maximally violating the KKT conditions (the misclassified samples) are preferred [24, 26]. Active learning usually proceeds in an iterative style. It requires an initial (usually randomly selected) data set to obtain an estimate of the boundary. The samples selected in the following iterations depend on this initial boundary. In addition, active learning cannot work like the decomposition approach, which stops only when all the samples satisfy the KKT conditions. This implies a potential danger: if the initial data are selected improperly, the algorithm might not be able to find the suitable hyperplane. Thus, another criterion, i.e., representativeness, must be
considered. Here, "representative" refers to the ability to
characterize the data distribution. Nguyen et al. [19] show
that the active learning method considering the representative
criterion will achieve better results. Specifically for
SVMs, pre-clustering is proposed to estimate the data distribution
before the under-sampling [31, 3, 30]. Similar ideas
of representative sampling appear in [5, 12].
3.2 The Imbalanced Data

The reason why general machine learning systems suffer performance loss with imbalanced data is not yet clear [23, 28], but the analysis for SVMs seems relatively straightforward. Akbani et al. have summarized three possible causes for SVMs [2]: (a) positive samples lie further from the ideal boundary, (b) the weakness of the soft-margin SVMs, and (c) the imbalanced support vector ratio. Of these causes, in our opinion, what really matters is the second one. The first cause is pointed out by Wu et al. [29]. This situation occurs when the data are linearly separable and the imbalance is caused by insufficient sampling of the minority class; only in this case does the "ideal" boundary make sense. As for the third cause, Akbani et al. have pointed out that it plays a minor role because of the constraint $\alpha^T y = 0$ on the Lagrange multipliers [2].

The second cause states that the soft-margin SVMs have an inherent weakness in handling imbalanced data. We find that whether the imbalance has negative effects on SVMs depends on the linear separability of the data. For linearly separable data, the imbalance has only tiny effects on SVMs, since all the slack variables of (1) tend to be zero (unless $C$ is so small that the maximization of the margin dominates the objective); as a result, there is no contradiction between the capacity of the SVMs and the empirical error. Unfortunately, linearly non-separable data occur often. The SVMs then has to achieve a tradeoff between maximizing the margin and minimizing the empirical error. For imbalanced data, the majority class outnumbers the minority one in the overlapping area. To reduce the overwhelming errors of misclassifying the majority class, the optimal hyperplane will inevitably be skewed toward the minority. In the extreme, if $C$ is not very large, SVMs simply learns to classify everything as negative because that makes the "margin" the largest, with zero cumulative error on the abundant negative examples. The only tradeoff is the small amount of cumulative error on the few positive examples, which does not count for much.
Several variants of SVMs have been adopted to solve the problem of imbalance. One choice is the so-called one-class SVMs, which uses only positive examples for training. Without using the information of the negative samples, it is usually difficult to achieve as good a result as that of a binary SVMs classifier [18]. Using different penalty constants $C^+$ and $C^-$ for the positive and negative examples has been reported to be effective [27, 17]. However, Wu et al. point out that the effectiveness of this method is limited [29]. The explanation of Wu is based on the KKT condition $\alpha^T y = 0$, which imposes an equal total influence from the positive and negative support vectors. We evaluate this method and the result shows that tuning $\frac{C^+}{C^-}$ does work (details are given in Section 5). We find that whether this method works also depends on the linear separability of the data. For linearly separable data, tuning $\frac{C^+}{C^-}$ has little effect, since the penalty constants are useless with the zero-valued slack variables. However, if the data are linearly non-separable, tuning $\frac{C^+}{C^-}$ does change the position of the separating hyperplane. The method to modify
the kernel matrix is also proposed to improve SVMs for
imbalanced data [29]. A possible drawback of this type approach
is its high computational costs.
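The effect of class-dependent penalties discussed above is easy to reproduce. In the sketch below (our toy data and settings, not the paper's experiment), scikit-learn's class_weight option plays the role of the $C^+/C^-$ ratio by rescaling $C$ for the positive class.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1, (1000, 2)), rng.normal(1.5, 1, (50, 2))])
y = np.array([-1] * 1000 + [1] * 50)            # overlapping classes, 20:1 imbalance

plain = SVC(C=1.0, kernel="linear").fit(X, y)
tuned = SVC(C=1.0, kernel="linear", class_weight={1: 20}).fit(X, y)  # C+ = 20 * C-
for name, clf in [("equal C", plain), ("C+/C- = 20", tuned)]:
    recall = (clf.predict(X[y == 1]) == 1).mean()
    print(f"{name:12s} positive recall = {recall:.2f}")
```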
OVERALL APPROACH
The proposed approach is named Support Cluster Machines
(SCMs). We first partition the negative samples into
disjoint clusters, then train an initial SVMs model using
the positive samples and the representatives of the negative
clusters. With the global picture of the initial SVMs, we can
approximately identify the support vectors and non-support
vectors. A shrinking technique is then used to remove the
samples which are most probably not support vectors. This procedure of clustering and shrinking is performed iteratively several times until some stop criterion is satisfied. With such a coarse-to-fine procedure, the representative and informative mechanisms are incorporated. There are four key issues in the meta-algorithm of SCMs: (a) how to get the partition of the training data, (b) how to get the representative for each cluster, (c) how to safely remove the non-support-vector samples, and (d) when to stop the iteration procedure. Though similar ideas have been proposed to speed up SVMs in [30, 3, 31], no theoretical analysis of the idea has been provided. In the following, we present an in-depth analysis of this type of approach and attempt to improve the algorithm under theoretical guidance.
4.1 Theoretical Analysis

Suppose $\{\pi_c\}_{c=1}^k$ is a partition of the training set such that the samples within the same cluster have the same class label. If we construct a representative $u_c$ for each cluster $\pi_c$, we can obtain two novel models of SVMs.

The first one is named Support Cluster Machines (SCMs). It treats each representative as a sample; thus the data size is reduced from $n$ to $k$. This amounts to the classification of the clusters, which is where the name SCMs comes from. The new training set is
$$\mathcal{D}' = \{(u_c, y'_c) \mid u_c \in \mathbb{R}^N,\ y'_c \in \{1,-1\},\ c = 1,\ldots,k\},$$
in which $y'_c$ equals the labels of the samples within $\pi_c$. We define the dual problem of support cluster machines as
$$\min_{\alpha'}\ G'(\alpha') = \frac{1}{2}\alpha'^T Q'\alpha' - e'^T\alpha' \quad \text{subject to}\quad 0 \le \alpha'_i \le |\pi_i|C,\ i = 1,\ldots,k,\quad \alpha'^T y' = 0, \qquad (6)$$
where $\alpha'$ is a vector of size $k$ with components $\alpha'_i$ corresponding to $u_i$, $|\pi_i|C$ is the upper bound for $\alpha'_i$, $e'$ is a $k$-dimensional vector of all ones, and $Q'$ is a $k\times k$ positive semi-definite matrix with $Q'_{ij} = y'_i y'_j\langle\phi(u_i),\phi(u_j)\rangle$.

Another one is named Duplicate Support Vector Machines (DSVMs). Different from SCMs, it does not reduce the size of the training set. Instead, it replaces each sample $x_i$ with the representative of the cluster that $x_i$ belongs to. Thus, the samples within the same cluster are duplicates, which is why it is named DSVMs. The training set is
$$\tilde{\mathcal{D}} = \{(\tilde{x}_i, \tilde{y}_i) \mid x_i \in \mathcal{D};\ \text{if } x_i \in \pi_c,\ \tilde{x}_i = u_c \text{ and } \tilde{y}_i = y_i\},$$
and the corresponding dual problem is defined as
$$\min_{\alpha}\ \tilde{G}(\alpha) = \frac{1}{2}\alpha^T\tilde{Q}\alpha - e^T\alpha \quad \text{subject to}\quad 0 \le \alpha_i \le C,\ i = 1,\ldots,n,\quad \alpha^T y = 0, \qquad (7)$$
where $\tilde{Q}$ is an $n\times n$ positive semi-definite matrix with $\tilde{Q}_{ij} = \tilde{y}_i\tilde{y}_j\langle\phi(\tilde{x}_i),\phi(\tilde{x}_j)\rangle$. We have the following theorem, which states that (6) is, in a certain sense, equivalent to (7):

Theorem 2. With the above definitions of the SCMs and the DSVMs, if $\alpha'^*$ and $\tilde{\alpha}^*$ are their optimal solutions respectively, the relation $G'(\alpha'^*) = \tilde{G}(\tilde{\alpha}^*)$ holds. Furthermore, any $\alpha' \in \mathbb{R}^k$ satisfying $\{\alpha'_c = \sum_{x_i\in\pi_c}\tilde{\alpha}^*_i,\ c = 1,\ldots,k\}$ is the optimal solution of SCMs. Inversely, any $\alpha \in \mathbb{R}^n$ satisfying $\{\sum_{x_i\in\pi_c}\alpha_i = \alpha'^*_c,\ c = 1,\ldots,k\}$ and the constraints of (7) is the optimal solution of DSVMs.
The proof is in Appendix A. Theorem 2 shows that solving the SCMs is equivalent to solving a quadratic programming problem of the same scale as that of the SVMs in (2). Comparing (2) and (7), we can find that only the Hessian matrix is different. Thus, to estimate the approximation from the SCMs of (6) to the SVMs of (2), we only need to analyze the stability of the quadratic programming model in (2) when the Hessian matrix varies from $Q$ to $\tilde{Q}$. Daniel has presented a study on the stability of the solution of definite quadratic programming, which requires that both $Q$ and $\tilde{Q}$ be positive definite [7]. However, in our situation, $Q$ is usually positive definite and $\tilde{Q}$ is not (because of the duplications). We develop a novel theorem for this case. If we define $\Delta = \|Q - \tilde{Q}\|$, where $\|\cdot\|$ denotes the Frobenius norm of a matrix, the value of $\Delta$ measures the size of the perturbation between $Q$ and $\tilde{Q}$. We have the following theorem:

Theorem 3. If $Q$ is positive definite and $\Delta = \|Q - \tilde{Q}\|$, let $\alpha^*$ and $\tilde{\alpha}^*$ be the optimal solutions to (2) and (7) respectively; then
$$\|\tilde{\alpha}^* - \alpha^*\| \le \frac{\tilde{m}C\Delta}{\lambda}, \qquad G(\tilde{\alpha}^*) - G(\alpha^*) \le \frac{(m^2 + \tilde{m}^2)C^2\Delta}{2},$$
where $\lambda$ is the minimum eigenvalue of $Q$, and $m$ and $\tilde{m}$ indicate the numbers of support vectors for (2) and (7) respectively.

The proof is in Appendix B. This theorem shows that the approximation from (2) to (7) is bounded by $\Delta$. Note that this does not mean that with minimal $\Delta$ we are sure to get the best approximate solution. For example, adopting the support vectors of (1) to construct $\tilde{Q}$ will yield the exact optimal solution of (2), but the corresponding $\Delta$ is not necessarily minimal. However, we do not know which samples are support vectors beforehand. What we can do is to minimize the potential maximal distortion between the solutions of (2) and (7).
Now we consider the next problem, that is, given the partition $\{\pi_c\}_{c=1}^k$, what are the best representatives $\{u_c\}_{c=1}^k$ for the clusters in the sense of approximating $Q$? In fact, we have the following theorem:

Theorem 4. Given the partition $\{\pi_c\}_{c=1}^k$, the $\{u_c\}_{c=1}^k$ satisfying
$$\phi(u_c) = \frac{\sum_{x_i\in\pi_c}\phi(x_i)}{|\pi_c|}, \quad c = 1,\ldots,k \qquad (8)$$
will make $\Delta = \|Q - \tilde{Q}\|$ minimum.

The proof is in Appendix C. This theorem shows that, given the partition, $\phi(u_c) = m_c$ yields the best approximation between $\tilde{Q}$ and $Q$.
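Taken together, Theorems 2 and 4 say that the reduced problem (6) only needs the block-averaged Gram matrix of the centroids plus the cluster sizes as per-cluster upper bounds. The sketch below is our own paraphrase of that construction: it uses a crude fixed partition of the majority class in place of kernel k-means, and it emulates the bound $|\pi_i|C$ through scikit-learn's per-sample C rescaling (sample_weight); neither choice is prescribed by the paper.

```python
import numpy as np
from sklearn.svm import SVC

def centroid_kernel(K, labels, k):
    """K'_{hl} = <m_h, m_l>: the block average of K over clusters h and l."""
    idx = [np.where(labels == c)[0] for c in range(k)]
    Kc = np.array([[K[np.ix_(idx[h], idx[l])].mean() for l in range(k)]
                   for h in range(k)])
    return Kc, np.array([len(i) for i in idx])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(3, 1, (30, 2))])
y = np.array([-1] * 300 + [1] * 30)                     # imbalanced toy data
K = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))

k_neg = 20                                              # clusters for the majority class
neg_labels = np.arange(300) % k_neg                     # stand-in for kernel k-means
labels = np.concatenate([neg_labels, k_neg + np.arange(30)])   # each positive kept as its own cluster
y_cluster = np.array([-1] * k_neg + [1] * 30)

Kc, sizes = centroid_kernel(K, labels, k_neg + 30)
scm = SVC(C=1.0, kernel="precomputed")
scm.fit(Kc, y_cluster, sample_weight=sizes)             # emulates the upper bound |pi_c| * C
print("support clusters:", len(scm.support_))
```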
Here we come to the last question, i.e., which partition $\{\pi_c\}_{c=1}^k$ will make $\Delta = \|Q - \tilde{Q}\|$ minimum. To make the problem clearer, we expand $\Delta^2$ as
$$\|Q - \tilde{Q}\|^2 = \sum_{h=1}^{k}\sum_{l=1}^{k}\sum_{x_i\in\pi_h}\sum_{x_j\in\pi_l}\big(\langle\phi(x_i),\phi(x_j)\rangle - \langle m_h, m_l\rangle\big)^2. \qquad (9)$$
There are approximately $k^n/k!$ such partitions of the data set. An exhaustive search for the best partition is impossible. Recalling that (9) is similar to (4), we have the following theorem, which states their relaxed equivalence.

Theorem 5. The relaxed optimal solution of minimizing (9) and the relaxed optimal solution of minimizing (4) are equivalent.

The proof can be found in Appendix D. Minimizing $\Delta$ amounts to finding a low-rank matrix approximating $Q$. Ding et al. have pointed out the relaxed equivalence between kernel PCA and kernel k-means in [11]. Note that minimizing (9) is different from kernel PCA in that it carries an additional block-wise constant constraint; that is, the value of $\tilde{Q}_{ij}$ must be invariant with respect to the cluster $\pi_h$ containing $\tilde{x}_i$ and the cluster $\pi_l$ containing $\tilde{x}_j$. With Theorem 5 we know that kernel k-means is a suitable method to obtain the partition of the data.

According to the above results, the SCMs essentially finds an approximate solution to the original SVMs by smoothing the kernel matrix $K$ (or the Hessian matrix $Q$). Fig. 1 illustrates the procedure of smoothing the kernel matrix via clustering. Hence, by solving a smaller quadratic programming problem, the position of the separating hyperplane can be roughly determined.
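This kernel-smoothing view can be checked numerically: replacing $K$ by its block-wise average $\tilde{K}$ and measuring $\Delta = \|K - \tilde{K}\|_F$ shows how the perturbation appearing in Theorem 3 shrinks as the number of clusters grows. The sketch below uses fixed, arbitrary partitions purely for illustration; a proper kernel k-means partition would give a smaller $\Delta$ at the same $k$.

```python
import numpy as np

def block_smooth(K, labels, k):
    """Replace every entry K_ij by the block average over the clusters of i and j
    (this is the smoothed kernel K~ underlying the matrix Q~)."""
    idx = [np.where(labels == c)[0] for c in range(k)]
    Kt = K.copy()
    for h in range(k):
        for l in range(k):
            Kt[np.ix_(idx[h], idx[l])] = K[np.ix_(idx[h], idx[l])].mean()
    return Kt

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 2))
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))

for k in (5, 20, 80):
    labels = np.arange(K.shape[0]) % k        # crude stand-in for kernel k-means
    delta = np.linalg.norm(K - block_smooth(K, labels, k))
    print(f"k = {k:3d}   Delta = ||K - K~||_F = {delta:.1f}")
```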
Figure 1: (a) 2D data distribution, (b) the visualization of the kernel matrix Q, (c) the kernel matrix Q after re-ordering the entries so that the samples belonging to the same cluster come together, (d) the approximate kernel matrix $\tilde{Q}$ obtained by replacing each sample with the corresponding centroid. (Panels not reproduced.)
4.2 Kernel-based Graph Clustering

In previous work, k-means [30], BIRCH [31] and PDDP [3] have been used to obtain the partition of the data. None of them performs clustering in the feature space, even though the SVMs works in the feature space. This is somewhat unnatural. Firstly, recalling that the kernel $K(\cdot,\cdot)$ usually implies an implicit nonlinear mapping from the input space to the feature space, the optimal partition of the input space is not necessarily the optimal one of the feature space. Take k-means as an example: because the squared Euclidean distance is used as the distortion measure, the clusters must be separated by piece-wise hyperplanes (i.e., a Voronoi diagram). However, these separating hyperplanes are no longer hyperplanes in the feature space under a nonlinear mapping $\phi(\cdot)$. Secondly, the k-means approach cannot capture complex structure in the data. As shown in Fig. 2, the negative class has a ring shape in the input space. If k-means is used, the centroids of the positive and negative classes might overlap, whereas in the feature space the kernel k-means might obtain separable centroids.

Figure 2: The left and right figures show the data distribution in the input space and the feature space respectively. The two classes are indicated by squares and circles. Each class is grouped into one cluster, and the solid mark indicates the centroid of the class. (Plots not reproduced.)

Several factors limit the application of kernel k-means to large scale data. Firstly, it is almost impossible to store the whole kernel matrix $K$ in memory; e.g., for $n = 100\,000$ we still need 20 gigabytes of memory, taking the symmetry into account. Secondly, the kernel k-means relies heavily on an effective initialization to achieve good results, and we do not have such a sound method yet. Finally, the computational cost of the kernel k-means might exceed that of SVMs, and therefore we would lose the benefits of under-sampling. Dhillon et al. recently proposed a multilevel kernel k-means method [9], which seems to cater to our requirements. The approach is based on the equivalence between graph clustering and kernel k-means. It incorporates coarsening and initial partitioning phases to obtain a good initial clustering. Most importantly, the approach is extremely efficient: it can handle a graph with 28,294 nodes and 1,007,284 edges in several seconds. Therefore, we adopt this approach here; the detailed description can be found in [9]. In the following, we focus on how to address the difficulty of storing a large scale kernel matrix.

Theorem 1 states that kernel k-means is equivalent to a type of graph clustering. Kernel k-means focuses on grouping data so that their average distance from the centroid is minimum, while graph clustering aims to minimize the average pair-wise distance among the data. Central grouping and pair-wise grouping are two different views of the same approach. From the perspective of pair-wise grouping, we can expect that two samples with a large distance will not belong to the same cluster in the optimal solution. Thus, we add the constraint that two samples whose distance is large enough are not linked by an edge, that is, we transform the dense graph into a sparse graph. This procedure is common practice in spectral clustering and manifold embedding. Usually, two methods are widely used for this purpose, i.e., k-nearest neighbors and the $\epsilon$-ball. Here, we adopt the $\epsilon$-ball approach. Concretely, the edges with weight $A_{ij} < \epsilon$ are removed from the original graph, where the parameter $\epsilon$ is pre-determined. By transforming a dense graph into a sparse graph, we only need to store the sparse affinity matrix instead of the original kernel matrix. Nevertheless, we have to point out that the time complexity of constructing the sparse graph is $O(n^2)$ for a data set with $n$ examples, which is the efficiency bottleneck of the current implementation. With the sparse graph, each iteration of the multilevel kernel k-means costs $O(n_z)$ time, where $n_z$ denotes the number of nonzero entries in the kernel matrix.
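A direct way to realise the $\epsilon$-ball sparsification is sketched below (our own minimal version): the dense Gram matrix is still formed once, which is exactly the $O(n^2)$ bottleneck mentioned above, and entries below $\epsilon$ are then dropped and stored in a SciPy sparse matrix. The data, gamma and the value of $\epsilon$ are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 8))
gamma, eps = 0.1, 0.6

K = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))  # dense Gram matrix, O(n^2)
A = np.where(K >= eps, K, 0.0)      # drop edges with affinity below eps
np.fill_diagonal(A, 0.0)            # no self-loops in the affinity graph
A = csr_matrix(A)                   # only the surviving edges are stored
print(f"kept {A.nnz} of {K.size} possible edges "
      f"({100.0 * A.nnz / K.size:.1f}%)")
```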
4.3 Support Cluster Machines

According to Theorem 4, choosing the centroid of each cluster as its representative will yield the best approximation. However, the explicit form of $\phi(\cdot)$ is unknown. We do not know the exact pre-images of $\{m_c\}_{c=1}^k$; what we can get are the dot products between the centroids,
$$\langle m_h, m_l\rangle = \frac{1}{|\pi_h||\pi_l|}\sum_{x_i\in\pi_h}\sum_{x_j\in\pi_l}\langle\phi(x_i),\phi(x_j)\rangle,$$
which requires $O(n^2)$ cost. Then the pre-computed kernel SVMs can be used. The pre-computed kernel SVMs takes the kernel matrix $K'$ as input and saves the indices of the support vectors in the model [15]. To classify an incoming sample $x$, we have to calculate the dot product between $x$ and all the samples in the support clusters, e.g., $\pi_c$ (if $m_c$ is a support vector, we define the cluster $\pi_c$ as a support cluster):
$$\langle x, m_c\rangle = \frac{1}{|\pi_c|}\sum_{x_i\in\pi_c}\langle x, x_i\rangle.$$
We need another $O(nm)$ cost to predict all the training samples if there are $m$ samples in the support clusters. This is unacceptable for large scale data. To reduce the kernel re-evaluations, we adopt the same method as [3], i.e., selecting a pseudo-center of each cluster as its representative,
$$u_c = \arg\min_{x_i\in\pi_c}\Big\|\phi(x_i) - \frac{1}{|\pi_c|}\sum_{x_j\in\pi_c}\phi(x_j)\Big\|^2,$$
which can be directly obtained by
$$u_c = \arg\max_{x_i\in\pi_c}\sum_{x_j\in\pi_c}\langle\phi(x_i),\phi(x_j)\rangle. \qquad (10)$$
Thus, the kernel evaluation within the training procedure requires $O(\sum_{c=1}^{k}|\pi_c|^2 + k^2)$ time, which can be further reduced by the probabilistic speedup proposed by Smola [25]. The kernel evaluation for predicting the training samples is reduced from $O(nm)$ to $O(ns)$, where $s$ indicates the number of support clusters.
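Equation (10) translates into a few lines of code once the Gram matrix and the cluster assignments are available; the helper below (our sketch) returns, for each cluster, the index of the member with the largest within-cluster kernel sum.

```python
import numpy as np

def pseudo_centers(K, labels, k):
    """For each cluster, return the index of the member x_i maximising
    sum over cluster members of K(x_i, x_j), as in Eq. (10)."""
    reps = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        reps.append(idx[int(K[np.ix_(idx, idx)].sum(axis=1).argmax())])
    return np.array(reps)

K = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
print(pseudo_centers(K, np.array([0, 0, 1]), k=2))   # -> [0 2]
```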
4.4 Shrinking Techniques

With the initial SCMs, we can remove the samples that are not likely to be support vectors. However, there is no theoretical guarantee for the security of the shrinking. In Fig. 3, we give a simple example showing that the shrinking might not be safe. In the example, if the samples outside the margin between the support hyperplanes are to be removed, case (a) will remove the true support vectors while case (b) will not. The example shows that the security depends on whether the hyperplane of the SCMs is parallel to the true separating hyperplane. However, we do not know the direction of the true separating hyperplane before the classification. Therefore, what we can do is to adopt a sufficient initial number of clusters so that the solution of the SCMs approximates the original optimal solution well enough. Specifically, for large scale imbalanced data, the samples satisfying the following condition will be removed from the training set:
$$|\langle w, \phi(x)\rangle + b| > \beta, \qquad (11)$$
where $\beta$ is a predefined parameter.

Figure 3: (a) Each class is grouped into one cluster, (b) each class is grouped into two clusters. The solid mark represents the centroid of the corresponding class. The solid lines indicate the support hyperplanes yielded by the SCMs and the dotted lines indicate the true support hyperplanes. (Plots not reproduced.)
4.5 The Algorithm

Yu [31] and Boley [3] have adopted different stop criteria. In Yu et al.'s approach, the algorithm stops when each cluster has only one sample, whereas Boley et al. limit the maximum number of iterations by a fixed parameter. Here, we propose two novel criteria especially suitable for imbalanced data. The first one is to stop as soon as the ratio of positive and negative samples becomes relatively balanced. Another choice is the Neyman-Pearson criterion, that is, minimizing the total error rate subject to a constraint that the miss rate of the positive class is less than some threshold. Thus, once the miss rate of the positive class exceeds some threshold, we stop the algorithm.

The overall approach is illustrated in Algorithm 1. With large scale balanced data, we carry out the data clustering for both classes separately, whereas with imbalanced data the clustering and shrinking are conducted only on the majority class. The computational complexity is dominated by kernel evaluation; therefore, it will not exceed $O((n^-)^2 + (n^+)^2)$, where $n^-$ and $n^+$ indicate the numbers of negative and positive examples respectively.
Algorithm 1: Support Cluster Machines
Input: Training data set D = D+ ∪ D-
Output: Decision function f
repeat
  1. {pi_c^+, m_c^+} for c = 1..k+  =  KernelKMeans(D+)
  2. {pi_c^-, m_c^-} for c = 1..k-  =  KernelKMeans(D-)
  3. D' = {m_c^+} for c = 1..k+  ∪  {m_c^-} for c = 1..k-
  4. f' = SVMTrain(D')
  5. f'(D) = SVMPredict(f', D)
  6. D = D+ ∪ D- = Shrinking(f'(D))
until stop criterion is true
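The listing below is our loose Python paraphrase of Algorithm 1 for the imbalanced case: clustering and shrinking are applied to the negatives only, the simple ratio-based stop rule of Section 4.5 is used, and the shrinking threshold is written as beta as in Eq. (11). It reuses the kernel_kmeans and pseudo_centers sketches given earlier, and adds an iteration cap that the paper does not specify.

```python
import numpy as np
from sklearn.svm import SVC

def scm_train(K, y, C=1.0, k_neg=50, beta=1.3, max_ratio=5.0, max_iter=10):
    """Rough SCM loop on a precomputed Gram matrix K with labels y in {-1, +1}."""
    active = np.arange(len(y))                       # indices still in the training set
    for _ in range(max_iter):
        pos = active[y[active] == 1]
        neg = active[y[active] == -1]
        if len(neg) <= max_ratio * len(pos):
            break                                    # ratio-based stop criterion
        Kn = K[np.ix_(neg, neg)]
        labels = kernel_kmeans(Kn, min(k_neg, len(neg)))
        uniq = np.unique(labels)
        labels = np.searchsorted(uniq, labels)       # compact labels 0..len(uniq)-1
        reps = neg[pseudo_centers(Kn, labels, len(uniq))]   # pseudo-centres, Eq. (10)
        train = np.concatenate([pos, reps])          # positives + negative representatives
        clf = SVC(C=C, kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
        scores = clf.decision_function(K[np.ix_(neg, train)])
        neg = neg[np.abs(scores) <= beta]            # shrinking rule (11) on negatives only
        active = np.concatenate([pos, neg])
    clf = SVC(C=C, kernel="precomputed")
    clf.fit(K[np.ix_(active, active)], y[active])    # final model on the reduced set
    return clf, active
```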
EXPERIMENTS
The experiments on both the synthetic and the TRECVID
data are carried out. The experiments on synthetic data
are used to analyze the effects of large scale and imbalance
on SVMs and the experiments on TRECVID data serve to
evaluate the effectiveness and efficiency of SCMs. The multilevel
kernel graph partitioning code graclus [9] is adopted for
data clustering and the well-known LibSVM software [15] is
used in our experiments. All our experiments are run on a Pentium 4 3.00 GHz machine with 1 GB of memory.
5.1 Synthetic Data Set
We generate two-dimensional data for the convenience of observation. Let $x$ be a random variable uniformly distributed in $[0, \pi]$. The data are generated by
$$\mathcal{D}^+ = \{(x, y) \mid y = \sin(x) - \delta + 0.7\,[\mathrm{rand}(0,1) - 1],\ x \in [0, \pi]\}$$
$$\mathcal{D}^- = \{(x, y) \mid y = -\sin(x) + 1 + 0.7\,\mathrm{rand}(0,1),\ x \in [0, \pi]\},$$
where $\mathrm{rand}(0,1)$ generates random numbers uniformly distributed between 0 and 1, and $\delta$ is a parameter controlling the overlapping ratio of the two classes. Fig. 4 and Fig. 5 show some examples of the synthetic data. We use the linear kernel function in all the experiments on synthetic data.
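A generator for this synthetic data might look as follows. The interval $[0, \pi]$ and the name delta for the overlap parameter are our reconstruction of symbols garbled in the source text, so treat them as assumptions.

```python
import numpy as np

def synth(n_pos, n_neg, delta, rng):
    """Two-dimensional toy data in the spirit of the paper's synthetic set."""
    xp = rng.uniform(0, np.pi, n_pos)
    yp = np.sin(xp) - delta + 0.7 * (rng.uniform(0, 1, n_pos) - 1)   # positive class
    xn = rng.uniform(0, np.pi, n_neg)
    yn = -np.sin(xn) + 1 + 0.7 * rng.uniform(0, 1, n_neg)            # negative class
    X = np.column_stack([np.concatenate([xp, xn]), np.concatenate([yp, yn])])
    y = np.array([1] * n_pos + [-1] * n_neg)
    return X, y

rng = np.random.default_rng(0)
X1, y1 = synth(1000, 1000, delta=1.5, rng=rng)   # linearly separable (D1)
X2, y2 = synth(1000, 1000, delta=0.6, rng=rng)   # overlapping (D2)
```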
5.1.1 The Effects of Scale

We generate two types of balanced data, i.e., $n^+ = n^-$, but one ($\mathcal{D}_1 = \mathcal{D}(\delta = 1.5)$) is linearly separable and the other ($\mathcal{D}_2 = \mathcal{D}(\delta = 0.6)$) is not, as shown in Fig. 4. We observe the difference in the behavior of the time costs for $\mathcal{D}_1$ and $\mathcal{D}_2$ when the scale increases. With the same parameter settings, the time costs of optimizing the objective for $\mathcal{D}_1$ and $\mathcal{D}_2$ are shown in Table 1, from which we can draw two conclusions: (a) the time costs increase with the scale, and (b) at the same scale, the linearly non-separable data cost more time to converge.

Table 1: The effects of scale and overlapping on the time costs of training SVMs (in seconds).
n+ + n-     200    2000   4000   8000    20000   40000    80000
time(D1)    0.01   0.03   0.04   0.07    0.23    0.63     1.32
time(D2)    0.02   0.70   3.24   14.01   58.51   201.07   840.60

Figure 4: (a) example of non-overlapped balanced data sets, (b) example of overlapped balanced data sets. (Plots not reproduced.)

Figure 5: (a) example of non-overlapped imbalanced data sets, (b) example of overlapped imbalanced data sets. (Plots not reproduced.)
5.1.2 The Effects of Imbalance

We generate two types of imbalanced data, i.e., $n^+ \ll n^-$, but one ($\mathcal{D}_1 = \mathcal{D}(\delta = 1.5)$) is linearly separable and the other ($\mathcal{D}_2 = \mathcal{D}(\delta = 0.6)$) is not, as shown in Fig. 5. We observe the difference in the effects of imbalance for the linearly separable data $\mathcal{D}_1$ and the linearly non-separable data $\mathcal{D}_2$. For space limitations, we do not describe the detailed results here but only present the major conclusions. For linearly separable data, SVMs can find the non-skewed hyperplane if $C$ is not too small; in this situation, tuning $\frac{C^+}{C^-}$ is meaningless. For linearly non-separable data, the boundary will be skewed toward the positive class if $C^+ = C^-$; in this case, increasing $\frac{C^+}{C^-}$ does "push" the skewed separating hyperplane toward the negative class. For both $\mathcal{D}_1$ and $\mathcal{D}_2$, if $C$ is too small, underfitting occurs, that is, the SVMs simply classifies all the samples into the negative class.
5.2 TRECVID Data Set
5.2.1 Experimental Setup
In this section, we evaluate the proposed approach on the high level feature extraction task of TRECVID [1]. Four concepts, including "car", "maps", "sports" and "waterscape", are chosen for modelling from the data sets. The development data of TRECVID 2005 are employed and divided into a training set and a validation set of equal size. The detailed statistics of the data are summarized in Table 2. In our experiments, the global 64-dimension color autocorrelogram feature is used to represent the visual content of each image.

Table 2: The details of the training set and validation set of TRECVID 2005.
Concept       |D_train|              |D_val|
              Positive   Negative    Positive   Negative
Car           1097       28881       1097       28881
Maps          296        30462       296        30463
Sports        790        29541       791        29542
Waterscape    293        30153       293        30154
Conforming to the convention of TRECVID, average precision (AP) is chosen as the evaluation criterion. In total, five algorithms have been implemented for comparison:
- Whole: all the negative examples are used.
- Random: random sampling of the negative examples.
- Active: active sampling of the negative examples.
- SCMs I: SCMs with k-means in the input space.
- SCMs: SCMs with kernel k-means.
In the Active method, we first randomly select a subset of negative examples. With this initial set, we train an SVMs model and use this model to classify the whole training data set. Then the maximally misclassified negative examples are added to the training set. This procedure iterates until the ratio between the negative and the positive examples exceeds five. Since both the Random and Active methods depend on the initially chosen random data set, we repeat each of them ten times and report their average performance for comparison. Both the SCMs I and SCMs methods adopt the Gaussian kernel for the SVMs classification. The only difference is that SCMs I performs data clustering with k-means in the input space while SCMs performs it in the feature space.
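For concreteness, the Active baseline described above can be sketched as the following loop (our paraphrase; the batch size, kernel and the use of scikit-learn are illustrative choices, not the authors' implementation): negatives predicted most confidently as positive are added until the negative-to-positive ratio exceeds five.

```python
import numpy as np
from sklearn.svm import SVC

def active_baseline(X, y, C=1.0, batch=100, max_ratio=5.0, seed=0):
    """Return the indices selected by an informative-sampling baseline."""
    rng = np.random.default_rng(seed)
    pos = np.where(y == 1)[0]
    neg = np.where(y == -1)[0]
    chosen = rng.choice(neg, size=min(batch, len(neg)), replace=False)
    while len(chosen) <= max_ratio * len(pos):
        train = np.concatenate([pos, chosen])
        clf = SVC(C=C, kernel="rbf", gamma="scale").fit(X[train], y[train])
        rest = np.setdiff1d(neg, chosen)
        if len(rest) == 0:
            break
        scores = clf.decision_function(X[rest])
        worst = rest[np.argsort(-scores)[:batch]]   # negatives most misclassified (largest f(x))
        chosen = np.concatenate([chosen, worst])
    return np.concatenate([pos, chosen])
```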
5.2.2 Parameter Settings

Currently, the experiments focus on the comparative performance of the different approaches under the same parameter settings. Therefore, some of the parameters are heuristically determined and might not be optimal. The current implementation of SCMs involves the following parameter settings: (a) the Gaussian kernel is adopted and its parameters are selected via cross-validation; furthermore, the kernel function of the kernel k-means clustering is the same as that of the SVMs; (b) the threshold for transforming dense graphs into sparse ones is experimentally determined as $\epsilon = 0.6$; (c) the parameter of the shrinking technique is experimentally chosen as $\beta = 1.3$; (d) for SCMs, the data are imbalanced for each concept, so we only carry out data clustering for the negative class; therefore $k^+$ always equals $|\mathcal{D}^+|$ and $k^-$ is always chosen as $\frac{|\mathcal{D}^-|}{10}$; (e) we stop the iteration of SCMs when the number of negative examples is not more than five times that of the positive examples.
5.2.3 Experiment Results

The average performance and the time costs of the various approaches are given in Table 3 and Table 4 respectively. We can see that both the Random and Active methods use less time than the others, but their performance is not as good. Furthermore, the SCMs achieves performance comparable to that of Whole while using less time.
Table 3: The average performance of the approaches on the chosen concepts, measured by average precision.
Concept       Whole   Random   Active   SCMs I   SCMs
Car           0.196   0.127    0.150    0.161    0.192
Maps          0.363   0.274    0.311    0.305    0.353
Sports        0.281   0.216    0.253    0.260    0.283
Waterscape    0.269   0.143    0.232    0.241    0.261
Table 4: The average time costs of the approaches on the chosen concepts (in seconds).
Concept       Whole    Random   Active   SCMs I   SCMs
Car           4000.2   431.0    1324.6   1832.0   2103.4
Maps          402.6    35.2     164.8    234.3    308.5
Sports        1384.5   125.4    523.8    732.5    812.7
Waterscape    932.4    80.1     400.3    504.0    621.3
Note that SCMs I also achieves satisfying results. This might be due to the Gaussian kernel, in which $e^{-\|x-y\|^2}$ is monotonic in $\|x-y\|^2$; therefore, the order of the pair-wise distances is the same in both the input space and the feature space, which perhaps leads to similar clustering results.
CONCLUSIONS
In this paper, we have investigated the effects of scale and
imbalance on SVMs. We highlight the role of data overlapping
in this problem and find that SVMs have no difficulties with linearly separable large scale imbalanced
We propose a meta-algorithm named Support Cluster Machines
(SCMs) for effectively learning from large scale and
imbalanced data sets.
Different from the previous work,
we develop the theoretical justifications for the idea and
choose the technical component guided by the theoretical
results. Finally, experiments on both the synthetic and the
TRECVID data are carried out. The results support the
previous analysis and show that the SCMs are efficient and
effective while dealing with large scale imbalanced data sets.
However, as a pilot study, there is still some room for improvement
. Firstly, we have not incorporated caching techniques to avoid kernel re-evaluations; therefore, we have to recalculate the dot products online whenever they are required. Secondly, the parameters within the algorithms are currently selected heuristically, which depends on the tradeoff between efficiency and accuracy.
ACKNOWLEDGMENTS
We would like to thank the anonymous reviewers for their
insightful suggestions. We also thank Dr. Chih-Jen Lin for
the code of libSVM, Brian J. Kulis for the code of graclus
and National Institute of Standards and Technology for providing
the TRECVID data sets. Finally, special thanks go
to Dr. Ya-xiang Yuan for his helpful discussions on optimization
theory.
REFERENCES
[1] TREC Video Retrieval. National Institute of
Standards and Technology,
http://www-nlpir.nist.gov/projects/trecvid/.
[2] R. Akbani, S. Kwek, and N. Japkowicz. Applying Support Vector Machines to Imbalanced Datasets. In Proceedings of ECML'04, pages 39-50, 2004.
[3] D. Boley and D. Cao. Training Support Vector Machine using Adaptive Clustering. In Proceedings of the 2004 SIAM International Conference on Data Mining, April 2004.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
[5] K. Brinker. Incorporating Diversity in Active Learning with Support Vector Machines. In Proceedings of ICML'03, pages 59-66, 2003.
[6] N. V. Chawla, N. Japkowicz, and A. Kotcz. Editorial: Special Issue on Learning from Imbalanced Data Sets. SIGKDD Explor. Newsl., 6(1):1-6, 2004.
[7] J. W. Daniel. Stability of the Solution of Definite Quadratic Programs. Mathematical Programming, 5(1):41-53, December 1973.
[8] R. Datta, J. Li, and J. Z. Wang. Content-based Image Retrieval: Approaches and Trends of the New Age. In Proceedings of the ACM SIGMM workshop on MIR'05, pages 253-262, 2005.
[9] I. Dhillon, Y. Guan, and B. Kulis. A Fast Kernel-based Multilevel Algorithm for Graph Clustering. In Proceedings of ACM SIGKDD'05, pages 629-634, 2005.
[10] I. S. Dhillon, Y. Guan, and B. Kulis. A Unified View of Graph Partitioning and Weighted Kernel k-means. Technical Report TR-04-25, The University of Texas at Austin, Department of Computer Sciences, June 2004.
[11] C. Ding and X. He. K-means clustering via principal component analysis. In Proceedings of ICML'04, pages 29-36, 2004.
[12] K.-S. Goh, E. Y. Chang, and W.-C. Lai. Multimodal Concept-dependent Active Learning for Image Retrieval. In Proceedings of ACM MM'04, pages 564-571, 2004.
[13] A. G. Hauptmann. Towards a Large Scale Concept Ontology for Broadcast Video. In Proceedings of CIVR'04, pages 674-675, 2004.
[14] A. G. Hauptmann. Lessons for the Future from a Decade of Informedia Video Analysis Research. In Proceedings of CIVR'05, pages 1-10, 2005.
[15] C.-W. Hsu, C.-C. Chang, and C.-J. Lin. A Practical Guide to Support Vector Classification. 2005. Available at http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
[16] T. Joachims. Making Large-scale Support Vector Machine Learning Practical. Advances in kernel methods: support vector learning, pages 169-184, 1999.
[17] Y. Lin, Y. Lee, and G. Wahba. Support Vector Machines for Classification in Nonstandard Situations. Machine Learning, 46(1-3):191-202, 2002.
[18] L. M. Manevitz and M. Yousef. One-class SVMs for Document Classification. Journal of Machine Learning Research, 2:139-154, 2002.
[19] H. T. Nguyen and A. Smeulders. Active Learning Using Pre-clustering. In Proceedings of ICML'04, pages 79-86, 2004.
[20] E. Osuna, R. Freund, and F. Girosi. An Improved Training Algorithm for Support Vector Machines. In IEEE Workshop on Neural Networks and Signal Processing, September 1997.
[21] D. Pavlov, J. Mao, and B. Dom. Scaling-Up Support Vector Machines Using Boosting Algorithm. In Proceedings of ICPR'00, volume 2, pages 2219-2222, 2000.
[22] J. C. Platt. Fast Training of Support Vector Machines using Sequential Minimal Optimization. Advances in kernel methods: support vector learning, pages 185-208, 1999.
[23] R. C. Prati, G. E. A. P. A. Batista, and M. C. Monard. Class Imbalances versus Class Overlapping: an Analysis of a Learning System Behavior. In Proceedings of MICAI 2004, pages 312-321, 2004.
[24] G. Schohn and D. Cohn. Less is More: Active Learning with Support Vector Machines. In Proceedings of ICML'00, pages 839-846, 2000.
[25] A. J. Smola and B. Schölkopf. Sparse Greedy Matrix Approximation for Machine Learning. In Proceedings of ICML'00, pages 911-918, 2000.
[26] S. Tong and E. Chang. Support Vector Machine Active Learning for Image Retrieval. In Proceedings of ACM MM'01, pages 107-118, 2001.
[27] K. Veropoulos, N. Cristianini, and C. Campbell. Controlling the Sensitivity of Support Vector Machines. In Proceedings of IJCAI'99, 1999.
[28] G. M. Weiss and F. J. Provost. Learning When Training Data are Costly: The Effect of Class Distribution on Tree Induction. Journal of Artificial Intelligence Research (JAIR), 19:315-354, 2003.
[29] G. Wu and E. Y. Chang. KBA: Kernel Boundary Alignment Considering Imbalanced Data Distribution. IEEE Transactions on Knowledge and Data Engineering, 17(6):786-795, 2005.
[30] Z. Xu, K. Yu, V. Tresp, X. Xu, and J. Wang. Representative Sampling for Text Classification Using Support Vector Machines. In Proceedings of ECIR'03, pages 393-407, 2003.
[31] H. Yu, J. Yang, J. Han, and X. Li. Making SVMs Scalable to Large Data Sets using Hierarchical Cluster Indexing. Data Min. Knowl. Discov., 11(3):295-321, 2005.
APPENDIX
A. PROOF OF THEOREM 2

Firstly, we define $\hat{\alpha}$ which satisfies $\hat{\alpha}_c = \sum_{x_i\in\pi_c}\tilde{\alpha}^*_i$, $c = 1,\ldots,k$. It is easy to verify that $\hat{\alpha}$ is a feasible solution of SCMs. Secondly, we define $\bar{\alpha}$ satisfying $\bar{\alpha}_i = \frac{\alpha'^*_c}{|\pi_c|}$ if $x_i\in\pi_c$, $i = 1,\ldots,n$. It is easy to verify that $\bar{\alpha}$ is a feasible solution of DSVMs. According to the relation between $\mathcal{D}'$ and $\tilde{\mathcal{D}}$, we can obtain the following equation:
$$\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\tilde{\alpha}^*_i y_i\langle\phi(\tilde{x}_i),\phi(\tilde{x}_j)\rangle\,\tilde{\alpha}^*_j y_j - \sum_{i=1}^{n}\tilde{\alpha}^*_i
= \frac{1}{2}\sum_{h=1}^{k}\sum_{l=1}^{k}\hat{\alpha}_h y'_h\langle\phi(u_h),\phi(u_l)\rangle\,\hat{\alpha}_l y'_l - \sum_{h=1}^{k}\hat{\alpha}_h,$$
which means $\tilde{G}(\tilde{\alpha}^*) = G'(\hat{\alpha})$. Similarly, we can get $\tilde{G}(\bar{\alpha}) = G'(\alpha'^*)$. Using the fact that $\alpha'^*$ and $\tilde{\alpha}^*$ are the optimal solutions to SCMs and DSVMs respectively, we have $G'(\alpha'^*) \le G'(\hat{\alpha})$ and $\tilde{G}(\tilde{\alpha}^*) \le \tilde{G}(\bar{\alpha})$. Thus, the equation $G'(\alpha'^*) = \tilde{G}(\tilde{\alpha}^*)$ holds. For any $\alpha'\in\mathbb{R}^k$ satisfying $\{\alpha'_c = \sum_{x_i\in\pi_c}\tilde{\alpha}^*_i,\ c = 1,\ldots,k\}$, we know it is a feasible solution to SCMs and $G'(\alpha') = \tilde{G}(\tilde{\alpha}^*) = G'(\alpha'^*)$ holds, which means $\alpha'$ is the optimal solution of SCMs. Similarly, for any $\alpha\in\mathbb{R}^n$ satisfying $\{\sum_{x_i\in\pi_c}\alpha_i = \alpha'^*_c,\ c = 1,\ldots,k\}$ and the constraints of (7), we have $\tilde{G}(\alpha) = G'(\alpha'^*) = \tilde{G}(\tilde{\alpha}^*)$, which means $\alpha$ is the optimal solution of DSVMs.
B. PROOF OF THEOREM 3

Note that the feasible regions of (2) and (7) are the same. By the fact that $\alpha^*$ and $\tilde{\alpha}^*$ are optimal solutions to (2) and (7) respectively, we know that
$$(\tilde{\alpha}^* - \alpha^*)^T\nabla G(\alpha^*) \ge 0 \qquad (12)$$
$$(\alpha^* - \tilde{\alpha}^*)^T\nabla\tilde{G}(\tilde{\alpha}^*) \ge 0 \qquad (13)$$
hold, where the gradients are
$$\nabla G(\alpha) = Q\alpha - e \quad\text{and}\quad \nabla\tilde{G}(\alpha) = \tilde{Q}\alpha - e. \qquad (14)$$
Adding (12) and (13) and then a little rearrangement yields
$$(\tilde{\alpha}^* - \alpha^*)^T\big[\nabla\tilde{G}(\tilde{\alpha}^*) - \nabla\tilde{G}(\alpha^*)\big] \le (\tilde{\alpha}^* - \alpha^*)^T\big[\nabla G(\alpha^*) - \nabla\tilde{G}(\alpha^*)\big].$$
Substituting (14) into the above inequality, we get
$$(\tilde{\alpha}^* - \alpha^*)^T\tilde{Q}(\tilde{\alpha}^* - \alpha^*) \le (\tilde{\alpha}^* - \alpha^*)^T(Q - \tilde{Q})\alpha^*. \qquad (15)$$
Adding $(\tilde{\alpha}^* - \alpha^*)^T(Q - \tilde{Q})(\tilde{\alpha}^* - \alpha^*)$ to both sides of (15), we have
$$(\tilde{\alpha}^* - \alpha^*)^T Q(\tilde{\alpha}^* - \alpha^*) \le (\tilde{\alpha}^* - \alpha^*)^T(Q - \tilde{Q})\tilde{\alpha}^*. \qquad (16)$$
If $\lambda > 0$ is the smallest eigenvalue of $Q$, we have
$$\lambda\|\tilde{\alpha}^* - \alpha^*\|^2 \le (\tilde{\alpha}^* - \alpha^*)^T Q(\tilde{\alpha}^* - \alpha^*) \le (\tilde{\alpha}^* - \alpha^*)^T(Q - \tilde{Q})\tilde{\alpha}^* \le \|\tilde{\alpha}^* - \alpha^*\|\,\|Q - \tilde{Q}\|\,\|\tilde{\alpha}^*\|$$
and $\|\tilde{\alpha}^*\| \le \tilde{m}C$. Using (16) we get
$$\|\tilde{\alpha}^* - \alpha^*\| \le \frac{\tilde{m}C\Delta}{\lambda}.$$
Now we turn to prove the second result. $\alpha^*$ is the optimal solution of (2); therefore $0 \le G(\tilde{\alpha}^*) - G(\alpha^*)$ is obvious. Meanwhile, we have
$$G(\tilde{\alpha}^*) - G(\alpha^*) = \tfrac{1}{2}(\tilde{\alpha}^*)^T(Q - \tilde{Q})\tilde{\alpha}^* + \tilde{G}(\tilde{\alpha}^*) - G(\alpha^*)
\le \tfrac{1}{2}(\tilde{\alpha}^*)^T(Q - \tilde{Q})\tilde{\alpha}^* + \tilde{G}(\alpha^*) - G(\alpha^*)$$
$$= \tfrac{1}{2}(\tilde{\alpha}^*)^T(Q - \tilde{Q})\tilde{\alpha}^* - \tfrac{1}{2}(\alpha^*)^T(Q - \tilde{Q})\alpha^*
\le \tfrac{1}{2}\|Q - \tilde{Q}\|\,\|\tilde{\alpha}^*\|^2 + \tfrac{1}{2}\|Q - \tilde{Q}\|\,\|\alpha^*\|^2
\le \frac{(m^2 + \tilde{m}^2)C^2\Delta}{2}.$$
C. PROOF OF THEOREM 4

Expanding $\Delta$ as an explicit function of $\{\phi(u_c)\}_{c=1}^k$, we get $\Delta^2 = \|YKY - \tilde{Y}\tilde{K}\tilde{Y}\|^2$, in which $Y$ and $\tilde{Y}$ denote diagonal matrices whose diagonal elements are $y_1,\ldots,y_n$ and $\tilde{y}_1,\ldots,\tilde{y}_n$ respectively. Using the fact that $Y$ equals $\tilde{Y}$, we have $\Delta^2 = \|Y(K - \tilde{K})Y\|^2$. Since $Y$ only changes the signs of the elements of $K - \tilde{K}$ in $Y(K - \tilde{K})Y$, we have
$$\Delta^2 = \|K - \tilde{K}\|^2 = \sum_{h=1}^{k}\sum_{l=1}^{k}\sum_{x_i\in\pi_h}\sum_{x_j\in\pi_l}\big(\langle\phi(x_i),\phi(x_j)\rangle - \langle\phi(u_h),\phi(u_l)\rangle\big)^2.$$
It is a biquadratic function of $\{\phi(u_c)\}_{c=1}^k$. Therefore, this is an unconstrained convex optimization problem [4]. The necessary and sufficient condition for $\{u_c\}_{c=1}^k$ to be optimal is $\nabla\Delta^2(\{\phi(u_c)\}_{c=1}^k) = 0$. We can verify that $\phi(u_c) = \frac{\sum_{x_i\in\pi_c}\phi(x_i)}{|\pi_c|}$, $c = 1,\ldots,k$, satisfies the condition that the gradient is zero.
D. PROOF OF THEOREM 5

We define an $n\times k$ matrix $Z$ as $Z_{ic} = \frac{1}{\sqrt{|\pi_c|}}$ if $x_i\in\pi_c$ and $Z_{ic} = 0$ otherwise. We can see that $Z$ captures the disjoint cluster memberships. There is only one non-zero entry in each row of $Z$, and $Z^T Z = I_k$ holds ($I_k$ indicates the identity matrix). Suppose $\Phi$ is the matrix of the images of the samples in the feature space, i.e., $\Phi = [\phi(x_1),\ldots,\phi(x_n)]$. We can verify that the matrix $\Phi ZZ^T$ consists of the mean vectors of the clusters containing the corresponding samples. Thus, $\Delta^2$ can be written as
$$\|Q - \tilde{Q}\|^2 = \|\Phi^T\Phi - (\Phi ZZ^T)^T\Phi ZZ^T\|^2.$$
Using the facts that $\mathrm{trace}(A^T A) = \|A\|_F^2$, $\mathrm{trace}(A + B) = \mathrm{trace}(A) + \mathrm{trace}(B)$ and $\mathrm{trace}(AB) = \mathrm{trace}(BA)$, we have
$$\Delta^2 = \mathrm{trace}\big((\Phi^T\Phi)^T\Phi^T\Phi - (Z^T\Phi^T\Phi Z)(Z^T\Phi^T\Phi Z)\big).$$
Since $\mathrm{trace}((\Phi^T\Phi)^T\Phi^T\Phi)$ is constant, minimizing $\Delta$ is equivalent to maximizing
$$J_1 = \mathrm{trace}\big((Z^T\Phi^T\Phi Z)(Z^T\Phi^T\Phi Z)\big). \qquad (17)$$
With a similar procedure, we can see that minimizing $J(\{\pi_c\}_{c=1}^k)$ amounts to maximizing
$$J_2 = \mathrm{trace}(Z^T\Phi^T\Phi Z). \qquad (18)$$
The matrix $K = \Phi^T\Phi$ is symmetric. Let $\lambda_1 \ge \ldots \ge \lambda_n \ge 0$ denote its eigenvalues and $(v_1,\ldots,v_n)$ the corresponding eigenvectors. The matrix $H = Z^T\Phi^T\Phi Z$ is also symmetric; let $\gamma_1 \ge \ldots \ge \gamma_k \ge 0$ denote its eigenvalues. According to the Poincaré separation theorem, the relations $\gamma_i \le \lambda_i$, $i = 1,\ldots,k$, hold. Therefore, we have $J_2 = \sum_{i=1}^{k}\gamma_i \le \sum_{i=1}^{k}\lambda_i$. Similarly, we have $J_1 = \sum_{i=1}^{k}\gamma_i^2 \le \sum_{i=1}^{k}\lambda_i^2$. In both cases, the equalities hold when $Z = (v_1,\ldots,v_k)R$, where $R$ is an arbitrary $k\times k$ orthonormal matrix. Actually, the solution to maximizing $J_2$ is just the well-known theorem of Ky Fan (Theorem 3.2 of [11]). Note that the optimal $Z$ might no longer conform to the definition $Z_{ic} = \frac{1}{\sqrt{|\pi_c|}}$ if $x_i\in\pi_c$, $0$ otherwise, but it is still an orthonormal matrix. That is why it is called a relaxed optimal solution.
| Support Vector Machines;concept modelling;Concept Modelling;Imbalance;Support Vector Machines (SVMs);Large Scale;Clustering;imbalanced data;kernel k-means;support cluster machines (SCMs);TRECVID;meta-algorithm;large scale data;shrinking techniques;clusters;Kernel k-means
124 | Learning Query Languages of Web Interfaces | This paper studies the problem of automatic acquisition of the query languages supported by a Web information resource . We describe a system that automatically probes the search interface of a resource with a set of test queries and analyses the returned pages to recognize supported query operators. The automatic acquisition assumes the availability of the number of matches the resource returns for a submitted query. The match numbers are used to train a learning system and to generate classification rules that recognize the query operators supported by a provider and their syntactic encodings. These classification rules are employed during the automatic probing of new providers to determine query operators they support. We report on results of experiments with a set of real Web resources. | INTRODUCTION
Searching for relevant information is a primary activity
on the Web.
Often, people search for information using
general-purpose search engines, such as Google or Yahoo!,
which collect and index billions of Web pages. However,
there exists an important part of the Web that remains unavailable
for centralized indexing. This so-called "hidden"
part of the Web includes the content of local databases and
document collections accessible through search interfaces offered
by various small- and middle-sized Web sites, including
company sites, university sites, media sites, etc. According
to the study conducted by BrightPlanet in 2000 [6], the size
of the Hidden Web is about 400 to 550 times larger than the
commonly defined (or "Visible") World Wide Web. This
surprising discovery has fed new research on collecting and
organizing the Hidden Web resources [1, 2, 15, 17, 19].
Commercial approaches to the Hidden Web are usually in
the shape of Yahoo!-like directories which organize local sites
belonging to specific domains. Some important examples
of such directories are InvisibleWeb[1] and BrightPlanet[2]
whose gateway site, CompletePlanet[3], is a directory as
well as a meta-search engine. For each database incorporated
into its search, the meta-search engine is provided with
a manually written "wrapper", a software component that
specifies how to submit queries and extract query answers
embedded into HTML-formatted result pages.
Similar to the Visible Web, search resources on the Hidden
Web are highly heterogeneous. In particular, they use different
document retrieval models, such as Boolean or vector-space
models. They allow different operators for the query
formulation and, moreover, the syntax of supported operators
can vary from one site to another. Conventionally,
query languages are determined manually; reading the help
pages associated with a given search interface, probing the
interface with sample queries and checking the result pages
is often the method of choice.
The manual acquisition of Web search interfaces has important
drawbacks. First, the manual approach is hardly
scalable to thousands of search resources that compose the
Hidden Web. Second, the manual testing of Web resources
with probe queries is often error-prone due to the inability
to check results. Third, cases of incorrect or incomplete help
pages are frequent. Operators that are actually supported
by an engine may not be mentioned in the help pages, and
conversely, help pages might mention operators that are not
supported by the engine.
To overcome the shortcomings of the manual approach,
we address the problem of acquiring the query languages of
Web resources in an automatic manner. We develop a system
that automatically probes a resource's search interface
with a set of test queries and analyses the returned pages to
recognize supported query operators. The automatic acquisition
assumes the availability of the number of matches the
resource returns for a submitted query. The match numbers
are used to train a learning system and to generate classification
rules that recognize the query operators supported
by a provider and their syntactic encodings.
New technologies surrounding the XML syntax standard,
in particular Web Services [18], establish a new basis for automatic
discovery and information exchange and are becoming
widely employed in corporate applications.
However,
this has yet to happen for thousands of public information
providers. The question of when and how they will move
toward open cooperation using Web Service technologies remains
widely open [4]. Instead, the query-probing approach
for acquiring supported operators does not assume any cooperation
of Web providers; its only requirement is that they
provide an accessible interface and allow queries to be run.
This paper is organized as follows. In Section 2 we discuss
the heterogeneity of Web interfaces; we formalize the problem
and show its connection with the concept of learning by
querying in Section 3. In Section 4 we design a classifier system
for the automatic acquisition of a query language and
investigate different aspects of the system. In Section 6 we
review the prior art; in Section 5 we present experimental
results to illustrate the performance of our system. Section 7
discusses open issues and Section 8 concludes the paper.
QUERYING WEB RESOURCES
Web resources vary considerably in the ways they retrieve
relevant documents. In the theory of information retrieval,
there exist at least five basic retrieval models, but only three
of these models are visible on the Web, namely the Boolean,
the extended Boolean and the vector-space models. In the
Boolean query model, a query is a condition, which documents
either do or do not satisfy, with the query result being
a set of documents. In the vector-space model, a query is
a list of terms, and documents are assigned a score according
to how similar they are to the query. The query result
is a ranked list of documents. A document in the query result
might not contain all query terms. Finally, the extended
Boolean model combines the advantages of both the Boolean
and the vector-space query model. In this model, keywords
can be preceded by special characters (like + and - ) requiring
an obligatory presence or absence of a given keyword
in a document. For example, the query +information
+provider will retrieve all documents containing both keywords
and rank them according to some similarity function.
Analysis of information providers suggests that the majority
of providers adopt one of the three basic models. Moreover
, beyond query answers, many resources report the number
of documents in their collections matching the query. If
a resource deploys the (extended) Boolean model, the match
number shows how many documents match the query. In the
case of the vector-space model, the match number refers to
documents containing at least one query term, thus being
equivalent to the Boolean disjunction.
In the following, we develop an approach for automatic
determination of query operators by reasoning on submitted
queries and corresponding match numbers. Though this
approach excludes resources that do not report match numbers
, other ways of automatic detection of query operators
appear even more problematic and difficult to implement. A
method based on downloading answer documents and verifying
the query against their content often fails, either for legal
reasons, when the content of documents is unavailable or
password-protected, or for technical reasons, when a query
matches millions of documents and downloading even a part
of them requires prohibitive time and network resources.
2.1
Query language model
A query language of a Web provider includes a set of basic
operators and the way of combining the operators to get
complex queries. Basic operators have different arities, in
particular, the default term processing and the unary and
binary operators. The default processing refers primarily
to case sensitivity in this paper, but we could also refer to
whether the query term is treated as a complete word or as a
substring in a possible matching document. Unary operators
include the Stem-operator, which replaces a query term with
its lexem; binary operators include the Boolean operators ∧ (conjunction), ∨ (disjunction), and ¬ (negation)1 and the operator Phrase, which requires the adjacency of all terms in a document.
Some other operators, like substring matching or word proximity operators, have been studied in various systems; however, the six query operators mentioned above are by far the ones most frequently supported by Web interfaces. In the following, we develop a method to cope with the operator set O = {Case, Stem, ∧, ∨, ¬, Phrase}. Issues relevant to the possible extension of set O with other operators are delegated to Section 7.
2.2
Query interpretation
Web providers are queried by filling their search forms
with query strings. CGI or JavaScript code linked to the
query form interprets the query strings according to certain
rules.
These rules allow syntactic encodings for the supported
query operators. If correctly interpreted, the query
is executed on the document collection before a (full or partial
) answer is reported to the user.
Unfortunately, the same query operator may be encoded
differently by different providers. For example, the Boolean
conjunction is often encoded as A AND B , A B , or +A
+B , where A and B are query terms. Worse, two providers
can interpret the same query string differently. For example,
query string A B can be interpreted as a Boolean conjunction
, Boolean disjunction, or Phrase.
Example 1. To illustrate the problem, consider the query string q = 'Casablanca AND Bogart'. On Google, 'AND' is interpreted as the Boolean conjunction, that is, i_Google('Casablanca AND Bogart') = Casablanca ∧ Bogart. As a result, query q matches 24,500 pages at Google, as opposed to 551,000 for query q_1 = 'Casablanca' and 263,000 for q_2 = 'Bogart'. On the Internet Movie Database (IMDB) (http://www.imdb.com/), 'AND' is taken literally and all terms in a query are implicitly OR-connected. Therefore, the IMDB interprets query q as follows: i_IMDB('Casablanca AND Bogart') = Casablanca ∨ 'AND' ∨ Bogart. The query returns 12,020 matching documents on IMDB, as opposed to only 22 for q_1 = 'Casablanca' and 4 for q_2 = 'Bogart'.
If we investigate an unknown query language, then Example 1 shows that observing match numbers for probe queries can provide a good insight into the supported operators. However, no definitive decision appears possible from the three queries above, q, q_1, q_2. An accurate decision on supported operators/syntaxes will require probing the provider with other queries and comparing all match numbers in order to confirm or reject various hypotheses.
Example 2. As in Example 1, let us compare match numbers for the queries q = 'Casablanca AND Bogart', q_1 = 'Casablanca', and q_2 = 'Bogart'. For Google, the fact that q matches fewer documents than either of q_1 and q_2 favors the Conjunction hypothesis, but is still insufficient to exclude other hypotheses, like that of Phrase. Probing Google with query q_3 = 'Bogart AND Casablanca' returns the same number of matched documents as q. This (most likely) discards the Phrase hypothesis, but not the hypothesis Casablanca ∧ 'AND' ∧ Bogart. To exclude this one, even more queries should be sent to Google, like q_4 = 'Casablanca AND', and so on. Similarly in IMDB, the fact that query q matches more documents than q_1 and q_2 suggests that q is processed as a disjunction, but it cannot tell whether 'AND' is taken literally or ignored. A deeper analysis requires further probing IMDB with, for example, queries q_4 = 'Casablanca AND' or q_5 = 'Casablanca Bogart' to compare their match numbers to the ones of previous queries and decide about the 'AND'.
1 Negation is a binary operator in Web query languages and its interpretation is given by 'AND NOT', that is, A ¬ B is a synonym for A ∧ (¬B) (the latter using the unary ¬).
Our approach to the automatic acquisition of Web query
languages formalizes and generalizes the idea described in
Examples 1 and 2. We build a learning system that trains
a number of classifiers with data from manually annotated
sites to automatically determine supported operators and
their syntaxes at a new site. The training data from annotated
sites includes an ensemble of test queries together
with the corresponding match numbers.
PROBLEM DEFINITION
Assume an information provider P supports some or all query operators in O; these operators form a set O_P, O_P ⊆ O, and allow us to compose a set of complex queries Q(O_P). For any operator o_i ∈ O_P, P accepts one or more syntactical encodings, s_i1, s_i2, .... The set {s_ij} of accepted syntaxes for o_i ∈ O_P is denoted S_i. The interpretation I_P of operator set O_P is defined as I_P = {(o_i, s_ij) | o_i ∈ O_P, s_ij ∈ S_i} = {(o_i, S_i) | o_i ∈ O_P}. Interpretation I_P is monovalued if each operator has at most one syntax, i.e., |S_i| = 1 for all o_i ∈ O_P. I_P is multivalued if it allows multiple syntaxes for at least one operator, i.e., there exists o_i ∈ O_P such that |S_i| > 1. In Google, the Boolean conjunction can be encoded by both 'AND' and ' ' (whitespace). Therefore, for any query terms A and B, both query strings 'A B' and 'A AND B' are interpreted as A ∧ B. I_Google contains (∧, 'AND') and (∧, ' ') and is a multivalued interpretation.
We distinguish between ambiguous and unambiguous interpretations. A pair of distinct operator encodings (o_i, s_ij) and (o_k, s_kl) is ambiguous if the two operators have the same syntax: o_i ≠ o_k but s_ij = s_kl. An interpretation I_P is ambiguous if it contains at least one ambiguous pair of encodings. An interpretation I is unambiguous if, for any pair of encodings (o_i, s_ij) and (o_k, s_kl) in I, o_i ≠ o_k implies s_ij ≠ s_kl.
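To make the distinction concrete, the following Python sketch (ours, not taken from the paper) represents an interpretation I_P as a collection of (operator, syntax) pairs and checks the unambiguity condition above; the names are illustrative only.

def is_unambiguous(interpretation):
    """interpretation: iterable of (operator, syntax) pairs, i.e. an I_P."""
    operator_of = {}
    for op, syntax in interpretation:
        # Ambiguity: the same syntax string encodes two distinct operators.
        if syntax in operator_of and operator_of[syntax] != op:
            return False
        operator_of[syntax] = op
    return True

# Multivalued (conjunction has two syntaxes) but unambiguous, as in the Google example:
print(is_unambiguous([("conjunction", "AND"), ("conjunction", " ")]))   # True
# Ambiguous: whitespace would encode both conjunction and disjunction.
print(is_unambiguous([("conjunction", " "), ("disjunction", " ")]))     # False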
Ambiguous interpretations can be observed with Web providers
that interpret query strings dynamically, when the final
decision depends on results of the query execution with
different retrieval models
2
. However, the major part of Web
providers interpret query strings unambiguously and our
method copes with unambiguous interpretations only. Further
discussion on ambiguous interpretations is in Section 7.
Like with the query operators, we select the most frequent syntaxes on the Web, S = {Default3, '*', ' ' (whitespace), 'AND', '+', 'OR', 'NOT', '-', '" "' (quote marks)}. Like set O, these syntaxes have been selected after verification of hundreds of Web providers. Set S is easily extendable to alternative syntaxes, like ones employed by non-English providers. For example, French providers may use 'ET' for the Boolean conjunction and 'OU' for the disjunction.
2 Citeseer at http://citeseer.nj.nec.com/cs is an example of ambiguous interpretation. By default, it interprets A B as a conjunction; however, if A B matches zero documents, the query is interpreted as disjunction.
3 'Default' refers to the absence of any syntax; it assumes the processing of plain terms.
The theoretical framework for the query language acquisition is derived from the learning of an unknown concept by querying [5]. Assume that provider P supports the basic operators in O; complex queries composed from the basic operators form a set Q(O). For the document collection at P, query q ∈ Q(O) constrains a subset P(q) of documents matching q. An abstract query q ∈ Q(O) is mapped into a textual string with a mapping M : O → 2^S that defines (possibly multiple) syntaxes for operators in O. The mapping of a complex query q is denoted m(q); the set of mapped queries is denoted Q(S) = Q(M(O)). The sets O and S are assumed to be known, whereas the mapping M is unknown. We are given an oracle that can be queried with a mapped query m(q) ∈ Q(S) on the size of subset P(q), oracle(m(q)) = |P(q)|. By observing the oracle's responses to queries, the learning system should produce a hypothesis on the mapping M, which should be as close as possible to the correct one.
The identification of the mapping M may be simple under certain circumstances. Below we show an example of reconstruction when O_P includes a particular subset of operators and the oracle is noiseless.
Example 3. Let O include the three Boolean operators (∧, ∨ and ¬) and Phrase. Then, for a given syntax set S, any unambiguous mapping M : O → 2^S can be exactly identified if the oracle is noiseless4. In such a case, subset sizes returned by the oracle fit the Boolean logic on sets. Indeed, when querying the oracle with terms A and B and syntaxes from S, the disjunction is distinguishable from other operators by the fact that it constrains bigger subsets in a collection than either of the terms does:
|A ∨ B| ≥ |A|, |A ∨ B| ≥ |B|   (1)
Furthermore, among the three other operators, the conjunction is recognized by its commutativity:
|A ∧ B| = |B ∧ A|   (2)
Finally, the difference between negation and phrases is detected by the basic equation linking the three Boolean operators:
|A ∨ B| = |A ¬ B| + |A ∧ B| + |B ¬ A|   (3)
Sizes of subsets constrained by the Boolean operators satisfy inequality (1) and equations (2), (3) for any pair of A and B, so one can easily design a learning system that exactly identifies an unambiguous mapping M after only a few probing queries.
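A small Python sketch of this identification logic, under the idealized assumptions of Example 3 (noiseless Boolean oracle). The oracle function is a caller-supplied stand-in for probing a live interface and reading the match number; the toy collection below is purely illustrative.

def guess_operator(oracle, syntax, a, b):
    # Rules (1) and (2) from Example 3; rule (3) would further separate
    # negation from Phrase when the disjunction count is also available.
    n_a, n_b = oracle(a), oracle(b)
    n_ab = oracle(f"{a} {syntax} {b}")
    n_ba = oracle(f"{b} {syntax} {a}")
    if n_ab >= max(n_a, n_b):
        return "disjunction"            # rule (1)
    if n_ab != n_ba:
        return "negation or phrase"     # non-commutative, cf. rule (2)
    return "conjunction"

# Toy oracle over a three-document collection, interpreting every query string
# as a conjunction of its terms (the literal token 'and' is dropped here).
docs = [{"casablanca", "bogart"}, {"casablanca"}, {"bogart", "actor"}]
def toy_oracle(q):
    terms = [t for t in q.lower().split() if t != "and"]
    return sum(all(t in d for t in terms) for d in docs)

print(guess_operator(toy_oracle, "AND", "casablanca", "bogart"))   # conjunction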
Unfortunately, easy identification of the mapping M is rather an exception on the real Web, where few if any of the assumptions made in Example 3 become true. First, any change in the operator set O_P makes the exact reconstruction less obvious. If the conjunction and/or disjunction are not supported, then the size of A ∧ B (or A ∨ B) is unavailable and equation (3) cannot help distinguish negation from phrases. In cases like this, the identification of supported syntaxes requires an analysis of the semantic correlation between query terms A and B and guessing on their co-occurrence in (unknown) document collections.
4 A noiseless oracle assumes pure Boolean logic, with no query preprocessing such as stopword removal.
Second, Web query interfaces that play the role of oracles
and return sizes of subsets constrained by queries m(q) ∈ Q(S) are rarely noiseless.
When probing interfaces with
test queries, the match numbers may violate equations (2)
and (3). Most violations happen because converting query
strings into queries on collections hides the stop-word removal
and term stemming. It is not clear, whether queries
like A AND B are interpreted as one (A is a stopword),
two, or three terms. Moreover, for the performance reasons,
real match numbers are often replaced by their estimations
which are calculated using various collection statistics [13],
without the real retrieval of documents matching the query.
LEARNING SYSTEM
To automatically determine supported query operators,
we reduce the overall problem to a set of classification tasks,
where each task is associated with recognizing a specific
query operator or syntax, and where some standard learning
algorithms like SVM, k-nearest neighbors or decision trees
can be applied. To build the classifiers, we collect and annotate
a set of Web providers. We develop a set of test queries
and probe all selected providers with the test queries. We
train the classifiers with query matches for test queries. For
any new provider, we first probe it with the test queries.
Query matches returned by the provider upon test queries
are used to automatically classify operators and syntaxes
and produce an unambiguous interpretation for P .
To achieve a good level of classification accuracy, we investigate
different aspects of the learning system including
the target function, probe queries, data preparation, and
feature encoding and selection.
4.1
Target function
Due to the multivalued relationships between query operators and syntaxes, the target function for our learning system has two alternatives, one for the direct mapping M and the other one for the inverted mapping M^-1:
T_1 : O → 2^S. T_1 targets the unknown mapping M; it assigns zero or more syntaxes to each operator in O. T_1 builds a multi-value classifier for every o_i ∈ O, or alternatively, a set of binary classifiers for all valid combinations (o_i, s_j), o_i ∈ O, s_j ∈ S(o_i).
T_2 : S → O. T_2 targets the inverted mapping M^-1; it assigns at most one operator to every syntax s_j ∈ S.
Either target function gets implemented as a set of classifiers, operator classifiers for T_1 or syntax classifiers for T_2. Classifiers are trained with match numbers for probe queries from annotated providers. For a new provider P, either function produces a hypothesis I_T(P) that approximates the real interpretation I_P. The major difference between T_1 and T_2 is that the former can produce ambiguous interpretations, while the output of T_2 is always unambiguous. Indeed, two operator classifiers with T_1 can output the same syntax, leading to ambiguity, while each classifier in T_2 outputs at most one operator for one syntax. In experiments we tested both functions, though when building the learning system we put an emphasis on T_2, which is free of ambiguity.
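As an illustration of the T_2 setup, the sketch below trains one classifier per syntax with scikit-learn (our stand-in for the DT/KNN/SVM implementations used later in the paper); X[s] and y[s] are assumed to hold the encoded probe features and the annotated interpretations of the training providers for syntax s.

from sklearn.neighbors import KNeighborsClassifier

SYNTAXES = ["default", "*", " ", '"', "AND", "+", "OR", "NOT", "-"]

def train_t2(X, y):
    # One classifier per syntax; each predicts a label from O' = O + {Ignored, Literal, Unknown}.
    return {s: KNeighborsClassifier(n_neighbors=3).fit(X[s], y[s]) for s in SYNTAXES}

def interpret_provider(classifiers, features):
    # Each syntax receives at most one operator, so the result is unambiguous by construction.
    return {s: classifiers[s].predict([features[s]])[0] for s in SYNTAXES}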
To build syntax classifiers for the target function T_2, we should consider, beyond "good" classification cases for the operators in O, some "real-world" cases where providers process syntaxes in S literally or simply ignore them. For certain providers, it is difficult to find any valid interpretation. In the learning system, we extend the set of possible interpretations of syntaxes in S by three more cases, O' = O ∪ {Ignored, Literal, Unknown}. Syntaxes in S have different alternatives for their interpretation; below we revisit some syntaxes and report possible matches in O' as they are specified in the learning system.
Default: Case sensitivity for query terms: possible values are case-insensitive (Case) or case-sensitive (Literal).
'*': This unary operator can be interpreted as Stem, when i('A*') = Stem(A), Ignored when i('A*') = i('A'), and Literal, when 'A*' is accepted as one term.
' ' (whitespace): Whitespace is often a default for another syntax in S. Three possible interpretations include the Boolean conjunction when i('A B') = A ∧ B, the Boolean disjunction when i('A B') = A ∨ B, and Phrase when i('A B') = Phrase(A, B).
'AND': Three alternatives here are the conjunction when i('A AND B') = A ∧ B, Ignored, when 'AND' is ignored and the interpretation goes with the whitespace meaning, i('A AND B') = i('A B') = M^-1(' ')(A, B), and Literal when i('A AND B') = M^-1(' ')(A, 'AND', B).
'" "' (quote marks): Two possible interpretations are Phrase, when i('"A B"') = Phrase(A, B), and Ignored when quote marks are ignored and terms are interpreted with the whitespace, i('"A B"') = i('A B') = M^-1(' ')(A, B).
A similar analysis is done for the syntaxes '+', 'OR', 'NOT' and '-'. Additionally, all syntaxes for binary operators can be labeled as Unknown.
4.2
Probing with test queries
To train syntax classifiers for T_2, we collect data from annotated sites by probing their interfaces and extracting the
match numbers. Probing has a fairly low cost, but requires
a certain policy when selecting terms for test queries to provide
meaningful data for the learning. We define a set R
of model queries that contain syntaxes in S and parameter
terms A and B, which are later bound with real terms.
We form the set R by first selecting well-formed queries
that contain all syntaxes we want to classify. Second, we add
queries that are term permutations of previously selected
queries, for example the permutation B A for query A
B . Finally, we add model queries that are not well-formed,
but appear helpful for building accurate classification rules.
Below, the set R of model queries is illustrated using the
pair of terms A and B; model queries are split into three
groups containing one, two or three words:
One word queries: A , B ,UpperCase(A), A* , Stem(A).
Two word queries: A B , B A , "A B" , "B A" , +A
+B , +B +A , A -B , A AND , A OR , A NOT .
Three word queries: A AND B , B AND A , A OR
B , B OR A , A NOT B , B NOT A .
In total, the set R is composed of 22 model queries, all in
lower case, except UpperCase (A), which is an upper case of
term A. Six queries in R are permutations of other queries
and three queries are (purposely) not well-formed. These
queries A AND , A OR , A NOT are selected to help
detect Literal-cases for AND , OR , NOT .
Probe queries are obtained from the model queries by replacing parameters A and B with specific query terms, like 'knowledge' and 'retrieval'. These 22 probe queries form a probe package denoted R_{A,B}. For a provider P, probe queries together with corresponding match numbers form the elementary feature set F^0_{A,B} = {(m(q_i), oracle(P(q_i))), for all q_i ∈ R_{A,B}}. Query terms are selected from a generic English vocabulary with all standard stopwords excluded. One site can be probed with one or more probe packages, all packages using different term pairs (A, B).
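A possible way to materialize a probe package R_{A,B} in Python, binding the model-query templates listed above to a term pair; the stemmer here is a crude placeholder, and one of the paper's 22 templates is not recoverable from the listing above, so this package has 21 queries.

def probe_package(a, b, stem=lambda t: t[:-1]):   # placeholder "stemmer"
    one_word = [a, b, a.upper(), f"{a}*", stem(a)]
    two_word = [f"{a} {b}", f"{b} {a}", f'"{a} {b}"', f'"{b} {a}"',
                f"+{a} +{b}", f"+{b} +{a}", f"{a} -{b}",
                f"{a} AND", f"{a} OR", f"{a} NOT"]
    three_word = [f"{a} AND {b}", f"{b} AND {a}", f"{a} OR {b}",
                  f"{b} OR {a}", f"{a} NOT {b}", f"{b} NOT {a}"]
    return one_word + two_word + three_word

print(len(probe_package("knowledge", "retrieval")))   # 21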
To probe the sites with test queries, we bind model queries
in R with query terms. To obtain meaningful training data,
query terms should not be common stopwords, such as and
or the . As the term co-occurrence in a provider's document
collection is unknown, we select pairs with different degrees
of semantic correlation. Here, the term pairs fall into three
categories:
C_1: terms that form a phrase (such as A = 'information' and B = 'retrieval');
C_2: terms that do not form a phrase but occur in the same document ('knowledge' and 'wireless');
C_3: terms that rarely occur in the same document (such as 'cancer' and 'wireless').
These three categories can be expressed through term co-occurrence in some generic document collection P_G. We re-use our query probing component to establish criteria for term selection for the three categories. A pair of terms (A, B) is in category C_1 (phrase co-occurrence) if the match number for Phrase(A, B) is comparable with that of the conjunction A ∧ B, that is, if |P_G(Phrase(A, B))| / |P_G(A ∧ B)| exceeds some threshold between 0 and 1. A term pair (A, B) is in category C_2 (high co-occurrence) if the terms do not co-occur in a phrase, but their conjunction is comparable with either A or B, that is, if |P_G(A ∧ B)| / min{|P_G(A)|, |P_G(B)|} exceeds some threshold between 0 and 1. If pair (A, B) does not fit the conditions for categories C_1 and C_2, then it is in category C_3 (low co-occurrence). For our experiments, we have selected Google as the generic document collection G and set the values of both thresholds to 0.01.
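A sketch of this term-pair categorization; count() is a caller-supplied probe of the generic collection G (Google in the paper), whitespace is assumed to act as conjunction on G, and the two threshold names are ours since the paper's symbols are not preserved here.

def term_pair_category(count, a, b, sigma=0.01, tau=0.01):
    n_phrase = count(f'"{a} {b}"')        # |P_G(Phrase(A, B))|
    n_conj = count(f"{a} {b}")            # |P_G(A AND B)|, whitespace = conjunction on G
    n_a, n_b = count(a), count(b)
    if n_conj > 0 and n_phrase / n_conj > sigma:
        return "C1"                        # phrase-level co-occurrence
    if min(n_a, n_b) > 0 and n_conj / min(n_a, n_b) > tau:
        return "C2"                        # high document-level co-occurrence
    return "C3"                            # low co-occurrence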
4.3
Elementary features
Match numbers for probe queries in F^0_{A,B} represent elementary features that can be directly used to train classifiers. Unfortunately, this often leads to poor results. The reason is that Web resources considerably differ in size and, therefore, the query matches from different resources are of different magnitude and thus hardly comparable. A query may match millions of documents on Google, but only a few at a small local resource. To leverage the processing of query matches from resources of different size, we develop two alternative methods for feature encoding.
In the first approach, we normalize the query matches in F^0 by the maximum number of matches for the two basic queries 'A' and 'B'. We thus obtain features F^1 with values mostly between 0 and 1 (except for queries related to the Boolean disjunction). The second approach, F^2, to feature encoding uses the "less-equal-greater" relationship between any two probe queries in a probe package. This produces a three-value feature for each pair of test queries.
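The two encodings can be sketched as follows, assuming matches maps each probe query of a package to its match number for one provider (query strings as keys):

def encode_f1(matches, a, b):
    # F1: normalize by the larger of the match numbers for the one-word queries A and B.
    denom = max(matches[a], matches[b]) or 1
    return {q: m / denom for q, m in matches.items()}

def encode_f2(matches):
    # F2: a three-valued less/equal/greater comparison for every pair of probe queries.
    queries = sorted(matches)
    features = {}
    for i, qi in enumerate(queries):
        for qj in queries[i + 1:]:
            diff = matches[qi] - matches[qj]
            features[(qi, qj)] = 0 if diff == 0 else (1 if diff > 0 else -1)
    return features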
4.4
Feature selection
The refinement of raw features produces l = 22 refined real-value features with F^1 and l(l-1)/2 = 231 three-value features with F^2. The basic approach is to train each classifier with the entire feature set F^0, F^1 or F^2. However, because of the noise in the data, building accurate classifiers may require a lot of training data. To control the amount of training data and enhance the quality of classification rules, we proceed with two methods of feature selection. First, we distinguish between relevant and irrelevant features for a given classifier and remove irrelevant ones. Second, beyond the direct feature filtering, we use prior knowledge and classify new syntaxes using previously classified ones.
Removing irrelevant features. The definition of relevant features requires establishing syntactical dependencies between model queries in R and semantic relationships between syntaxes in S. Model query r_i ∈ R syntactically depends on model query r_j if r_i includes syntaxes present in r_j. Syntaxes s_i and s_j in S are semantically related if they can be interpreted with the same operator in O.
We define the relevant feature set FS_i for syntax s_i as containing three parts, FS(s_i) = FS_i = FS^0_i + FS^1_i + FS^2_i. FS^0_i simply contains all model queries r_j ∈ R that involve syntax s_i, for example FS^0('AND') = {'A AND B', 'B AND A', 'A AND'}. Next, FS^1_i contains model queries for syntactically dependent syntaxes. Actually, FS^1_i contains the two model queries 'A B' and 'B A' for all binary syntaxes. Finally, FS^2_i contains the model queries for semantically related syntaxes. For example, FS^2('AND') = FS^0('+'), and vice versa, FS^2('+') = FS^0('AND').
Use of prior knowledge. Beyond removing irrelevant features, it is possible to benefit from the dependencies between syntaxes established in Section 4.1. For example, the Literal cases for 'OR' and 'AND' depend on the interpretation of whitespaces. The classification of 'AND' as Literal becomes simpler when the system already knows that, for example, ' ' is interpreted as conjunction. To use the prior knowledge, we alter the training and classification process. We impose an order on the syntaxes in S. When training or using syntax classifiers, we use the classification results of previous syntaxes.
We convert the syntax set into the ordered list S_O = (Default, '*', ' ', '" "', 'AND', '+', 'OR', 'NOT', '-') and impose the order on how the classifiers are trained and used for the classification. In the prior knowledge approach, the feature set used to train the classifier for syntax s_i ∈ S_O will include the classifications of all s_j preceding s_i in S_O.
Removing irrelevant features and using prior knowledge are two independent methods for feature selection and can be applied separately or together. This allows us to consider four feature selection methods for training classifiers and classifying new sites:
1. Full feature set, Ffs_i = F, where F is a selected feature encoding, F^0, F^1 or F^2;
2. Relevant feature set, Rfs_i = FS_i;
3. Prior knowledge features, PKfs_i = F ∪ {M^-1(s_j), j < i};
4. Relevant prior knowledge feature set, RPKfs_i = FS_i ∪ {M^-1(s_j), j < i}.
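The relevant-feature construction can be sketched as below; the dependency tables are small illustrative fragments (only the 'AND'/'+' entries given in the text), not the paper's complete lists.

FS0 = {"AND": ["A AND B", "B AND A", "A AND"],       # queries containing the syntax
       "+":   ["+A +B", "+B +A"]}
FS1 = {"AND": ["A B", "B A"], "+": ["A B", "B A"]}   # syntactic dependence (binary syntaxes)
SEMANTIC_KIN = {"AND": ["+"], "+": ["AND"]}          # syntaxes sharing a candidate operator

def relevant_features(syntax):
    fs2 = [q for kin in SEMANTIC_KIN.get(syntax, []) for q in FS0.get(kin, [])]
    return FS0.get(syntax, []) + FS1.get(syntax, []) + fs2

print(relevant_features("AND"))
# ['A AND B', 'B AND A', 'A AND', 'A B', 'B A', '+A +B', '+B +A']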
EXPERIMENTAL EVALUATION
To run experiments, we collected and annotated 36 Web
sites with search interfaces. All sites report the match numbers for user queries and unambiguously interpret their query
languages. Selected sites represent a wide spectrum of supported
operator sets. For each site, we annotated all supported
operators and their syntaxes. For the extraction of
the match numbers from HTML pages we used the Xerox
IWrap wrapper toolkit [7, 12]. Out of 36 providers, only 4
support monovalued interpretations; in the other 32 cases,
at least one operator has two or more syntaxes.
Figure 1: T_1 and T_2 target functions.
Figure 2: Three feature encodings for DT, KNN and
SVM.
5.1
Experimental framework
In all experiments we estimate the classification accuracy for the individual operators in O (with T_1) and the syntaxes in S (with T_2). We also estimate the mean accuracy for the target functions T_1 and T_2. Experiments are conducted using the cross-validation method. The 36 annotated sites are split into N = 9 groups, S_1, S_2, ..., S_N. We run N experiments; in experiment i, classifiers are trained with the groups S_1, ..., S_{i-1}, S_{i+1}, ..., S_N and then tested with sites from group S_i. Accuracy (precision) values over the N experiments are averaged for each operator/syntax classifier and form the individual accuracies. The average of individual accuracies over O/S gives the mean accuracy.
We test the learning system by varying the system parameters introduced in Section 4. We train and test classifiers with three different learning algorithms: decision trees from Borgelt's package (DT), the k-nearest neighbors algorithm (KNN), and support vector machines (SVM)5. The following list recalls the test parameters and possible options.
1. Target function: T_1 includes |O'| = 9 operator classifiers; multivalued interpretations are implemented as classifications with subsets of O'. For T_2, the system includes |S| = 9 syntax classifiers.
2. Feature encoding: The three different feature encodings (see Section 4.3) include the raw match numbers given by F^0, the normalized match numbers given by F^1, and the three-value feature comparison given by F^2.
3. Feature selection: The four methods presented in Section 4.4 include Ffs (full feature set), Rfs (relevant feature set), PKfs (prior knowledge feature set) and RPKfs (relevant prior knowledge feature set).
4. Term selection: We test the three term selection categories C_1, C_2 and C_3 introduced in Section 4.2. Additionally, we test the mixture of the three categories, when three term pairs are used to probe a site, i.e. one term pair from each category C_1, C_2 and C_3.
Experiments have been run for all parameter combinations; most combinations achieve mean accuracy superior to 60%. The four system parameters appear to be uncorrelated in their impact on the classification results. To figure out the most interesting ones, we determine overall "winners" for each parameter, except for the learning algorithm. The winners are the T_2 target function, the F^2 feature encoding, and the Mixed term selection. The RPKfs feature selection behaves best for DT and KNN, and the PKfs feature selection is the winner for SVM. We report more detail below.
5.2
Experimental Results
Decision trees are induced by Borgelt's software; they are then pruned using the confidence level (p = 0.5) pruning method. In SVM, linear kernel functions have been used. For the KNN method, we report results for k = 3, which behaves better than k = 1, 5 and 10. Because the implementation of the KNN algorithm cannot process large sets of features, we were not able to test the Ffs and PKfs feature selection methods.
All three learning algorithms show a similar performance. 3NN slightly outperforms DT and SVM for the "winner" combination (86.42% against 84.26% and 79.74%); however, it is often less accurate with other parameter combinations.
5 Available at http://fuzzy.cs.uni-magdeburg.de/borgelt/software.html, http://www.dontveter.com/nnsoft/nnsoft.html, and http://svmlight.joachims.org/, respectively.
Target functions and feature selection. The target functions T_1 and T_2 implement alternative approaches to the query language acquisition; T_1 uses operator classifiers while T_2 uses syntax classifiers. As seen in Section 4, T_2 has an advantage over T_1 because it avoids multivalued classification and outputs only unambiguous interpretations, while the output of T_1 should be further tested for unambiguity. Thus we have built the learning system for T_2. Series of experiments conducted with T_1 and T_2 confirm the superiority of T_2. As operator classifiers in T_1 are trained independently, their combined output does not guarantee unambiguity. Unlike T_2, high accuracy of individual classifiers may not translate into good global accuracy, because one misclassification may produce an ambiguous interpretation and undermine the good performance of other classifiers.
In practice, we test the output of the operator classifiers of T_1 and discard those that form ambiguous interpretations. This gives a 2% to 10% drop in the mean accuracy. Figure 1 plots mean accuracies for T_1 and T_2 for all feature selection methods (with fixed F^2 feature encoding and Mixed term selection) and the three learning methods (only Rfs and RPKfs could be measured for the 3NN algorithm). Within feature selection methods, keeping relevant features spurs the performance of DT and 3NN better than the prior knowledge, with their combination being the winner. For SVM, instead, adding prior knowledge to the full feature set is the best choice. In the following, all reported experiments refer to the target function T_2.
Feature encoding. Previous figures compared the mean accuracies. We unfold the mean value and plot individual accuracies for the syntaxes in S. Figure 2 plots accuracy values for the three feature encoding methods (for the T_2-RPKfs-Mixed combination for DT and 3NN and the T_2-PKfs-Mixed combination for SVM). As the figure shows, the pair-wise comparison F^2 performs best with respect to the raw and normalized match numbers.
Term selection. We complete the analysis of system parameters by testing four methods of term selection. They include the categories C_1, C_2 and C_3 and Mixed. Figure 3 plots mean accuracies for all learning algorithms and the four term selection methods, giving Mixed as the winner.
Figure 3: Four term selection methods for DT, KNN, and SVM.
5.3
Bias in training data
Among the syntaxes in S, all methods show only little difference for the unary operators Default and Case. Among the syntaxes for binary operators, certain ones (' ', 'AND' and '+') are easier to detect than others ('" "', 'OR' and 'NOT'). However, this phenomenon is not linked to the nature of the operators or their syntaxes, but rather can be explained by the bias in the training data. In Table 1, we unfold the individual accuracies and show results for each case (s, o), s ∈ S, o ∈ O', in the annotated data. Each non-empty cell in Table 1 reports the occurrence of the case (in brackets) and its classification accuracy. We can observe a definitive bias of high accuracy for more frequent cases; instead, rare cases have a very low accuracy. This explains good results for ' ', 'AND' and '+', where occurrences are fairly split between two main cases. For other syntaxes instead, the high error ratio for rare cases decreases the individual accuracy.
RELATED WORK
The Hidden Web has emerged as a research trend and different
research groups have started to address various problems
relevant to organizing Hidden Web resources [15, 14,
17, 19, 20].
One focus is crawling; [17] presents a task-specific
and human-assisted approach to the crawling of the
Hidden Web.
The crawler searches for Hidden Web resources
relevant to a specific domain. The identification is
achieved by selecting domain-specific keywords from a fuzzy
set and assigning them to elements of HTML forms; the
resource is judged relevant if it returns valid search results.
Another important task is the classification of Hidden
Web resources. [15, 14] and [19] have developed approaches
to this problem based on query probing.
Moreover, [15]
makes use of the number of documents matching a submitted
conjunction query, as does our approach.
Instead of
query languages, they use match numbers to reason about
the relevance of a provider for a given category.
Originally, the query probing has been used for the automatic
language model discovery in [9], it probed documents
in a collection to analyze the content of the result pages.
[14] extends the work in [15] to the problem of database
selection by computing content summaries based on probing
. Once query language interfaces are understood, meaningful
query transformation becomes possible. [11] describes
one way of transforming a front-end query into subsuming
queries that are supported by the sources and a way to filter
out incorrectly returned documents. In [16], interaction
with online-vendors is automated.
In the close domain of meta-searching, the declaration of a
search resource's query features is often coupled with methods
of converting/translating meta-search queries into the
resource's native queries. Considerable research effort has
been devoted to minimizing the possible overheads of query
translation when the meta-search and search resource differ
in supporting basic query features [11]. In all these methods,
the manual discovery of the query features is assumed.
In information mediation systems that query Web resources
to materialize views on hidden data [20], one approach is to
reconstruct an image of a resource's database. Because of
a restricted Web interface, a critical problem is the entire
or partial reconstruction of the database image without the
unnecessary overload of the Web servers. [8] builds efficient
query covers that are accessible through nearest-neighbor
interfaces for the specific domain of spatial databases.
OPEN QUESTIONS
The experiments have raised a number of open questions
that require further research. Below we list some of them.
Table 1: Classification accuracy and occurrence for all syntax+interpretation cases (DT, T_2, F^2, RPKfs, Mixed). The columns are the syntaxes default, '*', ' ', '" "', 'AND', '+', 'OR', 'NOT', '-'; each non-empty cell gives the classification accuracy with the occurrence of the case in brackets.
Case: 97.6 (20)
Stemming: 41.1 (9)
Conjunction: 92.9 (15), 95.2 (27), 100 (16)
Disjunction: 100 (19), 87.8 (17)
Negation: 91.0 (15), 94.9 (26)
Phrase: 0 (1), 90.4 (28)
Literal: 100 (16), 79.6 (7), 91.7 (11), 69.2 (14)
Ignored: 79.9 (27), 18.5 (4), 3.7 (1), 55.6 (4), 100 (19), 25.0 (4), 81.0 (7)
Unknown: 0 (1), 4.1 (4), 0 (1), 19.4 (4), 0 (1), 0 (3), 0 (3)
Stopwords. In tests, common English stopwords were excluded from probing. However, the set of stopwords is often domain-dependent; this should be taken into account
when generating test queries. A more difficult case is when
a resource treats terms as stopwords in a certain context.
For example, Google accepts the term "WWW" when it is
queried alone and ignores it when the term is conjuncted
with other terms. Such query-dependent treatment of stopwords
is considered as noise in the current system.
Acquiring other operators. We have addressed the
set of most frequently used query operators. Other operators
defined by existing document retrieval models, like
proximity operators, can be added to the operator set and
processed in a similar manner. Two remarks concerning less
frequent operators are that their syntactical encodings may
vary even more than for Boolean operators, and, more im-portantly
, finding sufficient training data to build reliable
classifiers may be technically difficult.
Query composition. The next issue is the manner in
which basic query operators are combined to form complex
queries. The most frequent manner on the Web is the use
of parentheses or a certain operator priority. How to detect
this remains an open problem at this point.
Ambiguous interpretations. Recognizing ambiguous
interpretations is the most difficult problem. One example
is Citeseer, which interprets whitespaces as conjunction by
default, but switches to disjunction if the conjunction query
matches no documents. Some other Web providers behave
in the same or a similar manner. We will need to extend
the learning system to include a possibility of triggering the
retrieval model as a function of the oracle answers.
CONCLUSION
We have addressed the problem of automatic recognition
of operators and syntaxes supported by query languages of
Web resources. We have developed a machine learning approach
based on reformulation of the entire problem as a set
of classification problems. By introducing various refined
solutions for the target function, feature encoding, and feature
selection, we have achieved 86% mean accuracy for the
set of the most frequent operators and syntaxes. Further
improvement in the accuracy is possible with better preparation
of annotated sites, but this is limited because of the
complexity of the a-priori unknown operator composition
and the noise produced by the hidden query preprocessing.
REFERENCES
[1] The InvisibleWeb, http://www.invisibleweb.com/.
[2] BrightPlanet, http://www.brightplanet.com/.
[3] CompletePlanet, http://www.completeplanet.com/.
[4] G. Alonso. Myths around web services. IEEE Bulletin
on Data Engineering, 25(4):39, 2002.
[5] D. Angluin. Queries and concept learning. Machine Learning, 2(4):319-342, 1987.
[6] M. K. Bergman. The Deep Web: Surfacing hidden value. Journal of Electronic Publishing, 7(1), 2001.
[7] D. Bredelet and B. Roustant. Java IWrap: Wrapper induction by grammar learning. Master's thesis, ENSIMAG Grenoble, 2000.
[8] S. Byers, J. Freire, and C. T. Silva. Efficient acquisition of web data through restricted query interfaces. In Proc. WWW Conf., China, May 2001.
[9] J. P. Callan, M. Connell, and A. Du. Automatic discovery of language models for text databases. In Proc. ACM SIGMOD Conf., pp. 479-490, June 1999.
[10] C.-C. K. Chang and H. Garcia-Molina. Approximate query translation across heterogeneous information sources. In Proc. VLDB Conf., pp. 566-577, Cairo, Egypt, September 2000.
[11] C.-C. K. Chang, H. Garcia-Molina, and A. Paepcke. Boolean query mapping across heterogeneous information sources. IEEE TKDE, 8(4):515-521, 1996.
[12] B. Chidlovskii. Automatic repairing of web wrappers by combining redundant views. In Proc. of the IEEE Intern. Conf. Tools with AI, USA, November 2002.
[13] L. Gravano, H. Garcia-Molina, and A. Tomasic. GlOSS: Text-source discovery over the internet. ACM TODS, 24(2):229-264, 1999.
[14] P. G. Ipeirotis and L. Gravano. Distributed search over the hidden web: Hierarchical database sampling and selection. In Proc. VLDB Conf., pp. 394-405, Hong Kong, China, August 2002.
[15] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe, count, and classify: Categorizing hidden-web databases. In Proc. ACM SIGMOD Conf., pp. 67-78, Santa Barbara, CA, USA, May 2001.
[16] M. Perkowitz, R. B. Doorenbos, O. Etzioni, and D. S. Weld. Learning to understand information on the internet: An example-based approach. Journal of Intelligent Information Systems, 8(2):133-153, 1997.
[17] S. Raghavan and H. Garcia-Molina. Crawling the hidden web. In Proc. VLDB Conf., pp. 129-138, Rome, Italy, September 2001.
[18] D. Tsur. Are web services the next revolution in e-commerce? In Proc. VLDB Conf., pp. 614-617, Rome, Italy, September 2001.
[19] W. Wang, W. Meng, and C. Yu. Concept hierarchy based text database categorization. In Proc. Intern. WISE Conf., pp. 283-290, China, June 2000.
[20] R. Yerneni, C. Li, H. Garcia-Molina, and J. Ullman. Computing capabilities of mediators. In Proc. ACM SIGMOD Conf., pp. 443-454, PA, USA, June 1999.
| query operators;automatic acquisition;learning;hidden web;search interface;web resources;machine learning;search engine;query languages;Hidden Web;web interfaces |
125 | Learning Spatially Variant Dissimilarity (SVaD) Measures | Clustering algorithms typically operate on a feature vector representation of the data and find clusters that are compact with respect to an assumed (dis)similarity measure between the data points in feature space. This makes the type of clusters identified highly dependent on the assumed similarity measure. Building on recent work in this area, we formally define a class of spatially varying dissimilarity measures and propose algorithms to learn the dissimilarity measure automatically from the data. The idea is to identify clusters that are compact with respect to the unknown spatially varying dissimilarity measure. Our experiments show that the proposed algorithms are more stable and achieve better accuracy on various textual data sets when compared with similar algorithms proposed in the literature. | INTRODUCTION
Clustering plays a major role in data mining as a tool
to discover structure in data. Object clustering algorithms
operate on a feature vector representation of the data and
find clusters that are compact with respect to an assumed
(dis)similarity measure between the data points in feature
space. As a consequence, the nature of clusters identified by
a clustering algorithm is highly dependent on the assumed
similarity measure. The most commonly used dissimilarity
measure, namely the Euclidean metric, assumes that the dissimilarity
measure is isotropic and spatially invariant, and
it is effective only when the clusters are roughly spherical
and all of them have approximately the same size, which is
rarely the case in practice [8]. The problem of finding non-spherical
clusters is often addressed by utilizing a feature
weighting technique. These techniques discover a single set
of weights such that relevant features are given more importance
than irrelevant features. However, in practice, each
cluster may have a different set of relevant features. We
consider Spatially Varying Dissimilarity (SVaD) measures
to address this problem.
Diday et al. [4] proposed the adaptive distance dynamic clusters (ADDC) algorithm in this vein. A fuzzified version
of ADDC, popularly known as the Gustafson-Kessel (GK)
algorithm [7] uses a dynamically updated covariance matrix
so that each cluster can have its own norm matrix. These algorithms
can deal with hyperelliposoidal clusters of various
sizes and orientations. The EM algorithm [2] with Gaussian
probability distributions can also be used to achieve similar
results. However, the above algorithms are computationally
expensive for high-dimensional data since they invert covariance
matrices in every iteration. Moreover, matrix inversion
can be unstable when the data is sparse in relation to the
dimensionality.
One possible solution to the problems of high computation
and instability arising out of using covariance matrices
is to force the matrices to be diagonal, which amounts to
weighting each feature differently in different clusters. While
this restricts the dissimilarity measures to have axis parallel
isometry, the weights also provide a simple interpretation of
the clusters in terms of relevant features, which is important
in knowledge discovery. Examples of such algorithms are
SCAD and Fuzzy-SKWIC [5, 6], which perform fuzzy clustering
of data while simultaneously finding feature weights
in individual clusters.
In this paper, we generalize the idea of the feature weighting
approach to define a class of spatially varying dissimilarity
measures and propose algorithms that learn the dissimilarity
measure automatically from the given data while
performing the clustering. The idea is to identify clusters
inherent in the data that are compact with respect to the
unknown spatially varying dissimilarity measure. We compare
the proposed algorithms with a diagonal version of GK
(DGK) and a crisp version of SCAD (CSCAD) on a variety
of data sets. Our algorithms perform better than DGK and
CSCAD, and use more stable update equations for weights
than CSCAD.
The rest of the paper is organized as follows. In the next
section, we define a general class of dissimilarity measures
and formulate two objective functions based on them. In
Section 3, we derive learning algorithms that optimize the
objective functions. We present an experimental study of
the proposed algorithms in Section 4. We compare the performance
of the proposed algorithms with that of DGK and
CSCAD. These two algorithms are explained in Appendix A.
Finally, we summarize our contributions and conclude with
some future directions in Section 5.
SPATIALLY VARIANT DISSIMILARITY (SVAD) MEASURES
We first define a general class of dissimilarity measures
and formulate a few objective functions in terms of the given
data set. Optimization of the objective functions would result
in learning the underlying dissimilarity measure.
2.1
SVaD Measures
In the following definition, we generalize the concept of
dissimilarity measures in which the weights associated with
features change over feature space.
Definition 2.1 We define the measure of dissimilarity of x from y1 to be a weighted sum of M dissimilarity measures between x and y, where the values of the weights depend on the region from which the dissimilarity is being measured. Let P = {R_1, ..., R_K} be a collection of K regions that partition the feature space, and w_1, w_2, ..., and w_K be the weights associated with R_1, R_2, ..., and R_K, respectively. Let g_1, g_2, ..., and g_M be M dissimilarity measures. Then, each w_j, j = 1, ..., K, is an M-dimensional vector whose l-th component, w_jl, is associated with g_l. Let W denote the K-tuple (w_1, ..., w_K) and let r be a real number. Then, the dissimilarity of x from y is given by:
$f_W(x, y) = \sum_{l=1}^{M} w_{jl}^r\, g_l(x, y)$, if $y \in R_j$.   (1)
We refer to f_W as a Spatially Variant Dissimilarity (SVaD) measure.
Note that f_W need not be symmetric even if the g_i are symmetric. Hence, f_W is not a metric. Moreover, the behavior of f_W depends on the behavior of the g_i. There are many ways to define the g_i. We list two instances of f_W.
Example 2.1 (Minkowski) Let the feature space be R^d and M = d. Let a point x in R^d be represented as (x_1, ..., x_d). Then, when g_i(x, y) = |x_i - y_i|^p for i = 1, ..., d, and p ≥ 1, the resulting SVaD measure f^M_W is called a Minkowski SVaD (MSVaD) measure. That is,
$f^M_W(x, y) = \sum_{l=1}^{d} w_{jl}^r\, |x_l - y_l|^p$, if $y \in R_j$.   (2)
One may note that when w_1 = ... = w_K and p = 2, f^M_W is the weighted Euclidean distance. When p = 2, we call f^M_W a Euclidean SVaD (ESVaD) measure and denote it by f^E_W.
1
We use the phrase "dissimilarity of x from y" rather than
"dissimilarity between x and y" because we consider a general
situation where the dissimilarity measure depends on
the location of y. As an example of this situation in text
mining, when the dissimilarity is measured from a document
on `terrorism' to a document x, a particular set of keywords
may be weighted heavily whereas when the dissimilarity is
measured from a document on `football' to x, a different set
of keywords may be weighted heavily.
Example 2.2 (Cosine) Let the feature space be the set of points with l_2 norm equal to one. That is, ||x||_2 = 1 for all points x in feature space. Then, when g_l(x, y) = (1/d - x_l y_l) for l = 1, ..., d, the resulting SVaD measure f^C_W is called a Cosine SVaD (CSVaD) measure:
$f^C_W(x, y) = \sum_{l=1}^{d} w_{jl}^r\, (1/d - x_l y_l)$, if $y \in R_j$.   (3)
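The following NumPy sketch evaluates a SVaD measure as in Definition 2.1 and equation (4) below; the array names C (region representatives), W (region weights) and the per-feature dissimilarity g are our notation for illustration. The default g gives the ESVaD instance; a g of the form 1/d - u*v on unit-norm vectors would give the CSVaD instance.

import numpy as np

def region_index(y, C, W, r, g):
    # y belongs to R_j for the j minimizing f_W(y, c_j), cf. equation (4).
    return int(np.argmin([np.dot(W[j] ** r, g(y, C[j])) for j in range(len(C))]))

def svad(x, y, C, W, r=1.0, g=lambda u, v: (u - v) ** 2):
    # f_W(x, y) = sum_l w_{jl}^r g_l(x, y), with j determined by the region of y.
    j = region_index(y, C, W, r, g)
    return float(np.dot(W[j] ** r, g(x, y)))

C = np.array([[0.0, 0.0], [5.0, 5.0]])
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(svad(np.array([1.0, 2.0]), np.array([4.0, 5.0]), C, W))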
In the formulation of the objective function below, we use a set of parameters to represent the regions R_1, R_2, ..., and R_K. Let c_1, c_2, ..., and c_K be K points in feature space. Then y belongs to R_j iff
$f_W(y, c_j) < f_W(y, c_i)$ for $i \neq j$.   (4)
In the case of ties, y is assigned to the region with the lowest index. Thus, the K-tuple of points C = (c_1, c_2, ..., c_K) defines a partition of feature space. The partition induced by the points in C is similar in nature to a Voronoi tessellation. We use the notation f_{W,C} whenever we use the set C to parameterize the regions used in the dissimilarity measure.
2.2
Objective Function for Clustering
The goal of the present work is to identify the spatially varying dissimilarity measure and the associated compact clusters simultaneously. It is worth mentioning here that, as in the case of any clustering algorithm, the underlying assumption in this paper is the existence of such a dissimilarity measure and clusters for a given data set.
Let x_1, x_2, ..., and x_n be n given data points. Let K be a given positive integer. Assuming that C represents the cluster centers, let us assign each data point x_i to a cluster R_j with the closest c_j as the cluster center2, i.e.,
$j = \arg\min_l f_{W,C}(x_i, c_l).$   (5)
Then, the within-cluster dissimilarity is given by
$J(W, C) = \sum_{j=1}^{K} \sum_{x_i \in R_j} \sum_{l=1}^{M} w_{jl}^r\, g_l(x_i, c_j).$   (6)
J(W, C) represents the sum of the dissimilarity measures of all the data points from their closest centroids. The objective is to find W and C that minimize J(W, C). To avoid the trivial solution to J(W, C), we consider a normalization condition on w_j, viz.,
$\sum_{l=1}^{M} w_{jl} = 1.$   (7)
Note that even with this condition, J(W, C) has a trivial solution: w_jp = 1 where $p = \arg\min_l \sum_{x_i \in R_j} g_l(x_i, c_j)$, and the remaining weights are zero. One way to avoid convergence of w_j to unit vectors is to impose a regularization condition on w_j. We consider the following two regularization measures in this paper: (1) Entropy measure: $\sum_{l=1}^{M} w_{jl} \log(w_{jl})$ and (2) Gini measure: $\sum_{l=1}^{M} w_{jl}^2$.
2 We use P = {R_1, R_2, ..., R_K} to represent the corresponding partition of the data set as well. The intended interpretation (cluster or region) would be evident from the context.
ALGORITHMS TO LEARN SVAD MEASURES
The problem of determining the optimal W and C is similar
to the traditional clustering problem that is solved by
the K-Means Algorithm (KMA) except for the additional W
matrix. We propose a class of iterative algorithms similar to
KMA. These algorithms start with a random partition of the
data set and iteratively update C, W and P so that J (W, C)
is minimized. These iterative algorithms are instances of Alternating
Optimization (AO) algorithms. In [1], it is shown
that AO algorithms converge to a local optimum under some
conditions. We outline the algorithm below before actually
describing how to update C, W and P in every iteration.
Randomly assign the data points to K clusters.
REPEAT
Update C: Compute the centroid c_j of each cluster.
Update W: Compute the w_jl for all j, l.
Update P: Reassign the data points to the clusters.
UNTIL (termination condition is reached).
In the above algorithm, the update of C depends on the definition of the $g_l$, and the update of W on the regularization terms. The update of P is done by reassigning the data points according to (5). Before explaining the computation of C in every iteration for the various $g_l$, we first derive update equations for W for the various regularization measures.
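To make the alternating scheme concrete, the following is a minimal Python/NumPy sketch of one possible realization, assuming the weighted-Euclidean components $g_l(x, c) = (x_l - c_l)^2$ and anticipating the entropy-based weight update (9) and the centroid update (11) derived below. All function and variable names are ours, not from the paper.

```python
import numpy as np

def svad_ao(X, K, delta=1.0, n_iter=20, seed=0):
    """Illustrative alternating optimization for SVaD clustering with
    g_l(x, c) = (x_l - c_l)^2, entropy-regularized weights and r = 1."""
    rng = np.random.default_rng(seed)
    n, M = X.shape
    labels = rng.integers(K, size=n)            # random initial partition
    W = np.full((K, M), 1.0 / M)                # start with uniform feature weights
    C = np.zeros((K, M))
    for _ in range(n_iter):
        # Update C: centroid of each cluster (cf. (11)).
        for j in range(K):
            members = X[labels == j]
            C[j] = members.mean(axis=0) if len(members) else X[rng.integers(n)]
        # Update W: entropy-regularized closed form (cf. (9)).
        for j in range(K):
            G = ((X[labels == j] - C[j]) ** 2).sum(axis=0)   # sum_i g_l(x_i, c_j)
            e = np.exp(-G / delta)
            W[j] = e / e.sum()
        # Update P: reassign each point to its closest center under f_{W,C} (cf. (5)).
        dists = np.array([(W[j] * (X - C[j]) ** 2).sum(axis=1) for j in range(K)])
        labels = dists.argmin(axis=0)
    return labels, C, W
```

In the experiments reported later the initial partition comes from KMA rather than a purely random assignment; the loop structure itself is unchanged.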
3.1 Update of Weights
While updating weights, we need to find the values of the weights that minimize the objective function for a given C and P. As mentioned above, we consider the two regularization measures for $w_{jl}$ and derive update equations. If we consider the entropy regularization with r = 1, the objective function becomes:

$$J_{ENT}(W, C) = \sum_{j=1}^{K} \sum_{x_i \in R_j} \sum_{l=1}^{M} w_{jl}\, g_l(x_i, c_j) + \sum_{j=1}^{K} \delta_j \sum_{l=1}^{M} w_{jl} \log(w_{jl}) + \sum_{j=1}^{K} \lambda_j \left( \sum_{l=1}^{M} w_{jl} - 1 \right). \qquad (8)$$
Note that the $\lambda_j$ are the Lagrange multipliers corresponding to the normalization constraints in (7), and the $\delta_j$ represent the relative importance given to the regularization term relative to the within-cluster dissimilarity. Differentiating $J_{ENT}(W, C)$ with respect to $w_{jl}$ and equating it to zero, we obtain

$$w_{jl} = \exp\!\left( -\frac{\lambda_j + \sum_{x_i \in R_j} g_l(x_i, c_j)}{\delta_j} - 1 \right).$$

Solving for $\lambda_j$ by substituting the above value of $w_{jl}$ in (7), and substituting the value of $\lambda_j$ back in the above equation, we obtain

$$w_{jl} = \frac{\exp\!\left( -\sum_{x_i \in R_j} g_l(x_i, c_j)/\delta_j \right)}{\sum_{n=1}^{M} \exp\!\left( -\sum_{x_i \in R_j} g_n(x_i, c_j)/\delta_j \right)}. \qquad (9)$$
If we consider the Gini measure for regularization with r = 2, the corresponding $w_{jl}$ that minimizes the objective function can be shown to be

$$w_{jl} = \frac{1/\left(\delta_j + \sum_{x_i \in R_j} g_l(x_i, c_j)\right)}{\sum_{n=1}^{M} 1/\left(\delta_j + \sum_{x_i \in R_j} g_n(x_i, c_j)\right)}. \qquad (10)$$
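For concreteness, both closed-form weight updates can be written in a few lines of NumPy. This is only a sketch: `G[j, l]` is assumed to hold $\sum_{x_i \in R_j} g_l(x_i, c_j)$ and `delta` the per-cluster regularization parameters (our own names).

```python
import numpy as np

def entropy_weights(G, delta):
    """Update (9): softmax of the negative aggregated dissimilarities."""
    e = np.exp(-G / delta[:, None])
    return e / e.sum(axis=1, keepdims=True)

def gini_weights(G, delta):
    """Update (10): normalized reciprocals of the aggregated dissimilarities."""
    inv = 1.0 / (delta[:, None] + G)
    return inv / inv.sum(axis=1, keepdims=True)
```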
Table 1: Summary of algorithms.

Algorithm acronym | P update | C update | W update
EEnt              | (5)      | (11)     | (9)
EsGini            | (5)      | (11)     | (10)
CEnt              | (5)      | (12)     | (9)
CsGini            | (5)      | (12)     | (10)

In both cases, the updated value of $w_{jl}$ is inversely related
to $\sum_{x_i \in R_j} g_l(x_i, c_j)$. This has various interpretations based on the nature of $g_l$. For example, when we consider the ESVaD measure, $w_{jl}$ is inversely related to the variance of the l-th element of the data vectors in the j-th cluster. In other words, when the variance along a particular dimension is high in a cluster, then that dimension is less important to the cluster. This popular heuristic has been used in various contexts (such as relevance feedback) in the literature [9]. Similarly, when we consider the CSVaD measure, $w_{jl}$ is directly proportional to the correlation of the l-th dimension in the j-th cluster.
3.2 Update of Centroids
Learning ESVaD Measures: Substituting the ESVaD measure in the objective function and solving the first-order necessary conditions, we observe that

$$c_{jl} = \frac{1}{|R_j|} \sum_{x_i \in R_j} x_{il} \qquad (11)$$

minimizes $J_{ESVaD}(W, C)$.
Learning CSVaD Measures: Let $\tilde{x}_{il} = w_{jl} x_{il}$. Then, using the Cauchy-Schwarz inequality, it can be shown that

$$c_{jl} = \frac{1}{|R_j|} \sum_{x_i \in R_j} \tilde{x}_{il} \qquad (12)$$

maximizes $\sum_{x_i \in R_j} \sum_{l=1}^{d} w_{jl} x_{il} c_{jl}$. Hence, (12) also minimizes the objective function when CSVaD is used as the dissimilarity measure.
Table 1 summarizes the update equations used in various
algorithms. We refer to this set of algorithms as SVaD
learning algorithms.
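The two centroid updates are equally direct; a small sketch (our own function names), assuming every cluster is non-empty.

```python
import numpy as np

def esvad_centroids(X, labels, K):
    """Update (11): each centroid is the mean of its cluster members."""
    return np.vstack([X[labels == j].mean(axis=0) for j in range(K)])

def csvad_centroids(X, labels, W, K):
    """Update (12): each centroid is the mean of the weight-scaled vectors w_j * x_i."""
    return np.vstack([(W[j] * X[labels == j]).mean(axis=0) for j in range(K)])
```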
EXPERIMENTS
In this section, we present an experimental study of the algorithms
described in the previous sections. We applied the
proposed algorithms on various text data sets and compared
the performance of EEnt and EsGini with that of K-Means,
CSCAD and DGK algorithms. The reason for choosing the
K-Means algorithm (KMA) apart from CSCAD and DGK
is that it provides a baseline for assessing the advantages of
feature weighting. KMA is also a popular algorithm for text
clustering. We have included a brief description of CSCAD
and DGK algorithms in Appendix A.
Text data sets are sparse and high dimensional. We consider
standard labeled document collections and test the
proposed algorithms for their ability to discover dissimilarity
measures that distinguish one class from another without
actually considering the class labels of the documents. We
measure the success of the algorithms by the purity of the
regions that they discover.
4.1 Data Sets
We performed our experiments on three standard data
sets: 20 News Group, Yahoo K1, and Classic 3. These data
sets are described below.
20 News Group³: We considered different subsets of the 20 News Group data that are known to contain clusters of varying degrees of separation [10]. As in [10], we considered three random samples of three subsets of the 20 News Group data. The subset denoted Binary has 250 documents each from talk.politics.mideast and talk.politics.misc. Multi5 has 100 documents each from comp.graphics, rec.motorcycles, rec.sport.baseball, sci.space, and talk.politics.mideast. Finally, Multi10 has 50 documents each from alt.atheism, comp.sys.mac.hardware, misc.forsale, rec.autos, rec.sport.hockey, sci.crypt, sci.electronics, sci.med, sci.space, and talk.politics.guns. It may be noted that the Binary data sets have two highly overlapping classes. Each of the Multi5 data sets has samples from 5 distinct classes, whereas the Multi10 data sets have only a few samples from each of 10 different classes. The size of the vocabulary used to represent the documents is about 4000 for the Binary data set, about 3200 for Multi5, and about 2800 for Multi10. We observed that the relative performance of the algorithms on various samples of the Binary, Multi5 and Multi10 data sets was similar. Hence, we report results on only one of them.
Yahoo K1⁴: This data set contains 2340 Reuters news articles downloaded from Yahoo in 1997. There are 494 from Health, 1389 from Entertainment, 141 from Sports, 114 from Politics, 60 from Technology and 142 from Business. After preprocessing, the documents from this data set are represented using 12015 words. Note that this data set has samples from 6 different classes, and the distribution of data points across the classes is uneven, ranging from 60 to 1389.
Classic 3⁵: The Classic 3 data set contains 1400 aerospace systems abstracts from the Cranfield collection, 1033 medical abstracts from the Medline collection and 1460 information retrieval abstracts from the Cisi collection, making up 3893 documents in all. After preprocessing, this data set has 4301 words. The points are almost equally distributed among the three distinct classes.
The data sets were preprocessed in two major steps. First, a set of words (the vocabulary) is extracted, and then each document is represented with respect to this vocabulary. Finding the vocabulary includes: (1) elimination of the standard list of stop words from the documents, (2) application of Porter stemming⁶ for term normalization, and (3) keeping only the words which appear in at least 3 documents. We represent each document by its unitized frequency vector.
4.2 Evaluation of Algorithms
We use the accuracy measure to compare the performance of the various algorithms. Let $a_{ij}$ represent the number of data points from class i that are in cluster j. Then the accuracy of the partition is given by $\sum_j \max_i a_{ij} / n$, where n is the total number of data points. It is to be noted that points coming from a single class need not form a single cluster: there could be multiple clusters in a class that represent sub-classes. We study the performance of the SVaD learning algorithms for various values of K, i.e., the number of clusters.

Footnotes:
3. http://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.tar.gz
4. ftp://ftp.cs.umn.edu/dept/users/boley/PDDPdata/doc-K
5. ftp://ftp.cs.cornell.edu/pub/smart
6. http://www.tartarus.org/~martin/PorterStemmer/

Table 2: Evolution of J(W, C) and accuracy over iterations when EEnt is applied on a Multi5 data set.

Iteration | 0    | 1     | 2     | 3     | 4     | 5
J(W, C)   | --   | 334.7 | 329.5 | 328.3 | 328.1 | 327.8
Accuracy  | 73.8 | 80.2  | 81.4  | 81.6  | 82    | 82
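The accuracy (cluster purity) measure just described can be computed as follows; a minimal sketch assuming the class labels are non-negative integers.

```python
import numpy as np

def clustering_accuracy(class_labels, cluster_labels):
    """sum_j max_i a_ij / n, where a_ij counts class-i points falling in cluster j."""
    class_labels = np.asarray(class_labels)
    correct = 0
    for j in np.unique(cluster_labels):
        members = class_labels[cluster_labels == j]
        correct += np.bincount(members).max()   # size of the majority class in cluster j
    return correct / len(class_labels)
```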
4.3 Experimental Setup
In our implementations, we have observed that the proposed algorithms, if applied on randomly initialized centroids, show unstable behavior. One reason for this behavior is that the number of parameters estimated in feature-weighting clustering algorithms is twice as large as that estimated by the traditional KMA. We therefore first estimate the cluster centers giving equal weights to all the dimensions using KMA, and then fine-tune the cluster centers and the weights using the feature-weighting clustering algorithms. In every iteration, the new sets of weights are updated as follows. Let $w^{n}(t+1)$ represent the weights computed using one of (9), (10), (14) or (15) in iteration (t+1) and $w(t)$ the weights in iteration t. Then, the weights in iteration (t+1) are

$$w(t+1) = (1 - \alpha(t))\, w(t) + \alpha(t)\, w^{n}(t+1), \qquad (13)$$

where $\alpha(t) \in [0, 1]$ decreases with t; that is, $\alpha(t) = \gamma\,\alpha(t-1)$ for a given constant $\gamma \in [0, 1]$. In our experiments, we observed that the variance of the purity values for different initial values $\alpha(0)$ and for $\gamma$ above 0.5 is very small. Hence, we report the results for $\alpha(0) = 0.5$ and $\gamma = 0.5$. We set the value of $\delta_j$ to 1.
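The damping schedule in (13) amounts to a geometrically shrinking interpolation step; a small sketch using the values α(0) = 0.5 and γ = 0.5 reported above (the symbol names are our reconstruction).

```python
def damped_weight_update(w_old, w_new, t, alpha0=0.5, gamma=0.5):
    """Update (13): w(t+1) = (1 - alpha(t)) w(t) + alpha(t) w_n(t+1),
    with alpha(t) = alpha0 * gamma**t shrinking geometrically."""
    alpha_t = alpha0 * (gamma ** t)
    return (1.0 - alpha_t) * w_old + alpha_t * w_new
```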
It may be noted that when the documents are represented
as unit vectors, KMA with the cosine dissimilarity measure
and Euclidean distance measure would yield the same clusters
. This is essentially the same as Spherical K-Means algorithms
described in [3]. Therefore, we consider only the
weighted Euclidean measure and restrict our comparisons to
EEnt and EsGini in the experiments.
Since the clusters obtained by KMA are used to initialize
all other algorithms considered here, and since the results
of KMA are sensitive to initialization, the accuracy numbers
reported in this section are averages over 10 random
initializations of KMA.
4.4 Results and Observations

4.4.1 Effect of SVaD Measures on Accuracies
In Table 2, we show a sample run of EEnt algorithm on
one of the Multi5 data sets. This table shows the evolution
of J (W, C) and the corresponding accuracies of the clusters
with the iterations. The accuracy, shown at iteration 0, is
that of the clusters obtained by KMA. The purity of clusters
increases with decrease in the value of the objective function
defined using SVaD measures. We have observed a similar
behavior of EEnt and EsGini on other data sets also. This
validates our hypothesis that SVaD measures capture the
underlying structure in the data sets more accurately.
4.4.2 Comparison with Other Algorithms
Figures 1 to 5 show the average accuracies of the various algorithms on the 5 data sets for various numbers of clusters. The accuracies of KMA and DGK are very close to each other; hence, in the figures, the lines corresponding to these algorithms are indistinguishable. The lines corresponding to CSCAD are also close to that of KMA in all the cases except Classic 3.
General observations: The accuracies of the SVaD algorithms follow the trend of the accuracies of the other algorithms. In all our experiments, both SVaD learning algorithms improve the accuracies of the clusters obtained by KMA. It is observed in our experiments that the improvement could be as large as 8% in some instances. EEnt and EsGini consistently perform better than DGK on all data sets and for all values of K. EEnt and EsGini perform better than CSCAD on all data sets except in the case of Classic 3 and for a few values of K.

Note that the weight update equation of CSCAD (15) may result in negative values of $w_{jl}$. Our experience with CSCAD shows that it is quite sensitive to initialization and it may have convergence problems. In contrast, it may be observed that the $w_{jl}$ in (9) and (10) are always positive. Moreover, in our experience, these two versions are much less sensitive to the choice of $\delta_j$.
Data specific observations: When K = 2, EEnt and
EsGini could not further improve the results of KMA on the
Binary data set. The reason is that the data set contains
two highly overlapping classes. However, for other values of
K, they marginally improve the accuracies.
In the case of Multi5, the accuracies of the algorithms are
non-monotonic with K. The improvement of accuracies is
large for intermediate values of K and small for extreme
values of K. When K = 5, KMA finds relatively stable
clusters.
Hence, SVaD algorithms are unable to improve
the accuracies as much as they did for intermediate values
of K. For larger values of K, the clusters are closely spaced
and hence there is little scope for improvement by the SVaD
algorithms.
Multi10 data sets are the toughest to cluster because of
the large number of classes present in the data. In this case,
the accuracies of the algorithms are monotonically increasing
with the number of clusters. The extent of improvement
of accuracies of SVaD algorithms over KMA is almost constant
over the entire range of K. This reflects the fact that
the documents in Multi10 data set are uniformly distributed
over feature space.
The distribution of documents in Yahoo K1 data set is
highly skewed. The extent of improvements that the SVaD
algorithms could achieve decreases with K. For higher values
of K, KMA is able to find almost pure sub-clusters, resulting
in accuracies of about 90%. This leaves little scope for
improvement.
The performance of CSCAD differs noticeably in the case
of Classic 3. It performs better than the SVaD algorithms
for K = 3 and better than EEnt for K = 9. However, for
larger values of K, the SVaD algorithms perform better than
the rest. As in the case of Multi5, the improvements of SVaD
algorithms over others are significant and consistent. One
may recall that Multi5 and Classic 3 consist of documents
from distinct classes.
Therefore, this observation implies
that when there are distinct clusters in the data set, KMA
yields confusing clusters when the number of clusters is over-specified. In this scenario, EEnt and EsGini can fine-tune the clusters to improve their purity.

Figure 1: Accuracy results on Binary data.

Figure 2: Accuracy results on Multi5 data.
SUMMARY AND CONCLUSIONS
We have defined a general class of spatially variant dissimilarity
measures and proposed algorithms to learn the measure
underlying a given data set in an unsupervised learning
framework.
Through our experiments on various textual
data sets, we have shown that such a formulation of dissimilarity
measure can more accurately capture the hidden
structure in the data than a standard Euclidean measure
that does not vary over feature space. We have also shown
that the proposed learning algorithms perform better than
other similar algorithms in the literature, and have better
stability properties.
Even though we have applied these algorithms only to
text data sets, the algorithms derived here do not assume
any specific characteristics of textual data sets. Hence, they are applicable to general data sets. Since the algorithms perform better for larger K, it would be interesting to investigate whether they can be used to find subtopics of a topic. Finally, it will be interesting to learn SVaD measures for labeled data sets.

Figure 3: Accuracy results on Multi10 data.

Figure 4: Accuracy results on Yahoo K1 data.

Figure 5: Accuracy results on Classic 3 data.
REFERENCES
[1] J. C. Bezdek and R. J. Hathaway. Some notes on alternating optimization. In Proceedings of the 2002 AFSS International Conference on Fuzzy Systems, Calcutta, pages 288-300. Springer-Verlag, 2002.
[2] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39(2):1-38, 1977.
[3] I. S. Dhillon and D. S. Modha. Concept decompositions for large sparse text data using clustering. Machine Learning, 42(1):143-175, January 2001.
[4] E. Diday and J. C. Simon. Cluster analysis. In K. S. Fu, editor, Pattern Recognition, pages 47-94. Springer-Verlag, 1976.
[5] H. Frigui and O. Nasraoui. Simultaneous clustering and attribute discrimination. In Proceedings of FUZZ-IEEE, pages 158-163, San Antonio, 2000.
[6] H. Frigui and O. Nasraoui. Simultaneous categorization of text documents and identification of cluster-dependent keywords. In Proceedings of FUZZ-IEEE, pages 158-163, Honolulu, Hawaii, 2001.
[7] D. E. Gustafson and W. C. Kessel. Fuzzy clustering with the fuzzy covariance matrix. In Proceedings of IEEE CDC, pages 761-766, San Diego, California, 1979.
[8] R. Krishnapuram and J. Kim. A note on fuzzy clustering algorithms for Gaussian clusters. IEEE Transactions on Fuzzy Systems, 7(4):453-461, August 1999.
[9] Y. Rui, T. S. Huang, and S. Mehrotra. Relevance feedback techniques in interactive content-based image retrieval. In Storage and Retrieval for Image and Video Databases (SPIE), pages 25-36, 1998.
[10] N. Slonim and N. Tishby. Document clustering using word clusters via the information bottleneck method. In Proceedings of SIGIR, pages 208-215, 2000.
APPENDIX

A. OTHER FEATURE WEIGHTING CLUSTERING TECHNIQUES
A.1 Diagonal Gustafson-Kessel (DGK)
Gustafson and Kessel [7] associate each cluster with a different norm matrix. Let $A = (A_1, \ldots, A_k)$ be the set of k norm matrices associated with the k clusters. Let $u_{ji}$ be the fuzzy membership of $x_i$ in cluster j and $U = [u_{ji}]$. By restricting the $A_j$ to be diagonal and $u_{ji} \in \{0, 1\}$, we can reformulate the original optimization problem in terms of SVaD measures as follows:

$$\min_{C,W} J_{DGK}(C, W) = \sum_{j=1}^{k} \sum_{x_i \in R_j} \sum_{l=1}^{M} w_{jl}\, g_l(x_i, c_j),$$

subject to $\prod_l w_{jl} = \rho_j$. Note that this problem can be solved using the same AO algorithms described in Section 3. Here, the updates for C and P remain the same as those discussed in Section 3. It can be easily shown that, when $\rho_j = 1$ for all j,

$$w_{jl} = \frac{\left( \prod_{m=1}^{M} \sum_{x_i \in R_j} g_m(x_i, c_j) \right)^{1/M}}{\sum_{x_i \in R_j} g_l(x_i, c_j)} \qquad (14)$$

minimizes $J_{DGK}$ for a given C.
A.2 Crisp Simultaneous Clustering and Attribute Discrimination (CSCAD)
Frigui et al. [5, 6] considered a fuzzy version of the feature-weighting based clustering problem (SCAD). To make a fair comparison of our algorithms with SCAD, we derive its crisp version and refer to it as Crisp SCAD (CSCAD). In [5, 6], the Gini measure is used for regularization. If the Gini measure is considered with r = 1, the weights $w_{jl}$ that minimize the corresponding objective function for a given C and P are given by

$$w_{jl} = \frac{1}{M} + \frac{1}{2\delta_j} \left( \frac{1}{M} \sum_{n=1}^{M} \sum_{x_i \in R_j} g_n(x_i, c_j) - \sum_{x_i \in R_j} g_l(x_i, c_j) \right). \qquad (15)$$

Since SCAD uses the weighted Euclidean measure, the update equations of the centroids in CSCAD remain the same as in (11). The update equation for $w_{jl}$ in SCAD is quite similar to (15). One may note that, in (15), the value of $w_{jl}$ can become negative. In [5], a heuristic is used to estimate the value of $\delta_j$ in every iteration, and the negative values of $w_{jl}$ are set to zero before normalizing the weights.
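A sketch of the CSCAD weight update (15) together with the clip-and-renormalize heuristic mentioned above; `G[j, l]` again stands for $\sum_{x_i \in R_j} g_l(x_i, c_j)$ and `delta` for the per-cluster parameters (our own names, and a simplification of the heuristic in [5]).

```python
import numpy as np

def cscad_weights(G, delta):
    """Update (15), followed by setting negative weights to zero and renormalizing."""
    M = G.shape[1]
    w = 1.0 / M + (G.mean(axis=1, keepdims=True) - G) / (2.0 * delta[:, None])
    w = np.clip(w, 0.0, None)
    return w / w.sum(axis=1, keepdims=True)
```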
Keywords: clustering; feature weighting; spatially varying dissimilarity (SVaD); learning dissimilarity measures; dissimilarity measure
Learning the Unified Kernel Machines for Classification

ABSTRACT
Kernel machines have been shown as the state-of-the-art learning techniques for classification. In this paper, we propose a novel general framework of learning the Unified Kernel Machines (UKM) from both labeled and unlabeled data. Our proposed framework integrates supervised learning, semi-supervised kernel learning, and active learning in a unified solution. In the suggested framework, we particularly focus our attention on designing a new semi-supervised kernel learning method, i.e., Spectral Kernel Learning (SKL), which is built on the principles of kernel target alignment and unsupervised kernel design. Our algorithm is related to an equivalent quadratic programming problem that can be efficiently solved. Empirical results have shown that our method is more effective and robust to learn the semi-supervised kernels than traditional approaches. Based on the framework, we present a specific paradigm of unified kernel machines with respect to Kernel Logistic Regressions (KLR), i.e., Unified Kernel Logistic Regression (UKLR). We evaluate our proposed UKLR classification scheme in comparison with traditional solutions. The promising results show that our proposed UKLR paradigm is more effective than the traditional classification approaches.

INTRODUCTION
Classification is a core data mining technique and has been
actively studied in the past decades. In general, the goal of
classification is to assign unlabeled testing examples with a
set of predefined categories. Traditional classification methods
are usually conducted in a supervised learning way, in
which only labeled data are used to train a predefined classification
model. In literature, a variety of statistical models
have been proposed for classification in the machine learning
and data mining communities. One of the most popular
and successful methodologies is the kernel-machine techniques
, such as Support Vector Machines (SVM) [25] and
Kernel Logistic Regressions (KLR) [29]. Like other early
work for classification, traditional kernel-machine methods
are usually performed in the supervised learning way, which
consider only the labeled data in the training phase.
It is obvious that a good classification model should take
advantages on not only the labeled data, but also the unlabeled
data when they are available. Learning on both labeled
and unlabeled data has become an important research
topic in recent years. One way to exploit the unlabled data
is to use active learning [7]. The goal of active learning is
to choose the most informative example from the unlabeled
data for manual labeling. In the past years, active learning
has been studied for many classification tasks [16].
Another emerging popular technique to exploit unlabeled
data is semi-supervised learning [5], which has attracted
a surge of research attention recently [30].
A variety of
machine-learning techniques have been proposed for semi-supervised
learning, in which the most well-known approaches
are based on the graph Laplacians methodology [28, 31, 5].
While promising results have been widely reported on this research topic, there are so far few comprehensive semi-supervised learning schemes applicable to large-scale classification problems.
Although supervised learning, semi-supervised learning and active learning have been studied separately, so far there are few comprehensive schemes that combine these techniques effectively for classification tasks. To this
end, we propose a general framework of learning the Unified
Kernel Machines (UKM) [3, 4] by unifying supervised
kernel-machine learning, semi-supervised learning, unsupervised
kernel design and active learning together for large-scale
classification problems.
The rest of this paper is organized as follows. Section 2 reviews
related work of our framework and proposed solutions.
Section 3 presents our framework of learning the unified kernel machines. Section 4 proposes a new algorithm of learning
semi-supervised kernels by Spectral Kernel Learning (SKL).
Section 5 presents a specific UKM paradigm for classification
, i.e., the Unified Kernel Logistic Regression (UKLR).
Section 6 evaluates the empirical performance of our proposed
algorithm and the UKLR classification scheme. Section
7 sets out our conclusion.
RELATED WORK
Kernel machines have been widely studied for data classification
in the past decade.
Most of the earlier studies on kernel machines are based on supervised learning.
One of the most well-known techniques is the Support Vector
Machines, which have achieved many successful stories
in a variety of applications [25].
In addition to SVM, a
series of kernel machines have also been actively studied,
such as Kernel Logistic Regression [29], Boosting [17], Regularized
Least-Square (RLS) [12] and Minimax Probability
Machines (MPM) [15], which have shown comparable performance
with SVM for classification. The main theoretical
foundation behind many of the kernel machines is the theory
of regularization and reproducing kernel Hilbert space
in statistical learning [17, 25]. Some theoretical connections
between the various kernel machines have been explored in
recent studies [12].
Semi-supervised learning has recently received a surge of
research attention for classification [5, 30]. The idea of semi-supervised
learning is to use both labeled and unlabeled data
when constructing the classifiers for classification tasks. One
of the most popular solutions in semi-supervised learning
is based on the graph theory [6], such as Markov random
walks [22], Gaussian random fields [31], Diffusion models [13]
and Manifold learning [2]. They have demonstrated some
promising results on classification.
Some recent studies have begun to seek connections between
the graph-based semi-supervised learning and the kernel
machine learning. Smola and Kondor showed some theoretical
understanding between kernel and regularization based
on the graph theory [21]. Belkin et al. developed a framework
for regularization on graphs and provided some analysis
on generalization error bounds [1]. Based on the emerging
theoretical connections between kernels and graphs, some
recent work has proposed to learn the semi-supervised kernels
by graph Laplacians [32]. Zhang et al. recently provided
a theoretical framework of unsupervised kernel design
and showed that the graph Laplacians solution can be considered
as an equivalent kernel learning approach [27]. All
of the above studies have formed the solid foundation for
semi-supervised kernel learning in this work.
To exploit the unlabeled data, another research direction is to employ active learning to reduce the labeling effort in classification tasks. Active learning, or pool-based
active learning, has been proposed as an effective technique
for reducing the amount of labeled data in traditional supervised
classification tasks [19]. In general, the key of active
learning is to choose the most informative unlabeled examples
for manual labeling.
A lot of active learning methods
have been proposed in the community. Typically they
measure the classification uncertainty by the amount of disagreement
to the classification model [9, 10] or measure the
distance of each unlabeled example away from the classification
boundary [16, 24].
FRAMEWORK OF LEARNING UNIFIED KERNEL MACHINES
In this section, we present the framework of learning the
unified kernel machines by combining supervised kernel machines
, semi-supervised kernel learning and active learning
techniques into a unified solution. Figure 1 gives an overview
of our proposed scheme. For simplicity, we restrict our discussions
to classification problems.
Let $\mathcal{M}(K, \alpha)$ denote a kernel machine that has some underlying probabilistic model, such as kernel logistic regression (or support vector machines). In general, a kernel machine contains two components, i.e., the kernel K (either a kernel function or simply a kernel matrix) and the model parameters $\alpha$. In traditional supervised kernel-machine learning, the kernel K is usually a known parametric kernel function and the goal of the learning task is to determine the model parameters $\alpha$. This often limits the performance of the kernel machine if the specified kernel is not appropriate. To this end, we propose a unified scheme that learns the unified kernel machine over both the kernel K and the model parameters $\alpha$ together. In order to exploit the unlabeled data, we suggest combining semi-supervised kernel learning and active learning techniques for learning the unified kernel machines effectively from the labeled and unlabeled data. More specifically, we outline a general framework of learning the unified kernel machine as follows.
Figure 1: Learning the Unified Kernel Machines
Let L denote the labeled data and U denote the unlabeled data. The goal of the unified kernel machine learning task is to learn the kernel machine $\mathcal{M}(K^*, \alpha^*)$ that can classify the data effectively. Specifically, it includes the following five steps:
Step 1. Kernel Initialization
The first step is to initialize the kernel component $K_0$ of the kernel machine $\mathcal{M}(K_0, \alpha_0)$. Typically, users can specify the initial kernel $K_0$ (a function or a matrix) with a standard kernel. When some domain knowledge is available, users can also design a kernel using that domain knowledge (or some data-dependent kernels).
Step 2. Semi-Supervised Kernel Learning
The initial kernel may not be good enough to classify the data correctly. Hence, we suggest employing the semi-supervised kernel learning technique to learn a new kernel K by engaging both the labeled data L and the unlabeled data U.
Step 3. Model Parameter Estimation
When the kernel K is known, one can simply employ standard supervised kernel-machine learning, under some model assumption such as Kernel Logistic Regression or Support Vector Machines, to estimate the model parameters $\alpha$.
Step 4. Active Learning
In many classification tasks, labeling is expensive. Active learning is an important method to reduce the human effort in labeling. Typically, we choose a batch S of the most informative examples that can most effectively update the current kernel machine $\mathcal{M}(K, \alpha)$.
Step 5. Convergence Evaluation
The last step is the convergence evaluation in which we
check whether the kernel machine is good enough for
the classification task. If not, we will repeat the above
steps until a satisfactory kernel machine is acquired.
This is a general framework of learning unified kernel machines
. In this paper, we focus our main attention on the semi-supervised kernel learning technique, which is a core component of learning the unified kernel machines.
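Before turning to the kernel learning component, the five steps can be summarized as a short training loop. The sketch below is schematic only: `spectral_kernel_learning`, `fit_klr`, `select_most_informative`, and `oracle` stand for the components described in the following sections and are supplied by the user; none of them is a library routine.

```python
def unified_kernel_machine(K0, y_labeled, labeled_idx, unlabeled_idx,
                           spectral_kernel_learning, fit_klr,
                           select_most_informative, oracle, max_rounds=10):
    """Schematic UKM loop: Steps 1-5 with one label query per round."""
    K, alpha = K0, None                                            # Step 1: kernel initialization
    for _ in range(max_rounds):
        K = spectral_kernel_learning(K0, labeled_idx, y_labeled)   # Step 2
        alpha = fit_klr(K, labeled_idx, y_labeled)                 # Step 3
        if not unlabeled_idx:                                      # Step 5: nothing left to query
            break
        picked = select_most_informative(K, alpha, unlabeled_idx)  # Step 4
        labeled_idx.append(picked)
        y_labeled.append(oracle(picked))                           # ask for the label
        unlabeled_idx.remove(picked)
    return K, alpha
```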
SPECTRAL KERNEL LEARNING
We propose a new semi-supervised kernel learning method,
which is a fast and robust algorithm for learning semi-supervised
kernels from labeled and unlabeled data. In the following
parts, we first introduce the theoretical motivations and then
present our spectral kernel learning algorithm. Finally, we
show the connections of our method to existing work and
justify the effectiveness of our solution from empirical observations
.
4.1 Theoretical Foundation
Let us first consider a standard supervised kernel learning problem. Assume that the data (X, Y) are drawn from an unknown distribution $\mathcal{D}$. The goal of supervised learning is to find a prediction function p(X) that minimizes the following expected true loss:

$$\mathrm{E}_{(X,Y) \sim \mathcal{D}}\, L(p(X), Y),$$

where $\mathrm{E}_{(X,Y) \sim \mathcal{D}}$ denotes the expectation over the true underlying distribution $\mathcal{D}$. In order to achieve a stable estimation, we usually need to restrict the size of the hypothesis function family. Given l training examples $(x_1, y_1), \ldots, (x_l, y_l)$, typically we train a prediction function $\hat{p}$ in a reproducing Hilbert space $\mathcal{H}$ by minimizing the empirical loss [25]. Since the reproducing Hilbert space can be large, to avoid overfitting problems, we often consider a regularized method as follows:

$$\hat{p} = \arg\inf_{p \in \mathcal{H}} \frac{1}{l} \sum_{i=1}^{l} L(p(x_i), y_i) + \lambda \|p\|^2_{\mathcal{H}}, \qquad (1)$$

where $\lambda$ is a chosen positive regularization parameter. It can be shown that the solution of (1) can be represented by the following kernel method:

$$\hat{p}(x) = \sum_{i=1}^{l} \hat{\alpha}_i k(x_i, x),$$
$$\hat{\alpha} = \arg\inf_{\alpha \in \mathbb{R}^l} \frac{1}{l} \sum_{i=1}^{l} L(p(x_i), y_i) + \lambda \sum_{i,j=1}^{l} \alpha_i \alpha_j k(x_i, x_j),$$

where $\alpha$ is a parameter vector to be estimated from the data and k is a kernel, known as the kernel function. Typically a kernel returns the inner product between the mapping images of two given data examples, such that $k(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$ for $x_i, x_j \in \mathcal{X}$.
Let us now consider a semi-supervised learning setting. Given labeled data $\{(x_i, y_i)\}_{i=1}^{l}$ and unlabeled data $\{x_j\}_{j=l+1}^{n}$, we consider learning the real-valued vector $f \in \mathbb{R}^m$ by the following semi-supervised learning method:

$$\hat{f} = \arg\inf_{f \in \mathbb{R}^m} \frac{1}{n} \sum_{i=1}^{n} L(f_i, y_i) + \lambda f^\top K^{-1} f, \qquad (2)$$

where K is an $m \times m$ kernel matrix with $K_{i,j} = k(x_i, x_j)$. Zhang et al. [27] proved that the solution of the above semi-supervised learning is equivalent to the solution of standard supervised learning in (1), such that

$$\hat{f}_j = \hat{p}(x_j), \quad j = 1, \ldots, m. \qquad (3)$$
The theorem offers a principle of unsupervised kernel design: one can design a new kernel $\bar{k}(\cdot, \cdot)$ based on the unlabeled data and then replace the original kernel k by $\bar{k}$ in the standard supervised kernel learning. More specifically, the framework of spectral kernel design suggests designing the new kernel matrix $\bar{K}$ by a function g as follows:

$$\bar{K} = \sum_{i=1}^{n} g(\lambda_i)\, v_i v_i^\top, \qquad (4)$$

where $(\lambda_i, v_i)$ are the eigen-pairs of the original kernel matrix K, and the function $g(\cdot)$ can be regarded as a filter function or a transformation function that modifies the spectra of the kernel. The authors in [27] show a theoretical justification that designing a kernel matrix with faster spectral decay rates should result in better generalization performance, which offers an important principle in learning an effective kernel matrix.
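As an illustration of (4), the sketch below builds a transformed kernel from the eigen-decomposition of an initial kernel matrix; the filter g is passed in by the caller, and the step filter shown reproduces the "cluster kernel" style choice discussed later (all names are ours).

```python
import numpy as np

def spectral_kernel_design(K, g):
    """Equation (4): K_bar = sum_i g(lambda_i) v_i v_i^T for a spectral filter g."""
    lam, V = np.linalg.eigh(K)          # eigenvalues ascending, columns of V are eigenvectors
    return (V * g(lam)) @ V.T

def step_filter(lam, d):
    """Keep the d largest eigenvalues as 1 and zero out the rest."""
    keep = np.zeros_like(lam)
    keep[np.argsort(lam)[::-1][:d]] = 1.0
    return keep

# Example: K_bar = spectral_kernel_design(K, lambda lam: step_filter(lam, d=20))
```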
On the other hand, there are some recent papers that
have studied theoretical principles for learning effective kernel
functions or matrices from labeled and unlabeled data.
One important work is the kernel target alignment, which
can be used not only to assess the relationship between the
feature spaces by two kernels, but also to measure the similarity
between the feature space by a kernel and the feature
space induced by labels [8]. Specifically, given two kernel
matrices K
1
and K
2
, their relationship is defined by the
following score of alignment:
Definition 1. Kernel Alignment: The empirical alignment
of two given kernels K
1
and K
2
with respect to the
sample set S is the quantity:
^
A(K
1
, K
2
) =
K
1
, K
2 F
K
1
, K
1 F
K
2
, K
2 F
(5)
189
Research Track Paper
where K
i
is the kernel matrix induced by the kernel k
i
and
, is the Frobenius product between two matrices, i.e.,
K
1
, K
2 F
=
n
i,j=1
k
1
(x
i
, x
j
)k
2
(x
i
, x
j
).
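For concreteness, the empirical alignment score (5) can be computed directly from two kernel matrices; a small sketch with our own function name.

```python
import numpy as np

def kernel_alignment(K1, K2):
    """Equation (5): <K1, K2>_F / sqrt(<K1, K1>_F * <K2, K2>_F)."""
    frob = lambda A, B: np.sum(A * B)               # Frobenius inner product
    return frob(K1, K2) / np.sqrt(frob(K1, K1) * frob(K2, K2))

# For the label-induced target kernel: y = np.asarray(labels, float); T = np.outer(y, y)
# score = kernel_alignment(K_train_block, T)
```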
The above definition of kernel alignment offers a principle to learn the kernel matrix by assessing the relationship between a given kernel and a target kernel induced by the given labels. Let $y = \{y_i\}_{i=1}^{l}$ denote a vector of labels in which $y_i \in \{+1, -1\}$ for binary classification. Then the target kernel can be defined as $T = y y^\top$. Let K be the kernel matrix with the following structure:

$$K = \begin{pmatrix} K_{tr} & K_{trt} \\ K_{trt}^\top & K_t \end{pmatrix}, \qquad (6)$$

where $K_{ij} = \langle \phi(x_i), \phi(x_j) \rangle$, $K_{tr}$ denotes the matrix part of the "train-data block" and $K_t$ denotes the matrix part of the "test-data block."

The theory in [8] provides the principle of learning the kernel matrix, i.e., looking for a kernel matrix K with good generalization performance is equivalent to finding the matrix that maximizes the following empirical kernel alignment score:

$$\hat{A}(K_{tr}, T) = \frac{\langle K_{tr}, T \rangle_F}{\sqrt{\langle K_{tr}, K_{tr} \rangle_F\, \langle T, T \rangle_F}}. \qquad (7)$$
This principle has been used to learn kernel matrices with multiple kernel combinations [14] and also semi-supervised kernels from graph Laplacians [32]. Motivated by this related theoretical work, we propose a new spectral kernel learning (SKL) algorithm, which learns the spectra of the kernel matrix by obeying both the principle of unsupervised kernel design and the principle of kernel target alignment.
4.2 Algorithm
Assume that we are given a set of labeled data $L = \{x_i, y_i\}_{i=1}^{l}$, a set of unlabeled data $U = \{x_i\}_{i=l+1}^{n}$, and an initial kernel matrix K. We first conduct the eigen-decomposition of the kernel matrix:

$$K = \sum_{i=1}^{n} \lambda_i\, v_i v_i^\top, \qquad (8)$$

where $(\lambda_i, v_i)$ are the eigen-pairs of K and are assumed to be in decreasing order, i.e., $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n$. For efficiency considerations, we select the top d eigen-pairs, such that

$$K_d = \sum_{i=1}^{d} \lambda_i\, v_i v_i^\top \approx K, \qquad (9)$$

where the parameter $d \ll n$ is a dimension cutoff factor that can be determined by some criteria, such as the cumulative eigen energy.
Based on the principle of unsupervised kernel design, we consider learning the kernel matrix as follows:

$$\bar{K} = \sum_{i=1}^{d} \mu_i\, v_i v_i^\top, \qquad (10)$$

where $\mu_i \geq 0$ are the spectral coefficients of the new kernel matrix. The goal of the spectral kernel learning (SKL) algorithm is to find the optimal spectral coefficients $\mu_i$ for the following optimization:

$$\max_{\bar{K}, \mu} \ \hat{A}(\bar{K}_{tr}, T) \qquad (11)$$
subject to
$$\bar{K} = \sum_{i=1}^{d} \mu_i\, v_i v_i^\top, \quad \mathrm{trace}(\bar{K}) = 1,$$
$$\mu_i \geq 0, \quad \mu_i \geq C \mu_{i+1}, \quad i = 1, \ldots, d-1,$$

where C is introduced as a decay factor that satisfies $C \geq 1$, the $v_i$ are the top d eigenvectors of the original kernel matrix K, $\bar{K}_{tr}$ is the kernel matrix restricted to the (labeled) training data and T is the target kernel induced by the labels. Note that C is an important parameter that controls the decay rate of the spectral coefficients, which will influence the overall performance of the kernel machine.
The above optimization problem belongs to convex optimization and is usually regarded as a semi-definite programming (SDP) problem [14], which may not be computationally efficient. In the following, we turn it into a Quadratic Programming (QP) problem that can be solved much more efficiently. Using the fact that the objective function (7) is invariant to the constant term $\langle T, T \rangle_F$, we can rewrite the objective function in the following form:

$$\frac{\langle \bar{K}_{tr}, T \rangle_F}{\sqrt{\langle \bar{K}_{tr}, \bar{K}_{tr} \rangle_F}}. \qquad (12)$$
The above alignment is invariant to scales. In order to remove
the trace constraint in (11), we consider the following
alternative approach. Instead of maximizing the objective
function (12) directly, we can fix the numerator to 1 and
then minimize the denominator. Therefore, we can turn the
optimization problem into:
$$\min_{\mu} \ \langle \bar{K}_{tr}, \bar{K}_{tr} \rangle_F \qquad (13)$$
subject to
$$\bar{K} = \sum_{i=1}^{d} \mu_i\, v_i v_i^\top, \quad \langle \bar{K}_{tr}, T \rangle_F = 1,$$
$$\mu_i \geq 0, \quad \mu_i \geq C \mu_{i+1}, \quad i = 1, \ldots, d-1.$$

This minimization problem without the trace constraint is equivalent to the original maximization problem with the trace constraint.
Let vec(A) denote the column vectorization of a matrix A and let $D = [\mathrm{vec}(V_{1,tr}) \ldots \mathrm{vec}(V_{d,tr})]$ be a constant matrix of size $l^2 \times d$, in which the d matrices $V_i = v_i v_i^\top$ are of size $l \times l$. It is not difficult to show that the above problem is equivalent to the following optimization:

$$\min_{\mu} \ \|D\mu\| \qquad (14)$$
subject to
$$\mathrm{vec}(T)^\top D\mu = 1, \quad \mu_i \geq 0, \quad \mu_i \geq C \mu_{i+1}, \quad i = 1, \ldots, d-1.$$

Minimizing the norm is then equivalent to minimizing the squared norm. Hence, we can obtain the final optimization
problem as

$$\min_{\mu} \ \mu^\top D^\top D\, \mu$$
subject to
$$\mathrm{vec}(T)^\top D\mu = 1, \quad \mu_i \geq 0, \quad \mu_i \geq C \mu_{i+1}, \quad i = 1, \ldots, d-1.$$

This is a standard Quadratic Programming (QP) problem that can be solved efficiently.

Figure 2: Illustration of (a) cumulative eigen energy and (b) the spectral coefficients for different decay factors on the Ionosphere dataset. The initial kernel is a linear kernel and the number of labeled data is 20.

Figure 3: Classification performance of semi-supervised kernels with different decay factors ((a) C=1, (b) C=2, (c) C=3) on the Ionosphere dataset. The initial kernel is a linear kernel and the number of labeled data is 20.
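The whole spectral kernel learning step fits in a few lines. The sketch below solves the QP with SciPy's general-purpose SLSQP solver rather than the MATLAB quadprog routine used in the experiments, and takes d and C values in the range discussed later; it is an illustration of the formulation above, not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize

def spectral_kernel_learning(K, y_labeled, d=20, C=2.0):
    """Learn mu for K_bar = sum_i mu_i v_i v_i^T by the QP above.
    K: initial n x n kernel; y_labeled: +1/-1 labels of the first l points."""
    l = len(y_labeled)
    lam, V = np.linalg.eigh(K)
    V = V[:, np.argsort(lam)[::-1][:d]]               # top-d eigenvectors
    T = np.outer(y_labeled, y_labeled).astype(float)  # target kernel on the labeled block
    # Columns of D are vec(V_{i,tr}) with V_i = v_i v_i^T restricted to the labeled block.
    D = np.column_stack([np.outer(V[:l, i], V[:l, i]).ravel() for i in range(d)])
    t, Q = T.ravel(), D.T @ D

    cons = [{'type': 'eq',   'fun': lambda mu: t @ (D @ mu) - 1.0},
            {'type': 'ineq', 'fun': lambda mu: mu[:-1] - C * mu[1:]}]
    res = minimize(lambda mu: mu @ Q @ mu, np.full(d, 1.0 / d),
                   bounds=[(0.0, None)] * d, constraints=cons, method='SLSQP')
    mu = res.x
    return (V * mu) @ V.T                             # the learned kernel K_bar
```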
4.3 Connections and Justifications
The essence of our semi-supervised kernel learning method is based on the theories of unsupervised kernel design and kernel target alignment. More specifically, we consider an effective dimension-reduction method to learn the semi-supervised kernel that maximizes the kernel alignment score. By examining the work on unsupervised kernel design, the following two pieces of work can be summarized as special cases of the spectral kernel learning framework:

Cluster Kernel: This method adopts a "[1, . . . , 1, 0, . . . , 0]" kernel that has been used in spectral clustering [18]. It sets the top spectral coefficients to 1 and the rest to 0, i.e.,

$$\mu_i = \begin{cases} 1 & \text{for } i \leq d \\ 0 & \text{for } i > d \end{cases}. \qquad (15)$$

For comparison, we refer to this method as the "Cluster kernel," denoted by $K_{Cluster}$.
Truncated Kernel: Another method is the truncated kernel, which keeps only the top d spectral coefficients:

$$\mu_i = \begin{cases} \lambda_i & \text{for } i \leq d \\ 0 & \text{for } i > d \end{cases}, \qquad (16)$$

where the $\lambda_i$ are the top eigenvalues of the initial kernel. We can see that this is exactly the method of kernel principal component analysis [20], which keeps only the d most significant principal components of a given kernel. For comparison, we denote this method as $K_{Trunc}$.
Figure 4: Example of spectral coefficients (a) and classification performance ((b) C=1, (c) C=2) impacted by different decay factors on the Ionosphere dataset. The initial kernel is an RBF kernel and the number of labeled data is 20.
Figure 5: Classification performance of semi-supervised kernels with different decay factors ((a) C=1, (b) C=2, (c) C=3) on the Heart dataset. The initial kernel is a linear kernel and the number of labeled data is 20.
In our case, in comparison with semi-supervised kernel
learning methods by graph Laplacians, our work is similar
to the approach in [32], which learns the spectral transformation
of graph Laplacians by kernel target alignment with
order constraints. However, we should emphasize two important
differences that will explain why our method can
work more effectively.
First, the work in [32] belongs to traditional graph based
semi-supervised learning methods which assume the kernel
matrix is derived from the spectral decomposition of graph
Laplacians.
Instead, our spectral kernel learning method
learns on any initial kernel and assume the kernel matrix is
derived from the spectral decomposition of the normalized
kernel.
Second, compared to the kernel learning method in [14],
the authors in [32] proposed to add order constraints into
the optimization of kernel target alignment [8] to enforce the
constraints of graph smoothness. In our case, we suggest
a decay factor C to constrain the relationship of spectral
coefficients in the optimization, which makes the spectral coefficients decay faster. In fact, if we ignore the difference of graph Laplacians and assume that the initial kernel in our method is given as $K = L^{-1}$, we can see that the method
in [32] can be regarded as a special case of our method when
the decay factor C is set to 1 and the dimension cut-off
parameter d is set to n.
4.4 Empirical Observations
To argue that C = 1 in the spectral kernel learning algorithm
may not be a good choice for learning an effective
kernel, we illustrate some empirical examples to justify the
motivation of our spectral kernel learning algorithm. One
goal of our spectral kernel learning methodology is to attain
a fast decay rate of the spectral coefficients of the kernel
matrix. Figure 2 illustrates an example of the change of the
resulting spectral coefficients using different decay factors in
our spectral kernel learning algorithms. From the figure, we
can see that the curves with larger decay factors (C = 2, 3)
have faster decay rates than the original kernel and the one
using C = 1. Meanwhile, we can see that the cumulative
eigen energy score converges to 100% quickly when the number
of dimensions is increased. This shows that we may use
a much smaller number of eigen-pairs in our semi-supervised
kernel learning algorithm for large-scale problems.
To examine in more detail the impact of different decay factors on performance, we evaluate the classification performance of spectral kernel learning methods with different
decay factors in Figure 3. In the figure, we compare
the performance of different kernels with respect to spectral
kernel design methods. We can see that two unsupervised
kernels, K
Trunc
and K
Cluster
, tend to perform better than
the original kernel when the dimension is small. But their
performances are not very stable when the number of dimensions
is increased. For comparison, the spectral kernel
learning method achieves very stable and good performance
when the decay factor C is larger than 1. When the decay
factor is equal to 1, the performance becomes unstable due
to the slow decay rates observed from our previous results
in Figure 3. This observation matches the theoretical justification
[27] that a kernel with good performance usually
favors a faster decay rate of spectral coefficients.
Figure 4 and Figure 5 illustrate more empirical examples
based on different initial kernels, in which similar results
can be observed. Note that our suggested kernel learning
method can learn on any valid kernel, and different initial
kernels will impact the performance of the resulting spectral
kernels. It is usually helpful if the initial kernel is provided
with domain knowledge.
UNIFIED KERNEL LOGISTIC REGRESSION
In this section, we present a specific paradigm based on
the proposed framework of learning unified kernel machines.
We assume the underlying probabilistic model of the kernel
machine is Kernel Logistic Regression (KLR). Based on
the UKM framework, we develop the Unified Kernel Logistic
Regression (UKLR) paradigm to tackle classification
tasks. Note that our framework is not restricted to the KLR
model, but also can be widely extended for many other kernel
machines, such as Support Vector Machine (SVM) and
Regularized Least-Square (RLS) classifiers.
Similar to other kernel machines, such as SVM, a KLR problem can be formulated in terms of a standard regularized form of loss + penalty in the reproducing kernel Hilbert space (RKHS):

$$\min_{f \in \mathcal{H}_K} \ \frac{1}{l} \sum_{i=1}^{l} \ln\!\left(1 + e^{-y_i f(x_i)}\right) + \frac{\lambda}{2} \|f\|^2_{\mathcal{H}_K}, \qquad (17)$$

where $\mathcal{H}_K$ is the RKHS induced by a kernel K and $\lambda$ is a regularization parameter. By the representer theorem, the optimal f(x) has the form

$$f(x) = \sum_{i=1}^{l} \alpha_i K(x, x_i), \qquad (18)$$

where the $\alpha_i$ are the model parameters. Note that we omit the constant term in f(x) for simplified notation. To solve for the KLR model parameters, there are a number of available techniques for effective solutions [29].
When the kernel K and the model parameters $\alpha$ are available, we use the following solution for active learning, which is simple and efficient for large-scale problems. More specifically, we measure the information entropy of each unlabeled data example as follows:

$$H(x; \alpha, K) = -\sum_{i=1}^{N_C} p(C_i|x)\, \log\!\big(p(C_i|x)\big), \qquad (19)$$
where $N_C$ is the number of classes, $C_i$ denotes the i-th class, and $p(C_i|x)$ is the probability of the data example x belonging to the i-th class, which can be naturally obtained from the current KLR model $(\alpha, K)$. The unlabeled data examples with the maximum entropy values will be considered the most informative data for labeling.

Algorithm: Unified Kernel Logistic Regression
Input:
    K_0: initial normalized kernel
    L: set of labeled data
    U: set of unlabeled data
Repeat:
    Spectral Kernel Learning:   K <- Spectral_Kernel(K_0, L, U)
    KLR Parameter Estimation:   α <- KLR_Solver(L, K)
    Convergence Test:           if converged, exit loop
    Active Learning:            x* <- arg max_{x ∈ U} H(x; α, K);
                                L <- L ∪ {x*};  U <- U \ {x*}
Until converged.
Output: UKLR = M(K, α).

Figure 6: The UKLR Algorithm.
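Given a matrix of class probabilities for the unlabeled pool produced by the current KLR model, the selection step can be sketched as follows (names are ours; a batch version simply takes the top-k indices).

```python
import numpy as np

def most_informative(prob_unlabeled, batch_size=1):
    """Pick the unlabeled examples with the largest entropy
    H(x) = -sum_i p(C_i|x) log p(C_i|x); rows of prob_unlabeled sum to 1."""
    p = np.clip(prob_unlabeled, 1e-12, 1.0)       # avoid log(0)
    H = -(p * np.log(p)).sum(axis=1)
    return np.argsort(H)[::-1][:batch_size]       # indices of the most uncertain examples
```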
By unifying the spectral kernel learning method proposed
in Section 3, we summarize the proposed algorithm of Unified
Kernel Logistic Regression (UKLR) in Figure 6. In the
algorithm, note that we can usually initialize a kernel by a
standard kernel with appropriate parameters determined by
cross validation or by a proper design of the initial kernel
with domain knowledge.
EXPERIMENTAL RESULTS
We discuss our empirical evaluation of the proposed framework
and algorithms for classification. We first evaluate the
effectiveness of our suggested spectral kernel learning algorithm
for learning semi-supervised kernels and then compare
the performance of our unified kernel logistic regression
paradigm with traditional classification schemes.
6.1 Experimental Testbed and Settings
We use datasets from the UCI machine learning repository¹. Four datasets are employed in our experiments. Table 1 shows the details of these four UCI datasets.

For the experimental settings, to examine the influence of different training sizes, we test the compared algorithms on four different training set sizes for each of the four UCI datasets. For each given training set size, we conduct 20 random trials in which a labeled set is randomly sampled

¹ www.ics.uci.edu/~mlearn/MLRepository.html
Table 1: List of UCI machine learning datasets.

Dataset    | #Instances | #Features | #Classes
Heart      | 270        | 13        | 2
Ionosphere | 351        | 34        | 2
Sonar      | 208        | 60        | 2
Wine       | 178        | 13        | 3
from the whole dataset and all classes must be present in
the sampled labeled set.
The remaining data examples of the dataset are then used as the testing (unlabeled) data. To
train a classifier, we employ the standard KLR model for
classification. We choose the bounds on the regularization
parameters via cross validation for all compared kernels to
avoid an unfair comparison. For multi-class classification,
we perform one-against-all binary training and testing and
then pick the class with the maximum class probability.
6.2 Semi-Supervised Kernel Learning
In this part, we evaluate the performance of our spectral
kernel learning algorithm for learning semi-supervised kernels
. We implemented our algorithm by a standard Matlab
Quadratic Programming solver (quadprog). The dimension-cutoff parameter d in our algorithm is simply fixed to 20 without
further optimizing. Note that one can easily determine
an appropriate value of d by examining the range of the
cumulative eigen energy score in order to reduce the computational cost for large-scale problems. The decay factor
C is important for our spectral kernel learning algorithm.
As we have shown examples before, C must be a positive
real value greater than 1. Typically we favor a larger decay
factor to achieve better performance. But it must not be
set too large since the too large decay factor may result in
the overly stringent constraints in the optimization which
gives no solutions. In our experiments, C is simply fixed to
constant values (greater than 1) for the engaged datasets.
We compare our SKL algorithm with the state-of-the-art semi-supervised kernel learning method by graph Laplacians [32], which is related to a quadratically constrained quadratic program (QCQP). More specifically, we have implemented two graph Laplacian based
semi-supervised kernels by order constraints [32]. One is the
order-constrained graph kernel (denoted as "Order") and
the other is the improved order-constrained graph kernel
(denoted as "Imp-Order"), which removes the constraints
from constant eigenvectors. To carry out a fair comparison, we use the top 20 smallest eigenvalues and eigenvectors from the graph Laplacian, which is constructed with 10-NN unweighted graphs. We also include three standard kernels for
comparisons.
Table 2 shows the experimental results of the compared
kernels (3 standard and 5 semi-supervised kernels) based on
KLR classifiers on four UCI datasets with different sizes of
labeled data. Each cell in the table has two rows: the upper
row shows the average testing set accuracies with standard
errors; and the lower row gives the average run time in seconds
for learning the semi-supervised kernels on a 3GHz
desktop computer. We conducted a paired t-test at significance
level of 0.05 to assess the statistical significance of the
test set accuracy results. From the experimental results,
we found that the two order-constrained based graph kernels
perform well in the Ionosphere and Wine datasets, but
they do not achieve important improvements on the Heart
and Sonar datasets. Among all the compared kernels, the
semi-supervised kernels by our spectral kernel learning algorithms
achieve the best performances. The semi-supervised
kernel initialized with an RBF kernel outperforms other kernels
in most cases. For example, in Ionosphere dataset, an
RBF kernel with 10 initial training examples only achieves
73.56% test set accuracy, and the SKL algorithm can boost
the accuracy significantly to 83.36%. Finally, looking into
the time performance, the average run time of our algorithm
is less than 10% of that of the previous QCQP algorithms.
6.3 Unified Kernel Logistic Regression
In this part, we evaluate the performance of our proposed
paradigm of unified kernel logistic regression (UKLR). As
a comparison, we implement two traditional classification
schemes: one is traditional KLR classification scheme that
is trained on randomly sampled labeled data, denoted as
"KLR+Rand." The other is the active KLR classification
scheme that actively selects the most informative examples
for labeling, denoted as "KLR+Active." The active learning
strategy is based on a simple maximum entropy criteria
given in the pervious section. The UKLR scheme is implemented
based on the algorithm in Figure 6.
For active learning evaluation, we choose a batch of 10
most informative unlabeled examples for labeling in each
trial of evaluations. Table 3 summarizes the experimental results of the average test set accuracy performance on the four UCI datasets. From the experimental results, we can observe that the active learning classification schemes outperform the randomly sampled classification schemes in most cases. This shows that the suggested simple active learning strategy is effective. Further, among all compared schemes, the suggested UKLR solution significantly outperforms the other classification approaches in most cases. These results show that the unified scheme is effective and promising for integrating traditional learning methods in a unified solution.
6.4 Discussions
Although the experimental results have shown that our
scheme is promising, some open issues in our current solution
need to be further explored in future work. One problem is to investigate more effective active learning methods for selecting the most informative examples for labeling. One solution to
this issue is to employ the batch mode active learning methods
that can be more efficient for large-scale classification
tasks [11, 23, 24]. Moreover, we will study more effective kernel
learning algorithms without the assumption of spectral
kernels. Further, we may examine the theoretical analysis
of generalization performance of our method [27]. Finally,
we may combine some kernel machine speedup techniques to
deploy our scheme efficiently for large-scale applications [26].
CONCLUSION
This paper presented a novel general framework of learning
the Unified Kernel Machines (UKM) for classification.
Different from traditional classification schemes, our UKM
framework integrates supervised learning, semi-supervised
learning, unsupervised kernel design and active learning in
a unified solution, making it more effective for classification
tasks. For the proposed framework, we focus our attention
on tackling a core problem of learning semi-supervised kernels
from labeled and unlabeled data. We proposed a Spectral Kernel Learning (SKL) algorithm, which is more effective and efficient for learning kernels from labeled and unlabeled data. Under the framework, we developed a paradigm of unified kernel machines based on Kernel Logistic Regression, i.e., Unified Kernel Logistic Regression (UKLR). Empirical results demonstrated that our proposed solution is more effective than the traditional classification approaches.

Table 2: Classification performance of different kernels using KLR classifiers on four datasets. The mean accuracies with standard errors are shown; 3 standard kernels and 5 semi-supervised kernels are compared. For the semi-supervised kernels, the average time (in seconds) used in learning the kernel is given in parentheses ("Order" and "Imp-Order" kernels are solved by the SeDuMi/YALMIP package; "SKL" kernels are solved directly by the Matlab quadprog function).

Heart
Train Size | Linear | Quadratic | RBF | Order | Imp-Order | SKL(Linear) | SKL(Quad) | SKL(RBF)
10 | 67.19±1.94 | 71.90±1.23 | 70.04±1.61 | 63.60±1.94 (0.67) | 63.60±1.94 (0.81) | 70.58±1.63 (0.07) | 72.33±1.60 (0.06) | 73.37±1.50 (0.06)
20 | 67.40±1.87 | 70.36±1.51 | 72.64±1.37 | 65.88±1.69 (0.71) | 65.88±1.69 (0.81) | 76.26±1.29 (0.06) | 75.36±1.30 (0.06) | 76.30±1.33 (0.06)
30 | 75.42±0.88 | 70.71±0.83 | 74.40±0.70 | 71.73±1.14 (0.95) | 71.73±1.14 (0.97) | 78.42±0.59 (0.06) | 78.65±0.52 (0.06) | 79.23±0.58 (0.06)
40 | 78.24±0.89 | 71.28±1.10 | 78.48±0.77 | 75.48±0.69 (1.35) | 75.48±0.69 (1.34) | 80.61±0.45 (0.07) | 80.26±0.45 (0.07) | 80.98±0.51 (0.07)

Ionosphere
Train Size | Linear | Quadratic | RBF | Order | Imp-Order | SKL(Linear) | SKL(Quad) | SKL(RBF)
10 | 73.71±1.27 | 71.30±1.70 | 73.56±1.91 | 71.86±2.79 (0.90) | 71.86±2.79 (0.87) | 75.53±1.75 (0.05) | 71.22±1.82 (0.05) | 83.36±1.31 (0.05)
20 | 75.62±1.24 | 76.00±1.58 | 81.71±1.74 | 83.04±2.10 (0.87) | 83.04±2.10 (0.79) | 78.78±1.60 (0.05) | 80.30±1.77 (0.06) | 88.55±1.32 (0.05)
30 | 76.59±0.82 | 79.10±1.46 | 86.21±0.84 | 87.20±1.16 (0.93) | 87.20±1.16 (0.97) | 82.18±0.56 (0.05) | 83.08±1.36 (0.05) | 90.39±0.84 (0.05)
40 | 77.97±0.79 | 82.93±1.33 | 89.39±0.65 | 90.56±0.64 (1.34) | 90.56±0.64 (1.38) | 83.26±0.53 (0.05) | 87.03±1.02 (0.04) | 92.14±0.46 (0.04)

Sonar
Train Size | Linear | Quadratic | RBF | Order | Imp-Order | SKL(Linear) | SKL(Quad) | SKL(RBF)
10 | 63.01±1.47 | 62.85±1.53 | 60.76±1.80 | 59.67±0.89 (0.63) | 59.67±0.89 (0.63) | 64.27±1.91 (0.08) | 64.37±1.64 (0.07) | 65.30±1.78 (0.07)
20 | 68.09±1.11 | 69.55±1.22 | 67.63±1.15 | 64.68±1.57 (0.68) | 64.68±1.57 (0.82) | 70.61±1.14 (0.07) | 69.79±1.30 (0.07) | 71.76±1.07 (0.08)
30 | 66.40±1.06 | 69.80±0.93 | 68.23±1.48 | 66.54±0.79 (0.88) | 66.54±0.79 (1.02) | 70.20±1.48 (0.07) | 68.48±1.59 (0.07) | 71.69±0.87 (0.07)
40 | 64.94±0.74 | 71.37±0.52 | 71.61±0.89 | 69.82±0.82 (1.14) | 69.82±0.82 (1.20) | 72.35±1.06 (0.07) | 71.28±0.96 (0.08) | 72.89±0.68 (0.07)

Wine
Train Size | Linear | Quadratic | RBF | Order | Imp-Order | SKL(Linear) | SKL(Quad) | SKL(RBF)
10 | 82.26±2.18 | 85.89±1.73 | 87.80±1.63 | 86.99±1.98 (1.02) | 86.99±1.45 (0.86) | 83.63±2.62 (0.09) | 83.21±2.36 (0.09) | 90.54±1.08 (0.09)
20 | 86.39±1.39 | 86.96±1.30 | 93.77±0.99 | 92.31±1.39 (0.92) | 92.31±1.39 (0.91) | 89.53±2.32 (0.09) | 92.56±0.56 (0.09) | 94.94±0.50 (0.09)
30 | 92.50±0.76 | 87.43±0.63 | 94.63±0.50 | 92.97±0.54 (1.28) | 92.97±0.54 (1.27) | 93.99±1.09 (0.09) | 94.29±0.53 (0.10) | 96.25±0.30 (0.09)
40 | 94.96±0.65 | 88.80±0.93 | 96.38±0.35 | 95.62±0.37 (1.41) | 95.62±0.37 (1.39) | 95.80±0.47 (0.08) | 95.36±0.46 (0.08) | 96.81±0.28 (0.10)
ACKNOWLEDGMENTS
The work described in this paper was fully supported by
two grants, one from the Shun Hing Institute of Advanced
Engineering, and the other from the Research Grants Council
of the Hong Kong Special Administrative Region, China
(Project No. CUHK4205/04E).
REFERENCES
[1] M. Belkin, I. Matveeva, and P. Niyogi. Regularization
and semi-supervised learning on large graphs. In
COLT, 2004.
[2] M. Belkin and P. Niyogi. Semi-supervised learning on
Riemannian manifolds. Machine Learning, 2004.
[3] E. Chang, S. C. Hoi, X. Wang, W.-Y. Ma, and
M. Lyu. A unified machine learning framework for
large-scale personalized information management. In
The 5th Emerging Information Technology
Conference, NTU Taipei, 2005.
[4] E. Chang and M. Lyu. Unified learning paradigm for
web-scale mining. In Snowbird Machine Learning
Workshop, 2006.
[5] O. Chapelle, A. Zien, and B. Scholkopf.
Semi-supervised learning. MIT Press, 2006.
[6] F. R. K. Chung. Spectral Graph Theory. American
Mathematical Society, 1997.
[7] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active
learning with statistical models. In NIPS, volume 7,
pages 705-712, 1995.
[8] N. Cristianini, J. Shawe-Taylor, and A. Elisseeff. On
kernel-target alignment. JMLR, 2002.
Table 3: Classification performance of different classification schemes on four UCI datasets. The mean
accuracies and standard errors are shown in the table. "KLR" represents the initial classifier with the initial
train size; the other three methods are trained with 10 additional random/active examples.
                   Linear Kernel                                     RBF Kernel
Dataset     Train  KLR         KLR+Rand    KLR+Active  UKLR          KLR         KLR+Rand    KLR+Active  UKLR
Heart       10     67.19±1.94  68.22±2.16  69.22±1.71  77.24±0.74    70.04±1.61  72.24±1.23  75.36±0.60  78.44±0.88
Heart       20     67.40±1.87  73.79±1.29  73.77±1.27  79.27±1.00    72.64±1.37  75.10±0.74  76.23±0.81  79.88±0.90
Heart       30     75.42±0.88  77.70±0.92  78.65±0.62  81.13±0.42    74.40±0.70  76.43±0.68  76.61±0.61  81.48±0.41
Heart       40     78.24±0.89  79.30±0.75  80.18±0.79  82.55±0.28    78.48±0.77  78.50±0.53  79.95±0.62  82.66±0.36
Ionosphere  10     73.71±1.27  74.89±0.95  75.91±0.96  77.31±1.23    73.56±1.91  82.57±1.78  82.76±1.37  90.48±0.83
Ionosphere  20     75.62±1.24  77.09±0.67  77.51±0.66  81.42±1.10    81.71±1.74  85.95±1.30  88.22±0.78  91.28±0.94
Ionosphere  30     76.59±0.82  78.41±0.79  77.91±0.77  84.49±0.37    86.21±0.84  89.04±0.66  90.32±0.56  92.35±0.59
Ionosphere  40     77.97±0.79  79.05±0.49  80.30±0.79  84.49±0.40    89.39±0.65  90.55±0.59  91.83±0.49  93.89±0.28
Sonar       10     61.19±1.56  63.72±1.65  65.51±1.55  66.12±1.94    57.40±1.48  60.19±1.32  59.49±1.46  67.13±1.58
Sonar       20     67.31±1.07  68.85±0.84  69.38±1.05  71.60±0.91    62.93±1.36  64.72±1.24  64.52±1.07  72.30±0.98
Sonar       30     66.10±1.08  67.59±1.14  69.79±0.86  71.40±0.80    63.03±1.32  63.72±1.51  66.67±1.53  72.26±0.98
Sonar       40     66.34±0.82  68.16±0.81  70.19±0.90  73.04±0.69    66.70±1.25  68.70±1.19  67.56±0.90  73.16±0.88
Wine        10     82.26±2.18  87.31±1.01  89.05±1.07  87.31±1.03    87.80±1.63  92.75±1.27  94.49±0.54  94.87±0.49
Wine        20     86.39±1.39  93.99±0.40  93.82±0.71  94.43±0.54    93.77±0.99  95.57±0.38  97.13±0.18  96.76±0.26
Wine        30     92.50±0.76  95.25±0.47  96.96±0.40  96.12±0.47    94.63±0.50  96.27±0.35  97.17±0.38  97.21±0.26
Wine        40     94.96±0.65  96.21±0.63  97.54±0.37  97.70±0.34    96.38±0.35  96.33±0.45  97.97±0.23  98.12±0.21
[9] S. Fine, R. Gilad-Bachrach, and E. Shamir. Query by
committee, linear separation and random walks.
Theor. Comput. Sci., 284(1):25-51, 2002.
[10] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby.
Selective sampling using the query by committee
algorithm. Mach. Learn., 28(2-3):133-168, 1997.
[11] S. C. Hoi, R. Jin, and M. R. Lyu. Large-scale text
categorization by batch mode active learning. In
WWW2006, Edinburgh, 2006.
[12] J. A. K. Suykens, G. Horvath, S. Basu, C. Micchelli, and
J. Vandewalle, editors. Advances in Learning Theory:
Methods, Models and Applications. NATO Science
Series: Computer & Systems Sciences, 2003.
[13] R. Kondor and J. Lafferty. Diffusion kernels on graphs
and other discrete structures. 2002.
[14] G. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui,
and M. Jordan. Learning the kernel matrix with
semi-definite programming. JMLR, 5:27-72, 2004.
[15] G. Lanckriet, L. Ghaoui, C. Bhattacharyya, and
M. Jordan. Minimax probability machine. In Advances
in Neural Information Processing Systems 14, 2002.
[16] R. Liere and P. Tadepalli. Active learning with
committees for text categorization. In Proceedings 14th
Conference of the American Association for Artificial
Intelligence (AAAI), pages 591-596, MIT Press, 1997.
[17] R. Meir and G. Ratsch. An introduction to boosting
and leveraging. In In Advanced Lectures on Machine
Learning (LNAI2600), 2003.
[18] A. Ng, M. Jordan, and Y. Weiss. On spectral
clustering: Analysis and an algorithm. In In Advances
in Neural Information Processing Systems 14, 2001.
[19] N. Roy and A. McCallum. Toward optimal active
learning through sampling estimation of error
reduction. In 18th ICML, pages 441-448, 2001.
[20] B. Scholkopf, A. Smola, and K.-R. Muller. Nonlinear
component analysis as a kernel eigenvalue problem.
Neural Computation, 10:1299-1319, 1998.
[21] A. Smola and R. Kondor. Kernels and regularization
on graphs. In Intl. Conf. on Learning Theory, 2003.
[22] M. Szummer and T. Jaakkola. Partially labeled
classification with Markov random walks. In Advances
in Neural Information Processing Systems, 2001.
[23] S. Tong and E. Chang. Support vector machine active
learning for image retrieval. In Proc ACM Multimedia
Conference, pages 107-118, New York, 2001.
[24] S. Tong and D. Koller. Support vector machine active
learning with applications to text classification. In
Proc. 17th ICML, pages 999-1006, 2000.
[25] V. N. Vapnik. Statistical Learning Theory. John Wiley
& Sons, 1998.
[26] G. Wu, Z. Zhang, and E. Y. Chang. Kronecker
factorization for speeding up kernel machines. In
SIAM Int. Conference on Data Mining (SDM), 2005.
[27] T. Zhang and R. K. Ando. Analysis of spectral kernel
design based semi-supervised learning. In NIPS, 2005.
[28] D. Zhou, O. Bousquet, T. Lal, J. Weston, and
B. Scholkopf. Learning with local and global
consistency. In NIPS'16, 2005.
[29] J. Zhu and T. Hastie. Kernel logistic regression and
the import vector machine. In NIPS 14, pages
1081-1088, 2001.
[30] X. Zhu. Semi-supervised learning literature survey.
Technical report, Computer Sciences TR 1530,
University of Wisconsin - Madison, 2005.
[31] X. Zhu, Z. Ghahramani, and J. Lafferty.
Semi-supervised learning using Gaussian fields and
harmonic functions. In Proc. ICML'2003, 2003.
[32] X. Zhu, J. Kandola, Z. Ghahramani, and J. Lafferty.
Nonparametric transforms of graph kernels for
semi-supervised learning. In NIPS2005, 2005.
| Active Learning;data mining;classification;unified kernel machine(UKM);Kernel Machines;spectral kernel learning (SKL);Kernel Logistic Regressions;Supervised Learning;supervised learning;Semi-Supervised Learning;active learning;Classification;Unsuper-vised Kernel Design;framework;Spectral Kernel Learning;semi-supervised kernel learning |
127 | Leo: A System for Cost Effective 3D Shaded Graphics | A physically compact, low cost, high performance 3D graphics accelerator is presented. It supports shaded rendering of triangles and antialiased lines into a double-buffered 24-bit true color frame buffer with a 24-bit Z-buffer. Nearly the only chips used besides standard memory parts are 11 ASICs (of four types). Special geometry data reformatting hardware on one ASIC greatly speeds and simplifies the data input pipeline. Floating-point performance is enhanced by another ASIC: a custom graphics microprocessor, with specialized graphics instructions and features. Screen primitive rasterization is carried out in parallel by five drawing ASICs, employing a new partitioning of the back-end rendering task. For typical rendering cases, the only system performance bottleneck is that intrinsically imposed by VRAM. | INTRODUCTION
To expand the role of 3D graphics in the mainstream computer industry
, cost effective, physically small, usable performance 3D
shaded graphics architectures must be developed. For such systems,
new features and sheer performance at any price can no longer be
the driving force behind the architecture; instead, the focus must be
on affordable desktop systems.
The historical approach to achieving low cost in 3D graphics systems
has been to compromise both performance and image quality.
But now, falling memory component prices are bringing nearly ideal
frame buffers into the price range of the volume market: double
buffered 24-bit color with a 24-bit Z-buffer. The challenge is to drive
these memory chips at their maximum rate with a minimum of supporting
rendering chips, keeping the total system cost and physical
size to an absolute minimum. To achieve this, graphics architectures
must be repartitioned to reduce chip count and internal bus sizes,
while still supporting existing 2D and 3D functionality.
This paper describes a new 3D graphics system, Leo, designed to
these philosophies. For typical cases, Leo's only performance limit
is that intrinsically imposed by VRAM. This was achieved by a
combination of new architectural techniques and advances in VLSI
technology. The result is a system without performance or image
quality compromises, at an affordable cost and small physical size.
The Leo board set is about the size of one and a half paperback novels
; the complete workstation is slightly larger than two copies of
Foley and Van Dam [7]. Leo supports both the traditional requirements
of the 2D X window system and the needs of 3D rendering:
shaded triangles, antialiased vectors, etc.
ARCHITECTURAL ALTERNATIVES
A generic pipeline for 3D shaded graphics is shown in Figure 1. ([7]
Chapter 18 is a good overview of 3D graphics hardware pipeline issues
.) This pipeline is truly generic, as at the top level nearly every
commercial 3D graphics accelerator fits this abstraction. Where individual
systems differ is in the partitioning of this rendering pipeline
, especially in how they employ parallelism. Two major areas
have been subject to separate optimization: the floating-point intensive
initial stages of processing up to, and many times including,
primitive set-up; and the drawing-intensive operation of generating
pixels within a primitive and Z-buffering them into the frame buffer.
For low end accelerators, only portions of the pixel drawing stages
of the pipeline are in hardware; the floating-point intensive parts of
the pipe are processed by the host in software. As general purpose
processors increase in floating-point power, such systems are starting
to support interesting rendering rates, while minimizing cost
[8]. But, beyond some limit, support of higher performance requires
dedicated hardware for the entire pipeline.
There are several choices available for partitioning the floating-point
intensive stages. Historically, older systems performed these
tasks in a serial fashion [2]. In time though, breaking the pipe into
more pieces for more parallelism (and thus performance) meant
that each section was devoting more and more of its time to I/O
overhead rather than to real work. Also, computational variance
meant that many portions of the pipe would commonly be idle
while others were overloaded. This led to the data parallel designs
of most recent 3D graphics architectures [12].
Here the concept is that multiple parallel computation units can
each process the entire floating-point intensive task, working in parallel
on different parts of the scene to be rendered. This allows each
pipe to be given a large task to chew on, minimizing handshake
overhead. But now there is a different load balancing problem. If
one pipe has an extra large task, the other parallel pipes may go idle
waiting for their slowest peer, if the common requirement of in-order
execution of tasks is to be maintained. Minor load imbalances
can be averaged out by adding FIFO buffers to the inputs and outputs
of the parallel pipes. Limiting the maximum size of task given
to any one pipe also limits the maximum imbalance, at the expense
of further fragmenting the tasks and inducing additional overhead.
But the most severe performance bottleneck lies in the pixel drawing
back-end. The most fundamental constraint on 3D computer
graphics architecture over the last ten years has been the memory
chips that comprise the frame buffer. Several research systems have
attempted to avoid this bottleneck by various techniques [10][4][8],
but all commercial workstation systems use conventional Z-buffer
rendering algorithms into standard VRAMs or DRAMs. How this
RAM is organized is an important defining feature of any high performance
rendering system.
LEO OVERVIEW
Figure 2 is a diagram of the Leo system. This figure is not just a
block diagram; it is also a chip level diagram, as every chip in the
system is shown in this diagram. All input data and window system
interactions enter through the LeoCommand chip. Geometry data is
reformatted in this chip before being distributed to the array of LeoFloat
chips below. The LeoFloat chips are microcoded specialized
DSP-like processors that tackle the floating-point intensive stages
of the rendering pipeline. The LeoDraw chips handle all screen
space pixel rendering and are directly connected to the frame buffer
RAM chips. LeoCross handles the back-end color look-up tables,
double buffering, and video timing, passing the final digital pixel
values to the RAMDAC.
Figure 2: The Leo Block Diagram. Every chip in the system is represented in this diagram. (The diagram
shows the SBus data input entering LeoCommand; the CF-bus to the four LeoFloat chips, each with four
SRAM microcode chips; the CD-bus to the five LeoDraw chips, each with its VRAM and DRAM banks; and
the CX-bus, a subset of the CD-bus, to LeoCross, which feeds the RAMDAC and video output. A clock
generator and boot PROM complete the system.)
Figure 1: Generic 3D Graphics Pipeline. (Floating-point intensive functions: data input, transformation, clip
test, face determination, lighting, clip (if needed), perspective divide, screen space conversion, set up for
incremental render. Drawing intensive functions: edge-walk, span-interpolate, Z-buffered blend, VRAM frame
buffer, double buffered MUX, output lookup table, digital to analog conversion.)
The development of the Leo architecture started with the constraints
imposed by contemporary VRAM technology. As will be
derived in the LeoDraw section below, these constraints led to the
partitioning of the VRAM controlling LeoDraw chips, and set a
maximum back-end rendering rate. This rate in turn set the performance
goal for LeoFloat, as well as the data input bandwidth and
processing rate for LeoCommand. After the initial partitioning of
the rendering pipeline into these chips, each chip was subjected to
additional optimization. Throughput bottlenecks in input geometry
format conversion, floating-point processing, and pixel rendering
were identified and overcome by adding reinforcing hardware to
the appropriate chips.
Leo's floating-point intensive section uses data parallel partitioning
. LeoCommand helps minimize load balancing problems by
breaking down rendering tasks to the smallest isolated primitives:
individual triangles, vectors, dots, portions of pixel rasters, rendering
attributes, etc., at the cost of precluding optimizations for
shared data in triangle strips and polylines. This was considered
acceptable due to the very low average strip length empirically
observed in real applications. The overhead of splitting geometric
data into isolated primitives is minimized by the use of dedicated
hardware for this task. Another benefit of converting all rendering
operations to isolated primitives is that down-stream processing of
primitives is considerably simplified by only needing to focus on
the isolated case.
INPUT PROCESSING: LEO COMMAND
Feeding the pipe
Leo supports input of geometry data both as programmed I/O and
through DMA. The host CPU can directly store up to 32 data words
in an internal LeoCommand buffer without expensive read back
testing of input status every few words. This is useful on hosts that
do not support DMA, or when the host must perform format conversions
beyond those supported in hardware. In DMA mode, LeoCommand
employs efficient block transfer protocols on the system
bus to transfer data from system memory to its input buffer, allowing
much higher bandwidth than simple programmed I/O. Virtual
memory pointers to application's geometry arrays are passed directly
to LeoCommand, which converts them to physical memory
addresses without operating system intervention (except when a
page is marked as currently non-resident). This frees the host CPU
to perform other computations during the data transfer. Thus the
DMA can be efficient even for pure immediate-mode applications,
where the geometry is being created on the fly.
Problem: Tower of Babel of input formats
One of the problems modern display systems face is the explosion
of different input formats for similar drawing functions that need to
be supported. Providing optimized microcode for each format
rapidly becomes unwieldy. The host CPU could be used to pretranslate
the primitive formats, but at high speeds this conversion operation
can itself become a system bottleneck. Because DMA completely
bypasses the host CPU, LeoCommand includes a programmable
format conversion unit in the geometry data pipeline. This
reformatter is considerably less complex than a general purpose
CPU, but can handle the most commonly used input formats, and at
very high speeds.
The geometry reformatting subsystem allows several orthogonal
operations to be applied to input data. This geometric input data is
abstracted as a stream of vertex packets. Each vertex packet may
contain any combination of vertex position, vertex normal, vertex
color, facet normal, facet color, texture map coordinates, pick IDs,
headers, and other information. One conversion supports arbitrary
re-ordering of data within a vertex, allowing a standardized element
order after reformatting. Another operation supports the conversion
of multiple numeric formats to 32-bit IEEE floating-point. The
source data can be 8-bit or 16-bit fixed-point, or 32-bit or 64-bit
IEEE floating-point. Additional miscellaneous reformatting allows
the stripping of headers and other fields, the addition of an internally
generated sequential pick ID, and insertion of constants. The
final reformatting stage re-packages vertex packets into complete
isolated geometry primitives (points, lines, triangles). Chaining bits
in vertex headers delineate which vertices form primitives.
Like some other systems, Leo supports a generalized form of triangle
strip (see Figure 3), where vertex header bits within a strip specify
how the incoming vertex should be combined with previous vertices
to form the next triangle. A stack of the last three vertices used
to form a triangle is kept. The three vertices are labeled oldest, middle,
and newest. An incoming vertex of type replace_oldest causes
the oldest vertex to be replaced by the middle, the middle to be replaced
by the newest, and the incoming vertex becomes the newest.
This corresponds to a PHIGS PLUS triangle strip (sometimes called
a "zig-zag" strip). The replacement type replace
_middle leaves the
oldest vertex unchanged, replaces the middle vertex by the newest,
and the incoming vertex becomes the newest. This corresponds to a
triangle star. The replacement type restart marks the oldest and middle
vertices as invalid, and the incoming vertex becomes the newest.
Generalized triangle strips must always start with this code. A triangle
will be output only when a replacement operation results in three
valid vertices. Restart corresponds to a "move" operation in
polylines, and allows multiple unconnected variable-length triangle
strips to be described by a single data structure passed in by the user,
Figure 3: A Generalized Triangle Strip. (The figure shows a single generalized strip of 33 vertices with the
replacement code of each vertex, passing through triangle strip, triangle star, independent triangle, independent
quad, and mixed strip sections. Vertex codes: RO = Replace Oldest, RM = Replace Middle.)
reducing the overhead. The generalized triangle strip's ability to effectively
change from "strip" to "star" mode in the middle of a strip
allows more complex geometry to be represented compactly, and requires
less input data bandwidth. The restart capability allows several
pieces of disconnected geometry to be passed in one DMA operation
. Figure 3 shows a single generalized triangle strip, and the
associated replacement codes. LeoCommand also supports header-less
strips of triangle vertices either as pure strips, pure stars, or pure
independent triangles.
LeoCommand hardware automatically converts generalized triangle
strips into isolated triangles. Triangles are normalized such that
the front face is always defined by a clockwise vertex order after
transformation. To support this, a header bit in each restart defines
the initial face order of each sub-strip, and the vertex order is reversed
after every replace
_oldest. LeoCommand passes each com-pleted
triangle to the next available LeoFloat chip, as indicated by
the input FIFO status that each LeoFloat sends back to LeoCommand
. The order in which triangles have been sent to each
LeoFloat is scoreboarded by LeoCommand, so that processed triangles
are let out of the LeoFloat array in the same order as they entered
. Non-sequential rendering order is also supported, but the
automatic rendering task distribution hardware works so well that
the performance difference is less than 3%. A similar, but less complex
vertex repackaging is supported for polylines and multi-polylines
via a move/draw bit in the vertex packet header.
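The vertex-replacement rules described above can be summarized with a small software sketch (Python, illustrative only; LeoCommand performs the equivalent conversion in hardware, and the face-order bookkeeping shown here is a simplification that assumes each sub-strip starts in clockwise order).

```python
RESTART, REPLACE_OLDEST, REPLACE_MIDDLE = "RST", "RO", "RM"

def strip_to_triangles(vertices):
    """Convert a generalized triangle strip into isolated triangles.

    vertices: list of (code, vertex) pairs, code one of RESTART,
    REPLACE_OLDEST, REPLACE_MIDDLE. Returns (oldest, middle, newest)
    tuples, reversed when needed so the front face stays clockwise.
    """
    oldest = middle = newest = None
    reversed_order = False            # toggled after every replace_oldest
    triangles = []
    for code, v in vertices:
        if code == RESTART:
            oldest, middle, newest = None, None, v
            reversed_order = False
        elif code == REPLACE_OLDEST:
            oldest, middle, newest = middle, newest, v
            reversed_order = not reversed_order
        else:                         # REPLACE_MIDDLE: oldest unchanged
            middle, newest = newest, v
        if oldest is not None and middle is not None:
            tri = (oldest, middle, newest)
            triangles.append(tri[::-1] if reversed_order else tri)
    return triangles

# A short "zig-zag" strip: vertices 1..4 yield two triangles
strip = [(RESTART, 1), (REPLACE_OLDEST, 2),
         (REPLACE_OLDEST, 3), (REPLACE_OLDEST, 4)]
print(strip_to_triangles(strip))
```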
To save IC pins and PC board complexity, the internal Leo data busses
connecting LeoCommand, LeoFloat, and LeoDraw are 16 bits in
size. When colors, normals, and texture map coefficients are being
transmitted on the CF-bus between LeoCommand and the LeoFloats
, these components are (optionally) compressed from 32-bit
IEEE floating-point into 16-bit fixed point fractions by LeoCommand
, and then automatically reconverted back to 32-bit IEEE
floating-point values by LeoFloat. This quantization does not affect
quality. Color components will eventually end up as 8-bit values in
the frame buffer. For normals, 16-bit (signed) accuracy represents a
resolution of approximately plus or minus an inch at one mile. This
optimization reduces the required data transfer bandwidth by 25%.
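A sketch of this kind of compression is shown below (Python); the exact on-the-wire encoding used by LeoCommand is not documented here, so a signed 1.15 fixed-point fraction over the range [-1, 1] is assumed purely for illustration.

```python
import numpy as np

def to_fixed16(x):
    """Quantize values in [-1, 1] to signed 16-bit fixed point (assumed
    1.15 two's-complement fraction; the real LeoCommand format may differ)."""
    x = np.asarray(x, dtype=np.float32)
    return np.clip(np.round(x * 32767), -32768, 32767).astype(np.int16)

def from_fixed16(q):
    """Expand 16-bit fractions back to 32-bit IEEE floating point."""
    return q.astype(np.float32) / 32767.0

normal = np.array([0.267261, 0.534522, 0.801784], dtype=np.float32)  # unit normal
q = to_fixed16(normal)
print(q, from_fixed16(q))   # round-trip error ~3e-5, invisible at 8 bits/channel
```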
FLOATING-POINT PROCESSING: LEO FLOAT
After canonical format conversion, the next stages of processing triangles
in a display pipeline are: transformation, clip test, face determination
, lighting, clipping (if required), screen space conversion,
and set-up. These operations are complex enough to require the use
of a general purpose processor.
Use of commercially available DSP (Digital Signal Processing)
chips for this work has two major drawbacks. First, most such processors
require a considerable number of surrounding glue chips,
especially when they are deployed as multi-processors. These glue
chips can easily quadruple the board area dedicated to the DSP
chip, as well as adversely affecting power, heat, cost, and reliability.
Second, few of these chips have been optimized for 3D graphics.
A better solution might be to augment the DSP with a special ASIC
that would replace all of these glue chips. Given the expense of developing
an ASIC, we decided to merge that ASIC with a custom
DSP core optimized for graphics.
The resulting chip was LeoFloat. LeoFloat combines a 32-bit microcodable
floating-point core with concurrent input and output
packet communication subsystems (see Figure 4), similar to the approach
of [3]. The only support chips required are four SRAM chips
for external microcode store. A number of specialized graphics instructions
and features make LeoFloat different from existing DSP
processors. Each individual feature only makes a modest incremental
contribution to performance, and indeed many have appeared in
other designs. What is novel about LeoFloat is the combination of
features, whose cumulative effect leads to impressive overall system
performance. The following sections describe some of the
more important special graphics instructions and features.
Double buffered asynchronous I/O register files.
All input and
output commands are packaged up by separate I/O packet hardware.
Variable length packets of up to 32 32-bit words are automatically
written into (or out of) on-chip double-buffered register files (the I
and O registers). These are mapped directly into microcode register
space. Special instructions allow complete packets to be requested,
relinquished, or queued for transmission in one instruction cycle.
Enough internal registers.
Most commercial DSP chips support a
very small number of internal fast registers, certainly much smaller
than the data needed by the inner loops of most 3D pipeline algorithms
. They attempt to make up for this with on-chip SRAM or
data caches, but typically SRAMs are not multi-ported and the
caches not user-schedulable. We cheated with LeoFloat. We first
wrote the code for the largest important inner loop (triangles),
counted how many registers were needed (288), and built that many
into the chip.
Parallel internal function units
. The floating-point core functions
(32-bit IEEE format) include multiply, ALU, reciprocal, and integer
operations, all of which can often be executed in parallel. It is
particularly important that the floating-point reciprocal operation
not tie up the multiply and add units, so that perspective or slope
calculations can proceed in parallel with the rest of geometric processing
. Less frequently used reciprocal square root hardware is
shared with the integer function unit.
Put all non-critical algorithms on the host.
We avoided the necessity
of building a high level language compiler (and support instructions
) for LeoFloat by moving any code not worth hand coding in
microcode to the host processor. The result is a small, clean kernel
of graphics routines in microcode. (A fairly powerful macro-assembler
with a `C'-like syntax was built to support the hand coding.)
Software pipeline scheduling.
One of the most complex parts of
modern CPUs to design and debug is their scoreboard section,
which schedules the execution of instructions across multiple steps
in time and function units, presenting the programmer with the
Figure 4: LeoFloat arithmetic function units, registers and data paths. (The datapath connects the double-buffered
I and O packet register files and the P and R register files to the FMULT, FALU, IALU, and 1/X function
units, with input from and output to off-chip.)
illusion that individual instructions are executed in one shot. LeoFloat
avoided all this hardware by using more direct control fields,
like horizontal microprogrammable machines, and leaving it to the
assembler (and occasionally the programmer) to skew one logical
instruction across several physical instructions.
Special clip condition codes & clip branch.
For clip testing we
employ a modified Sutherland-Hodgman algorithm, which first
computes a vector of clip condition bits. LeoFloat has a clip test instruction
that computes these bits two at a time, shifting them into
a special clip-bits register. After the bits have been computed, special
branch instructions decode these bits into the appropriate case:
clip rejected, clip accepted, single edge clip (six cases), or needs
general clipping. There are separate branch instructions for triangles
and vectors. (A similar approach was taken in [9].) The branch
instructions allow multiple other conditions to be checked at the
same time, including backfacing and model clipping.
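The flavor of the clip test can be sketched as follows (Python, illustrative; the plane set and bit layout are assumptions, and Leo's actual clip-bits register also carries model-clipping and backface information).

```python
# Clip-plane bit positions (illustrative layout only).
LEFT, RIGHT, BOTTOM, TOP, NEAR, FAR = (1 << i for i in range(6))

def outcode(x, y, z, w):
    """Clip condition bits for one homogeneous vertex against -w..+w."""
    code = 0
    if x < -w: code |= LEFT
    if x >  w: code |= RIGHT
    if y < -w: code |= BOTTOM
    if y >  w: code |= TOP
    if z < -w: code |= NEAR
    if z >  w: code |= FAR
    return code

def classify_triangle(v0, v1, v2):
    """Decide the clip case from the three vertex outcodes."""
    c0, c1, c2 = (outcode(*v) for v in (v0, v1, v2))
    if c0 & c1 & c2:            # all outside one plane: trivially rejected
        return "reject"
    if not (c0 | c1 | c2):      # all inside every plane: trivially accepted
        return "accept"
    return "clip"               # at least one edge crosses a plane

print(classify_triangle((0, 0, 0, 1), (0.5, 0.5, 0.5, 1), (2.0, 0, 0, 1)))  # clip
```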
Register Y sort instruction.
The first step of the algorithm we used
for setting up triangles for scan conversion sorts the three triangle
vertices in ascending Y order. On a conventional processor this requires
either moving a lot of data, always referring to vertex data
through indirect pointers, or replicating the set-up code for all six
possible permutations of triangle vertex order. LeoFloat has a special
instruction that takes the results of the last three comparisons and reorders
part of the R register file to place vertices in sorted order.
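Functionally, the instruction performs the equivalent of the following sketch (Python, illustrative only); in LeoFloat the reordering happens on the R register file in a single cycle rather than on in-memory records.

```python
def sort_by_y(v0, v1, v2):
    """Return the three triangle vertices in ascending y order.

    Each vertex is a dict with at least a 'y' entry; the hardware uses
    the results of three prior comparisons to do the same reordering.
    """
    return tuple(sorted((v0, v1, v2), key=lambda v: v["y"]))

tri = ({"y": 7.0}, {"y": 2.5}, {"y": 4.0})
print([v["y"] for v in sort_by_y(*tri)])   # [2.5, 4.0, 7.0]
```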
Miscellaneous.
LeoFloat contains many performance features traditionally
found on DSP chips, including an internal subroutine
stack, block load/store SRAM, and integer functions. Also there is
a "kitchen sink" instruction that initiates multiple housekeeping
functions in one instruction, such as "transmit current output packet
(if not clip pending), request new input packet, extract op-code and
dispatch to next task."
Code results: equivalent to 150 megaflop DSP.
Each 25 MHz
LeoFloat processes the benchmark isolated triangle (including clip test
and set-up) in 379 clocks. (With a few exceptions, microcode
instructions issue at a rate of one per clock tick.) The same graphics
algorithm was tightly coded on several RISC processors and DSP
chips (SPARC, i860, C30, etc.), and typically took on the order of
1100 clocks. Thus the 379 LeoFloat instructions at 25 MHz do the
equivalent work of a traditional DSP chip running at 75 MHz (even
though there are only 54 megaflops of hardware). Of course these
numbers only hold for triangles and vectors, but that's most of what
LeoFloat does. Four LeoFloats assure that floating-point processing
is not the bottleneck for 100-pixel isolated, lighted triangles.
SCREEN SPACE RENDERING: LEO DRAW
VRAM limits
Commercial VRAM chips represent a fundamental constraint on
the possible pixel rendering performance of Leo's class of graphics
accelerator. The goal of the Leo architecture was to ensure to the
greatest extent possible that this was the only performance limit for
typical rendering operations.
The fundamental memory transaction for Z-buffered rendering
algorithms is a conditional read-modify-write cycle. Given an XY
address and a computed RGBZ value, the old Z value at the XY address
is first read, and then if the computed Z is in front of the old
Z, the computed RGBZ value is written into the memory. Such
transactions can be mapped to allowable VRAM control signals in
many different ways: reads and writes may be batched, Z may be
read out through the video port, etc.
VRAM chips constrain system rendering performance in two ways.
First, they impose a minimum cycle time per RAM bank for the Z-buffered
read-modify-write cycle. Figure 5 is a plot of this cycle
time (when in "page" mode) and its changes over a half-decade
period. VRAMs also constrain the ways in which a frame buffer can
be partitioned into independently addressable banks. Throughout
the five year period in Figure 5, three generations of VRAM technology
have been organized as 256K by 4, 8, and 16-bit memories. For
contemporary display resolutions of 1280×1024, the chips comprising
a minimum frame buffer can be organized into no more than
five separately-addressed interleave banks. Combining this information
, a theoretical maximum rendering speed for a primitive can be
computed. The second line in Figure 5 is the corresponding performance
for rendering 100-pixel Z-buffered triangles, including the
overhead for entering page mode, content refresh, and video shift
register transfers (video refresh). Higher rendering rates are only
possible if additional redundant memory chips are added, allowing
for higher interleaving factors, at the price of increased system cost.
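The rough arithmetic behind such a bound can be sketched as follows (Python; the 25% combined overhead for page crossings, refresh, and video shift-register transfers is an assumed figure used only to show the shape of the calculation, not a number taken from the text).

```python
cycle_ns   = 160          # page-mode Z-buffered read-modify-write cycle
interleave = 5            # independently addressed VRAM banks
pixel_rate = interleave * 1e9 / cycle_ns      # ~31.25M pixels/s peak

tri_pixels = 100
raw_rate   = pixel_rate / tri_pixels          # ~312K triangles/s, no overhead

# Page re-entry, DRAM refresh and video transfers eat into the peak;
# ~25% combined overhead is an assumption, not a measured figure.
overhead   = 0.25
print(int(raw_rate * (1 - overhead)))         # on the order of 230K triangles/s
```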
Even supporting five parallel interleaves has a cost: at least 305
memory interface pins (five banks of (24 RGB + 24 Z + 13 address/
control)) are required, more pins than it is currently possible to dedicate
to a memory interface on one chip. Some systems have used
external buffer chips, but on a minimum cost and board area system
, this costs almost as much as additional custom chips. Thus, on
the Leo system we opted for five separate VRAM control chips
(LeoDraws).
Triangle scan conversion
Traditional shaded triangle scan conversion has typically been via
a linear pipeline of edge-walking followed by scan interpolation
[12]. There have been several approaches to achieving higher
throughput in rasterization. [2] employed a single edge-walker, but
parallel scan interpolation. [4][10] employed massively parallel
rasterizers. [6] and other recent machines use moderately parallel
rasterizers, with additional logic to merge the pixel rasterization
streams back together.
In the Leo design we chose to broadcast the identical triangle specification
to five parallel rendering chips, each tasked with rendering
only those pixels visible in the local interleave. Each chip performs
its own complete edge-walk and span interpolation of the triangle,
biased by the chip's local interleave. By paying careful attention to
proper mathematical sampling theory for rasterized pixels, the five
Figure 5: VRAM cycle time and theoretical maximum triangle rendering rate (for five-way interleaved frame
buffers). (The plot covers 1990-94 and the 1 Meg, 2 Meg, and 4 Meg VRAM generations; one curve tracks
the VRAM minimum Z-buffer RGB read-modify-write cycle time on page, between 200 ns and 140 ns (off
page = 1.5x), and the other the corresponding 100-pixel triangle theoretical maximum render rate, between
roughly 200K and 260K per second.)
chips can act in concert to produce the correct combined rasterized
image. Mathematically, each chip thinks it is rasterizing the triangle
into an image memory with valid pixel centers only every five original
pixels horizontally, with each chip starting off biased one more
pixel to the right.
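A sketch of the per-chip pixel selection implied by this interleave is given below (Python, illustrative only; the actual LeoDraw edge-walker applies the bias incrementally while walking the triangle rather than filtering a completed span).

```python
def span_pixels_for_chip(x_left, x_right, y, chip_id, n_chips=5):
    """Pixels of the span [x_left, x_right] on scanline y that belong to
    one drawing chip under a horizontal interleave of n_chips columns."""
    # first x >= x_left with x % n_chips == chip_id
    start = x_left + (chip_id - x_left) % n_chips
    return [(x, y) for x in range(start, x_right + 1, n_chips)]

# Chip 2's share of a 12-pixel-wide span on scanline 40
print(span_pixels_for_chip(10, 21, 40, chip_id=2))
# -> [(12, 40), (17, 40)]  (every fifth pixel, offset by the chip id)
```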
To obtain the speed benefits of parallel chips, most high performance
graphics systems have split the edge-walk and span-interpolate
functions into separate chips. But an examination of the relative
amounts of data flow between rendering pipeline stages shows that
the overall peak data transfer bandwidth demand occurs between
the edge-walk and span-interpolate sections, induced by long thin
triangles, which commonly occur in tessellated geometry. To minimize
pin counts and PC board bus complexity, Leo decided to replicate
the edge-walking function into each of the five span-interpolation
chips.
One potential drawback of this approach is that the edge-walking
section of each LeoDraw chip will have to advance to the next scan
line up to five times more often than a single rasterization chip
would. Thus LeoDraw's edge-walking circuit was designed to operate
in one single pixel cycle time (160 ns read-modify-write VRAM
cycle), so it would never hold back scan conversion. Other usual
pipelining techniques were used, such as loading in and buffering
the next triangle to be drawn in parallel with rasterizing the current
triangle. Window clipping, blending, and other pixel post processing
are handled in later pipelined stages.
Line scan conversion
As with triangles, the mathematics of the line rasterization algorithms
were set up to allow distributed rendering of aliased and
antialiased lines and dots, with each LeoDraw chip handling the
1/5 of the frame buffer pixels that it owns. While the Leo system
uses the X11 semantics of Bresenham lines for window system
operations, these produce unacceptable motion artifacts in 3D
wireframe rendering. Therefore, when rendering 3D lines, Leo
employs a high-accuracy DDA algorithm, using 32 bits internally
for sufficient subpixel precision.
At present there is no agreement in the industry on the definition of a
high quality antialiased line. We chose to use the image quality of
vector strokers of years ago as our quality standard, and we tested different
algorithms with end users, many of whom were still using calligraphic
displays. We found users desired algorithms that displayed
no roping, angle sensitivities, short vector artifacts, or end-point artifacts
. We submitted the resulting antialiased line quality test patterns
as a GPC [11] test image. In achieving the desired image quality level
, we determined several properties that a successful line antialiasing
algorithm must have. First, the lines must have at least three pixels
of width across the minor axis. Two-pixel wide antialiased lines
exhibit serious roping artifacts. Four-pixel wide lines offer no visible
improvement except for lines near 45 degrees. Second, proper endpoint
ramps spread over at least two pixels are necessary both for
seamless line segment joins as well as for isolated line-ends. Third,
proper care must be taken when sampling lines of subpixel length to
maintain proper final intensity. Fourth, intensity or filter adjustments
based on the slope are necessary to avoid artifacts when rotating
wireframe images. To implement all this, we found that we needed at
least four bits of subpixel positional accuracy after cumulative interpolation
error is factored in. That is why we used 32 bits for XY coordinate
accuracy: 12 for pixel location, 4 for subpixel location, and
16 for DDA interpolation error. (The actual error limit is imposed by
the original, user-supplied 32-bit IEEE floating-point data.)
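The coordinate split can be illustrated with a minimal fixed-point DDA sketch (Python; the 12.20 split into integer pixel, subpixel, and guard bits follows the text, while the stepping code and the omission of the antialiasing filter are simplifications of ours).

```python
FRAC_BITS = 4 + 16             # 4 subpixel bits + 16 bits of DDA guard
ONE       = 1 << FRAC_BITS     # 1.0 in 12.20 fixed point

def to_fixed(v):  return int(round(v * ONE))
def to_float(v):  return v / ONE

def x_major_dda(x0, y0, x1, y1):
    """Step one pixel in x per iteration, accumulating y in fixed point."""
    steps = int(round(x1 - x0))
    y     = to_fixed(y0)
    dy    = to_fixed((y1 - y0) / steps) if steps else 0
    pts   = []
    for i in range(steps + 1):
        pts.append((int(round(x0)) + i, to_float(y)))  # y keeps subpixel bits
        y += dy
    return pts

print(x_major_dda(0.0, 0.0, 4.0, 1.0))   # y advances by exactly 0.25 per pixel
```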
Because of the horizontal interleaving and preferred scan direction,
the X-major and Y-major aliased and antialiased line rasterization
algorithms are not symmetric, so separate optimized algorithms
were employed for each.
Antialiased dots
Empirical testing showed that only three bits of subpixel precision
are necessary for accurate rendering of antialiased dots. For ASIC
implementation, this was most easily accomplished using a brute-force
table lookup of one of 64 precomputed 3×3 pixel dot images.
These images are stored in on-chip ROM, and were generated using
a circular symmetric Gaussian filter.
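A sketch of how such a table could be generated is shown below (Python, illustrative; the filter width sigma and the exact subpixel-to-footprint mapping are assumptions of ours, not Leo's actual ROM contents).

```python
import numpy as np

def build_dot_table(sigma=0.5):
    """64 precomputed 3x3 intensity footprints, one per (dx, dy) subpixel
    offset quantized to 3 bits each; sigma is an assumed filter width."""
    table = np.zeros((8, 8, 3, 3), dtype=np.float32)
    for ix in range(8):
        for iy in range(8):
            cx, cy = ix / 8.0, iy / 8.0          # dot centre within the pixel
            for px in range(3):
                for py in range(3):
                    dx = (px - 1) - cx           # pixel centre minus dot centre
                    dy = (py - 1) - cy
                    table[ix, iy, py, px] = np.exp(-(dx*dx + dy*dy) / (2*sigma**2))
    return table

table = build_dot_table()
print(table[4, 4].round(2))    # footprint for a dot centred half a pixel off
```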
Triangle, line, and dot hardware
Implementation of the triangle and antialiased vector rasterization
algorithms require substantial hardware resources. Triangles need
single pixel cycle edge-walking hardware in parallel with RGBZ
span interpolation hardware. To obtain the desired quality of antialiased
vectors, our algorithms require hardware to apply multiple
waveform shaping functions to every generated pixel. As a result,
the total VLSI area needed for antialiased vectors is nearly as large
as for triangles. To keep the chip die size reasonable, we reformulated
both the triangle and antialiased vector algorithms to combine
and reuse the same function units. The only difference is how the
separate sequencers set up the rasterization pipeline.
Per-pixel depth cue
Depth cueing has long been a heavily-used staple of wireframe applications
, but in most modern rendering systems it is an extra time
expense feature, performed on endpoints back in the floating-point
section. We felt that we were architecting Leo not for benchmarks,
but for users, and many wireframe users want to have depth cueing
on all the time. Therefore, we built a parallel hardware depth cue
function unit into each LeoDraw. Each triangle, vector, or dot rendered
by Leo can be optionally depth cued at absolutely no cost in
performance. Another benefit of per-pixel depth cueing is full compliance
with the PHIGS PLUS depth cueing specification. For Leo,
per-pixel depth cueing hardware also simplifies the LeoFloat microcode
, by freeing the LeoFloats from ever having to deal with it.
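The per-pixel operation follows the usual PHIGS PLUS formulation, sketched below (Python, illustrative only; the parameter names follow PHIGS PLUS conventions rather than Leo's internal registers).

```python
def depth_cue(rgb, z, z_front, z_back, s_front=1.0, s_back=0.1,
              cue_rgb=(0.0, 0.0, 0.0)):
    """Blend a pixel colour toward the depth-cue colour.

    z_front/z_back are the front and back reference planes and
    s_front/s_back the scale factors applied at (and beyond) them;
    in between, the scale is interpolated linearly in z and the pixel
    colour is mixed with cue_rgb (black gives classic intensity cueing).
    """
    if z >= z_front:
        s = s_front
    elif z <= z_back:
        s = s_back
    else:
        t = (z - z_back) / (z_front - z_back)
        s = s_back + t * (s_front - s_back)
    return tuple(s * c + (1.0 - s) * d for c, d in zip(rgb, cue_rgb))

print(depth_cue((0.8, 0.6, 0.2), z=0.25, z_front=1.0, z_back=0.0))
```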
Picking support
Interactive graphics requires not only the rapid display of geometric
data, but also interaction with that data: the ability to pick a particular
part or primitive within a part. Any pixels drawn within the
bounds of a 3D pick aperture result in a pick hit, causing the current
pick IDs to be automatically DMAed back to host memory.
Window system support
Many otherwise sophisticated 3D display systems become somewhat
befuddled when having to deal simultaneously with 3D rendering
applications and a 2D window system. Modern window systems
on interactive workstations require frequent context switching
of the rendering pipeline state. Some 3D architectures have tried to
minimize the overhead associated with context switching by supporting
multiple 3D contexts in hardware. Leo goes one step further
, maintaining two completely separate pipelines in hardware:
one for traditional 2D window operations; the other for full 3D rendering
. Because the majority of context switch requests are for 2D
window system operations, the need for more complex 3D pipeline
context switching is significantly reduced. The 2D context is much
lighter weight and correspondingly easier to context switch. The
two separate graphics pipelines operate completely in parallel, allowing
simultaneous access by two independent CPUs on a multi-processor
host.
2D functionality abstracts the frame buffer as a 1-bit, 8-bit, or 24-bit
pixel array. Operations include random pixel access, optimized character
cell writes, block clear, block copy, and the usual menagerie of
boolean operations, write masks, etc. Vertical block moves are special
cased, as they are typically used in vertical scrolling of text
windows, and can be processed faster than the general block move
because the pixel data does not have to move across LeoDraw chip
interleaves. Rendering into non-rectangular shaped windows is
supported by special clip hardware, resulting in no loss in performance
. A special block clear function allows designated windows
(and their Z-buffers) to be initialized to any given constant in under
200 microseconds. Without this last feature, 30 Hz or faster animation
of non-trivial objects would have been impossible.
7 VIDEO OUTPUT: LEO CROSS
Leo's standard video output format is 1280×1024 at 76 Hz refresh
rate, but it also supports other resolutions, including 1152×900,
interlaced 640×480 RS-170 (NTSC), interlaced 768×576 PAL
timing, and 960×680 113 Hz field sequential stereo. LeoCross
contains several color look-up tables, supporting multiple pseudo
color maps without color map flashing. The look-up table also supports
two different true color abstractions: 24-bit linear color
(needed by rendering applications), and REC-709 non-linear color
(required by many imaging applications).
Virtual reality support
Stereo output is becoming increasingly important for use in Virtual
Reality applications. Leo's design goals included support for the
Virtual Holographic Workstation system configuration described in
[5]. Leo's stereo resolution was chosen to support square pixels, so
that lines and antialiased lines are displayed properly in stereo, and
standard window system applications can co-exist with stereo. Stereo
can be enabled on a per-window basis (when in stereo mode windows
are effectively quad-buffered). Hooks were included in LeoCross
to support display technologies other than CRT's, that may be
needed for head-mounted virtual reality displays.
8 NURBS AND TEXTURE MAP SUPPORT
One of the advantages to using programmable elements within a
graphics accelerator is that additional complex functionality, such
as NURBS and texture mapping, can be accelerated. Texture mapping
is supported through special LeoFloat microcode and features
of LeoCommand. LeoFloat microcode also includes algorithms to
accelerate dynamic tessellation of trimmed NURBS surfaces. The
dynamic tessellation technique involves reducing trimmed NURBS
surfaces into properly sized triangles according to a display/pixel
space approximation criterion [1]; i.e., the fineness of tessellation is
view dependent. In the past, dynamic tessellation tended to be
mainly useful as a compression technique, to avoid storing all the
flattened triangles from a NURBS surface in memory. Dynamic tessellation
was not viewed as a performance enhancer, for while it
might generate only a third as many triangles as a static tessellation,
the triangles were generated at least an order of magnitude or more
slower than brute force triangle rendering. In addition it had other
problems, such as not handling general trimming. For many cases,
Leo's dynamic tesselator can generate and render triangles only a
small integer multiple slower than prestored triangle rendering,
which for some views, can result in faster overall object rendering.
9 RESULTS
Leo is physically a two-board sandwich, measuring 5.7 × 6.7 × 0.6
inches, that fits in a standard 2S SBus slot. Figure 6 is a photo of the
two boards, separated, showing all the custom ASICs. Figure 7 is a
photo of the complete Leo workstation, next to two of our units of
scale and the board set.
Leo can render 210K 100-pixel isolated, lighted, Gouraud shaded,
Z-buffered, depth cued triangles per second, with one infinite diffuse
and one ambient light source enabled. At 100 pixels, Leo is
still VRAM rendering speed limited; smaller triangles render faster.
Isolated 10-pixel antialiased, constant color, Z-buffered, depth cued
lines (which are actually 12 pixels long due to endpoint ramps, and
three pixels wide) render at a 422K per second rate. Corresponding
aliased lines render at 730K. Aliased and antialiased constant color,
Z-buffered, depth cued dots are clocked at 1100K. 24-bit image rasters
can be loaded onto the screen at a 10M pixel per second rate.
Screen scrolls, block moves, and raster character draws all also
have competitive performance. Figure 8 is a sample of shaded triangle
rendering.
10 SIMULATION
A system as complex as Leo cannot be debugged after the fact. All
the new rendering mathematics were extensively simulated before
being committed to hardware design. As each chip was defined, high,
medium, and low level simulators of its function were written and
continuously used to verify functionality and performance. Complete
images of simulated rendering were generated throughout the
course of the project, from within weeks of its start. As a result, the
window system and complex 3D rendering were up and running on
a complete board set within a week of receiving the first set of chips.
11 CONCLUSIONS
By paying careful attention to the forces that drive both performance
and cost, a physically compact complete 3D shaded graphics
accelerator was created. The focus was not on new rendering features
, but on cost reduction and performance enhancement of the
most useful core of 3D graphics primitives. New parallel algorithms
were developed to allow accurate screen space rendering of
primitives. Judicious use of hardware to perform some key traditional
software functions (such as format conversion and primitive
vertex reassembly) greatly simplified the microcode task. A specialized
floating-point core optimized for the primary task of processing
lines and triangles also supports more general graphics processing
, such as rasters and NURBS. The final system performance
is limited by the only chips not custom designed for Leo: the standard
RAM chips.
ACKNOWLEDGEMENTS
The authors would like to thank the entire Leo team for their efforts
in producing the system, and Mike Lavelle for help with the paper.
REFERENCES
1.
Abi-Ezzi, Salim, and L. Shirman.
Tessellation of Curved
Surfaces under Highly Varying Transformations. Proc. Eurographics
'91 (Vienna, Austria, September 1991), 385-397.
2.
Akeley, Kurt and T. Jermoluk.
High-Performance Polygon
Rendering, Proceedings of SIGGRAPH '88 (Atlanta, GA, Aug
1-5, 1988). In Computer Graphics 22, 4 (July 1988), 239-246.
3.
Anido, M., D. Allerton and E. Zaluska.
MIGS - A Multiprocessor
Image Generation System using RISC-like Micropro-cessors
. Proceedings of CGI '89 (Leeds, UK, June 1989),
Springer Verlag 1990.
4.
Deering, Michael, S. Winner, B. Schediwy, C. Duffy and N.
Hunt.
The Triangle Processor and Normal Vector Shader: A
VLSI system for High Performance Graphics. Proceedings of
SIGGRAPH '88 (Atlanta, GA, Aug 1-5, 1988). In Computer
Graphics 22, 4 (July 1988), 21-30.
5.
Deering, Michael.
High Resolution Virtual Reality. Proceedings
of SIGGRAPH '92 (Chicago, IL, July 26-31, 1992). In
Computer Graphics 26, 2 (July 1992), 195-202.
6.
Dunnett, Graham, M. White, P. Lister and R. Grimsdale.
The Image Chip for High Performance 3D Rendering. IEEE
Computer Graphics and Applications 12, 6 (November 1992),
41-52.
7.
Foley, James, A. van Dam, S. Feiner and J Hughes.
Computer
Graphics: Principles and Practice, 2nd ed., Addison-Wesley
, 1990.
8.
Kelley, Michael, S. Winner, K. Gould.
A Scalable Hardware
Render Accelerator using a Modified Scanline Algorithm.
Proceedings of SIGGRAPH '92 (Chicago, IL, July 26-31,
1992). In Computer Graphics 26, 2 (July 1992), 241-248.
9.
Kirk, David, and D. Voorhies.
The Rendering Architecture
of the DN10000VS. Proceedings of SIGGRAPH '90 (Dallas,
TX, August 6-10, 1990). In Computer Graphics 24, 4 (August
1990), 299-307.
10. Molnar, Steven, J. Eyles, J. Poulton.
PixelFlow: High-Speed
Rendering Using Image Composition. Proceedings of SIGGRAPH
'92 (Chicago, IL, July 26-31, 1992). In Computer
Graphics 26, 2 (July 1992), 231-240.
11. Nelson, Scott.
GPC Line Quality Benchmark Test. GPC Test
Suite, NCGA GPC committee 1991.
12. Torborg, John.
A Parallel Processor Architecture for Graphics
Arithmetic Operations. Proceedings of SIGGRAPH '87
(Anaheim, CA, July 27-31, 1987). In Computer Graphics 21,
4 (July 1987), 197-204.
Figure 8: Traffic Jam to Point Reyes. A scene containing 2,322,000 triangles, rendered by Leo Hardware. Stochastically
super-sampled 8 times. Models courtesy of Viewpoint Animation Engineering.
Figure 7: The complete SPARCstation ZX workstation, next to two
of our units of scale and the Leo board set.
Figure 6: The two boards, unfolded.
108 | input processing;3D graphics hardware;parallel algorithms;video output;general graphics processing;parallel graphics algorithms;small physical size;geometry data;3D shaded graphics;rendering;screen space rendering;antialiased lines;floating-point microprocessors;low cost;floating point processing;gouraud shading |
128 | Location based Indexing Scheme for DAYS | Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data being higher from the server to the users than from the users back to the server. Push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were tree based and the exponential indexing schemes. None of these schemes were able to address the requirements of location dependent data (LDD) which is highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We prove our argument with the help of simulation results. | INTRODUCTION
Wireless data dissemination is an economical and efficient
way to make desired data available to a large number of mobile or
static users. The mode of data transfer is essentially asymmetric,
that is, the capacity of the transfer of data (downstream
communication) from the server to the client (mobile user) is
significantly larger than the client or mobile user to the server
(upstream communication). The effectiveness of a data
dissemination system is judged by its ability to provide user the
required data at anywhere and at anytime. One of the best ways to
accomplish this is through the dissemination of highly
personalized Location Based Services (LBS) which allows users
to access personalized location dependent data. An example
would be someone using their mobile device to search for a
vegetarian restaurant. The LBS application would interact with
other location technology components or use the mobile user's
input to determine the user's location and download the
information about the restaurants in proximity to the user by
tuning into the wireless channel which is disseminating LDD.
We see a limited deployment of LBS by some service
providers. But there is every indication that with time some of
the complex technical problems such as uniform location
framework, calculating and tracking locations in all types of
places, positioning in various environments, innovative location
applications, etc., will be resolved and LBS will become a
common facility and will help to improve market productivity and
customer comfort. In our project called DAYS, we use a wireless
data broadcast mechanism to push LDD to users, and mobile users
monitor and tune the channel to find and download the required
data. A simple broadcast, however, is likely to cause significant
performance degradation in the energy constrained mobile devices
and a common solution to this problem is the use of efficient air
indexing. The indexing approach stores control information which
tells the user about the data location in the broadcast and how and
when he could access it. A mobile user, thus, has some free time
to go into the doze mode which conserves valuable power. It also
allows the user to personalize his own mobile device by
selectively tuning to the information of his choice.
Access efficiency and energy conservation are the two issues
which are significant for data broadcast systems. Access efficiency
refers to the latency experienced from when a request is initiated until the
response is received. Energy conservation [7, 10] refers to the
efficient use of the limited energy of the mobile device in
accessing broadcast data. Two parameters that affect these are the
tuning time and the access latency. Tuning time refers to the time
during which the mobile unit (MU) remains in active state to tune
the channel and download its required data. It can also be defined
as the number of buckets tuned by the mobile device in active
state to get its required data. Access latency may be defined as the
time elapsed since a request has been issued till the response has
been received.
This research was supported by a grant from NSF IIS-0209170.
Several indexing schemes have been proposed in the past and
the prominent among them are the tree based and the exponential
indexing schemes [17]. The main disadvantage of the tree based schemes is that they rely on centralized tree structures. To start a search, the MU has to wait until it reaches the root of the next broadcast tree, which significantly affects the tuning time of the mobile unit. The exponential schemes facilitate index replication by sharing links in different search trees. For broadcasts with a large number of pages, the exponential scheme has been shown to perform similarly to the tree based schemes in terms of access latency. Also, the average length of the broadcast increases due to index replication, and this may cause a significant increase in the access latency. None of the above indexing schemes is equally effective in broadcasting location dependent data; in addition to the latency concerns, they lack properties needed to address LDD issues. We propose an indexing scheme in DAYS which takes care of some of these problems. We show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time.
The rest of the paper is presented as follows. In section 2, we
discuss previous work related to indexing of broadcast data.
Section 3 describes our DAYS architecture. Location dependent
data, its generation and subsequent broadcast is presented in
section 4. Section 5 discusses our indexing scheme in detail.
Simulation of our scheme and its performance evaluation is
presented in section 6. Section 7 concludes the paper and
mentions future related work.
PREVIOUS WORK
Several disk-based indexing techniques have been used for air
indexing. Imielinski et al. [5, 6] applied the B+ index tree, where
the leaf nodes store the arrival times of the data items. The
distributed indexing method was proposed to efficiently replicate
and distribute the index tree in a broadcast. Specifically, the index
tree is divided into a replicated part and a non replicated part.
Each broadcast consists of the replicated part and the non-replicated
part that indexes the data items immediately following
it. As such, each node in the non-replicated part appears only once
in a broadcast and, hence, reduces the replication cost and access
latency while achieving a good tuning time. Chen et al. [2] and
Shivakumar et al. [8] considered unbalanced tree structures to
optimize energy consumption for non-uniform data access. These
structures minimize the average index search cost by reducing the
number of index searches for hot data at the expense of spending
more on cold data. Tan and Yu discussed data and index organization under skewed broadcasts. Hashing and signature methods have also been suggested for wireless broadcasts that support equality queries [9].
proposed in [5]. The flexible index first sorts the data items in
ascending (or descending) order of the search key values and then
divides them into p segments. The first bucket in each data
segment contains a control index, which is a binary index
mapping a given key value to the segment containing that key,
and a local index, which is an m-entry index mapping a given key
value to the buckets within the current segment. By tuning the
parameters of p and m, mobile clients can achieve either a good
tuning time or good access latency. Another indexing technique
proposed is the exponential indexing scheme [17]. In this scheme,
a parameterized index, called the exponential index is used to
optimize the access latency or the tuning time. It facilitates index
replication by linking different search trees. All of the above
mentioned schemes have been applied to data which are non
related to each other. These non related data may be clustered or
non clustered. However, none of them has specifically addressed
the requirements of LDD. Location dependent data are data which
are associated with a location. Presently there are several
applications that deal with LDD [13, 16]. Almost all of them
depict LDD with the help of hierarchical structures [3, 4]. This is
based on the containment property of location dependent data.
The Containment property helps determine the relative position of an object by defining or identifying the locations that contain those objects. The subordinate locations are hierarchically related to each other. Thus, the Containment property limits the range of
availability or operation of a service. We use this containment
property in our indexing scheme to index LDD.
DAYS ARCHITECTURE
DAYS has been conceptualized to disseminate topical and non-topical
data to users in a local broadcast space and to accept
queries from individual users globally. Topical data, for example,
weather information, traffic information, stock information, etc.,
constantly changes over time. Non topical data such as hotel,
restaurant, real estate prices, etc., do not change so often. Thus,
we envision the presence of two types of data distribution: In the
first case, server pushes data to local users through wireless
channels. The other case deals with the server sending results of
user queries through downlink wireless channels. Technically, we
see the presence of two types of queues in the pull based data
access. One is a heavily loaded queue containing globally
uploaded queries. The other is a comparatively lightly loaded
queue consisting of locally uploaded queries. The DAYS
architecture [12] as shown in figure 1 consists of a Data Server,
Broadcast Scheduler, DAYS Coordinator, Network of LEO
satellites for global data delivery and a Local broadcast space.
Data is pushed into the local broadcast space so that users may
tune into the wireless channels to access the data. The local
broadcast space consists of a broadcast tower, mobile units and a
network of data staging machines called the surrogates. Data
staging in surrogates has been earlier investigated as a successful
technique [12, 15] to cache users' related data. We believe that
data staging can be used to drastically reduce the latency time for
both the local broadcast data as well as global responses. Query requests in the surrogates may subsequently be used to generate the popularity patterns which ultimately decide the broadcast schedule [12].
Figure 1. DAYS Architecture
Figure 2. Location Structure of Starbucks, Plaza
LOCATION DEPENDENT DATA (LDD)
We argue that incorporating location information in wireless data
broadcast can significantly decrease the access latency. This
property becomes highly useful for mobile units, which have limited storage and processing capability. There are a variety of applications to obtain information about traffic, restaurant and hotel booking, fast food, gas stations, post offices, grocery stores, etc. If these applications are coupled with location information, then the search will be fast and highly cost effective. An important property of locations is Containment, which helps to determine the relative location of an object with respect to its parent that contains the object. Thus, Containment limits the range of availability of data. We use this property in our indexing scheme. The database contains the broadcast contents, which are converted into LDD [14] by associating them with their respective locations so that they can be broadcast in a clustered manner. The clustering of LDD helps the user to locate information efficiently and supports the containment property. We present an example to
justify our proposition.
Example: Suppose a user issues query "Starbucks Coffee in
Plaza please." to access information about the Plaza branch of
Starbucks Coffee in Kansas City. In the case of location
independent set up the system will list all Starbucks coffee shops
in Kansas City area. It is obvious that such responses will
increase access latency and are not desirable. These can be
managed efficiently if the server has location dependent data, i.e.,
a mapping between a Starbucks coffee shop data and its physical
location. Also, for a query including range of locations of
Starbucks, a single query requesting locations for the entire
region of Kansas City, as shown in Figure 2, will suffice. This
will save enormous amount of bandwidth by decreasing the
number of messages and at the same time will be helpful in
preventing the scalability bottleneck in highly populated area.
4.1 Mapping Function for LDD
The example justifies the need for a mapping function to process
location dependent queries. This will be especially important for
pull based queries across the globe for which the reply could be
composed for different parts of the world. The mapping function
is necessary to construct the broadcast schedule.
We define a Global Property Set (GPS) [11], an Information Content (IC) set, and a Location Hierarchy (LH) set, where IC ⊆ GPS and LH ⊆ GPS, to develop a mapping function. LH = {l_1, l_2, l_3, ..., l_k}, where l_i represents a location in the location tree, and IC = {ic_1, ic_2, ic_3, ..., ic_n}, where ic_i represents an information type. For example, if traffic, weather, and stock information are in the broadcast, then IC = {ic_traffic, ic_weather, ic_stock}. The mapping scheme must be able to identify and select an IC member and a LH node for (a) correct association, (b) granularity match, and (c) termination condition. For example, weather ∈ IC could be associated with a country, a state, a city, or a town of LH. The granularity match between weather and a LH node is as per user requirement. Thus, with a coarse granularity, weather information is associated with a country to get the country's weather, and with a town at a finer granularity. If a town is the finest granularity, then it defines the terminal condition for association between IC and LH for weather. This means that a user cannot get weather information about a subdivision of a town; in reality, the weather of a subdivision does not make any sense.
We develop a simple heuristic mapping scheme based on user requirements. Let IC = {m_1, m_2, m_3, ..., m_k}, where m_i represents an element of IC, and let LH = {n_1, n_2, n_3, ..., n_l}, where n_i represents a member of LH. We define the GPS for IC as GPS_IC ⊆ GPS and for LH as GPS_LH ⊆ GPS, with GPS_IC = {P_1, P_2, ..., P_n}, where P_1, P_2, P_3, ..., P_n are the properties of its members, and GPS_LH = {Q_1, Q_2, ..., Q_m}, where Q_1, Q_2, ..., Q_m are the properties of its members. The properties of a particular member of IC are a subset of GPS_IC. It is generally true that (property set(m_i ∈ IC) ∩ property set(m_j ∈ IC)) = ∅; however, there may be cases where the intersection is not null. For example, stock ∈ IC and movie rating ∈ IC do not have any property in common. We assume that any two or more members of IC have at least one common geographical property (i.e., location), because DAYS broadcasts information about those categories which are closely tied with a location. For example, the stock of a company is related to a country, weather is related to a city or state, etc.
We define the property subset of m_i ∈ IC as PSm_i ∀ m_i ∈ IC, and PSm_i = {P_1, P_2, ..., P_r} where r ≤ n. ∀ P_r {P_r ∈ PSm_i ⇒ P_r ∈ GPS_IC}, which implies that ∀ i, PSm_i ⊆ GPS_IC. The geographical properties of this set are indicative of whether m_i ∈ IC can be mapped to only a single granularity level (i.e., a single location) in LH or to multiple granularity levels (i.e., more than one node in the hierarchy) in LH. How many and which granularity levels an m_i should map to depends upon the level at which the service provider wants to provide information about the m_i in question.
Similarly, we define a property subset of LH members as PSn_j ∀ n_j ∈ LH, which can be written as PSn_j = {Q_1, Q_2, Q_3, ..., Q_s} where s ≤ m. In addition, ∀ Q_s {Q_s ∈ PSn_j ⇒ Q_s ∈ GPS_LH}, which implies that ∀ j, PSn_j ⊆ GPS_LH.
The process of mapping from IC to LH is then one of identifying, for some m_x ∈ IC, one or more n_y ∈ LH such that PSm_x ∩ PSn_y ≠ ∅. When such an n_y is found, m_x maps to n_y and to all children of n_y if m_x can map to multiple granularity levels, or m_x maps only to n_y if m_x can map to a single granularity level.
We assume that new members can join and old members can leave IC or LH at any time. The deletion of members from the IC space is simple, but the addition of members to the IC space is more restrictive. If we want to add a new member to the IC space, then we first define a property set for the new member, PSm_new_m = {P_1, P_2, P_3, ..., P_t}, and add it to IC only if the condition ∀ P_w {P_w ∈ PSm_new_m ⇒ P_w ∈ GPS_IC} is satisfied. This scheme has the additional benefit of allowing the information service providers to have control over what kind of information they wish to provide to the users. We present the following example to illustrate the mapping concept.
IC = {Traffic, Stock, Restaurant, Weather, Important history dates, Road conditions}
LH = {Country, State, City, Zip-code, Major-roads}
GPS_IC = {Surface-mobility, Roads, High, Low, Italian-food, StateName, Temp, CityName, Seat-availability, Zip, Traffic-jams, Stock-price, CountryName, MajorRoadName, Wars, Discoveries, World}
GPS_LH = {Country, CountrySize, StateName, CityName, Zip, MajorRoadName}
PS(IC_Stock) = {Stock-price, CountryName, High, Low}
PS(IC_Traffic) = {Surface-mobility, Roads, High, Low, Traffic-jams, CityName}
PS(IC_Important dates in history) = {World, Wars, Discoveries}
PS(IC_Road conditions) = {Precipitation, StateName, CityName}
PS(IC_Restaurant) = {Italian-food, Zip code}
PS(IC_Weather) = {StateName, CityName, Precipitation, Temperature}
PS(LH_Country) = {CountryName, CountrySize}
PS(LH_State) = {StateName, State size}
PS(LH_City) = {CityName, City size}
PS(LH_Zipcode) = {ZipCodeNum}
PS(LH_Major roads) = {MajorRoadName}
Now, only PS(IC_Stock) ∩ PS(LH_Country) ≠ ∅. In addition, PS(IC_Stock) indicates that Stock can map to only a single location, Country. When we consider the member Traffic of the IC space, only PS(IC_Traffic) ∩ PS(LH_City) ≠ ∅. As PS(IC_Traffic) indicates that Traffic can map to only a single location, it maps only to City and none of its children. Note that, unlike Stock, mapping of Traffic with Major roads, which is a child of City, would be meaningful; however, service providers have the right to control the granularity levels at which they want to provide information about a member of the IC space. PS(IC_Road conditions) ∩ PS(LH_State) ≠ ∅ and PS(IC_Road conditions) ∩ PS(LH_City) ≠ ∅, so Road conditions maps to State as well as City. As PS(IC_Road conditions) indicates that Road conditions can map to multiple granularity levels, Road conditions will also map to Zip Code and Major roads, which are the children of State and City. Similarly, Restaurant maps only to Zip code, and Weather maps to State, City and their children, Major Roads and Zip Code.
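To make the mapping step concrete, the following Python sketch (our own illustration, not the DAYS implementation) applies the shared-geographical-property test from the example above and propagates multi-granularity members to the children of a matched location node. The property sets, the containment hierarchy and the multi_granularity flags are taken from the example; only a subset of the IC members is shown and identifiers are lightly normalized.

```python
# Sketch of the IC -> LH mapping described above (illustrative only).

LH_CHILDREN = {              # containment hierarchy from the example
    "Country": ["State"],
    "State": ["City"],
    "City": ["Zip-code", "Major-roads"],
    "Zip-code": [],
    "Major-roads": [],
}

PS_LH = {
    "Country": {"CountryName", "CountrySize"},
    "State": {"StateName", "StateSize"},
    "City": {"CityName", "CitySize"},
    "Zip-code": {"ZipCodeNum"},
    "Major-roads": {"MajorRoadName"},
}

PS_IC = {
    "Stock": {"Stock-price", "CountryName", "High", "Low"},
    "Traffic": {"Surface-mobility", "Roads", "High", "Low", "Traffic-jams", "CityName"},
    "Road conditions": {"Precipitation", "StateName", "CityName"},
}

# Whether an IC member may map to multiple granularity levels (from the example).
MULTI_GRANULARITY = {"Stock": False, "Traffic": False, "Road conditions": True}


def descendants(node):
    """All locations contained in `node`, via the Containment property."""
    out = []
    for child in LH_CHILDREN[node]:
        out.append(child)
        out.extend(descendants(child))
    return out


def map_ic_member(ic_member):
    """LH nodes whose property set shares at least one property with the IC
    member's property set, plus their children when multiple granularity
    levels are allowed."""
    mapped = set()
    for node, props in PS_LH.items():
        if PS_IC[ic_member] & props:              # non-empty intersection
            mapped.add(node)
            if MULTI_GRANULARITY[ic_member]:
                mapped.update(descendants(node))
    return mapped


if __name__ == "__main__":
    for member in PS_IC:
        print(member, "->", sorted(map_ic_member(member)))
    # Stock -> Country only; Traffic -> City only;
    # Road conditions -> State, City and their children.
```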
LOCATION BASED INDEXING SCHEME
This section discusses our location based indexing scheme
(LBIS). The scheme is designed to conform to the LDD broadcast
in our project DAYS. As discussed earlier, we use the
containment property of LDD in the indexing scheme. This
significantly limits the search of our required data to a particular
portion of broadcast. Thus, we argue that the scheme provides
bounded tuning time.
We describe the architecture of our indexing scheme. Our scheme
contains separate data buckets and index buckets. The index
buckets are of two types. The first type is called the Major index.
The Major index provides information about the types of data broadcast. For example, if we intend to broadcast information such as Entertainment, Weather, Traffic, etc., then the major index points to these major types of information and/or their main subtypes, the number of main subtypes varying from one information type to another. This strictly limits the number of accesses to a Major index. The Major index never points to the original data; it points to the sub indexes called the Minor indexes. The minor indexes are the indexes which actually point to the original data. We call these minor index pointers Location Pointers, as they point to the data which are associated with a location. Thus, our search for a data item includes accessing a major index and some minor indexes, the number of minor indexes varying depending on the type of information.
Thus, our indexing scheme takes into account the hierarchical
nature of the LDD, the Containment property, and requires our
broadcast schedule to be clustered based on data type and
location. The structure of the location hierarchy requires the use
of different types of index at different levels. The structure and
positions of index strictly depend on the location hierarchy as
described in our mapping scheme earlier. We illustrate the
implementation of our scheme with an example. The rules for
framing the index are mentioned subsequently.
Figure 3. Location Mapped Information for Broadcast
Figure 4. Data coupled with Location based Index
Example: Let us suppose that our broadcast content contains IC_entertainment and IC_weather, which are represented as shown in Fig. 3. Ai represents an area of the city and Ri represents a road in a certain area. The leaves of the Weather structure represent four cities. The index structure is given in Fig. 4, which shows the positions of the major and minor indexes and the data in the broadcast schedule.
We propose the following rules for the creation of the air indexed
broadcast schedule:
The major index and the minor index are created.
The major index contains the position and range of different
types of data items (Weather and Entertainment, Figure 3)
and their categories. The sub categories of Entertainment,
Movie and Restaurant, are also in the index. Thus, the major
index contains Entertainment (E), Entertainment-Movie
(EM), Entertainment-Restaurant (ER), and Weather (W). The
tuple (S, L) represents the starting position (S) of the data
item and L represents the range of the item in terms of
number of data buckets.
The minor index contains the variables A, R and a pointer
Next. In our example (Figure 3), road R represents the first
node of area A. The minor index is used to point to actual
data buckets present at the lowest levels of the hierarchy. In
contrast, the major index points to a broader range of
locations and so it contains information about main and sub
categories of data.
Index information is not incorporated in the data buckets.
Index buckets are separate containing only the control
information.
The number of major index buckets is m = #(IC), where IC = {ic_1, ic_2, ic_3, ..., ic_n}, ic_i represents an information type, and # represents the cardinality of the Information Content set IC. In this example, IC = {ic_Movie, ic_Weather, ic_Restaurant} and so #(IC) = 3. Hence, the number of major index buckets is 3.
The mechanism to resolve a query is present in the Java based coordinator in the MU. For example, if a query Q is presented as Q(Entertainment, Movie, Road_1), then the resultant search will be for the EM information in the major index. We say Q → EM.
Our proposed index works as follows: Let us suppose that an MU
issues a query which is represented by Java Coordinator present in
the MU as "Restaurant information on Road 7". This is resolved
by the coordinator as Q ER. This means one has to search for
ER unit of index in the major index. Let us suppose that the MU
logs into the channel at R2. The first index it receives is a minor
index after R2. In this index, value of Next variable = 4, which
means that the next major index is present after bucket 4. The MU
may go into doze mode. It becomes active after bucket 4 and
receives the major index. It searches for ER information which is
the first entry in this index. It is now certain that the MU will get
the position of the data bucket in the adjoining minor index. The
second unit in the minor index depicts the position of the required
data R7. It tells that the data bucket is the first bucket in Area 4.
The MU goes into doze mode again and becomes active after
bucket 6. It gets the required data in the next bucket. We present
the algorithm for searching the location based Index.
Algorithm 1 Location based Index Search in DAYS
1. Scan the broadcast for the next index bucket, found = false
2. While (not found) do
3.   if bucket is a Major Index then
4.     Find the Type and tuple (S, L)
5.     if S is greater than 1, go into doze mode for S seconds
6.     end if
7.     Wake up at the Sth bucket and observe the Minor Index
8.   end if
9.   if bucket is a Minor Index then
10.    if Type_Requested is not equal to Type_found and (A, R)_Requested is not equal to (A, R)_found then
11.      Go into doze mode till NEXT and repeat from step 3
12.    end if
13.    else find the entry in the Minor Index which points to the data
14.      Compute the time of arrival T of the data bucket
15.      Go into doze mode till T
16.      Wake up at T and access the data, found = true
17.    end else
18.  end if
19. end While
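The following Python sketch (our own illustration, not the DAYS code) simulates a simplified variant of the search above on a toy broadcast cycle: the major index is used only to confirm the data type, the adjoining minor index supplies the arrival position, and the sketch reports how many buckets are read (tuning time) versus skipped in doze mode. The bucket layout and the (S, L) values are invented for illustration and wrap-around within a cycle is ignored for brevity.

```python
# Toy simulation of the location based index search (illustrative only).

MAJOR, MINOR, DATA = "major", "minor", "data"

def make_broadcast():
    # One broadcast cycle: a major index, a minor index, then data buckets.
    # The (S, L) tuples mirror the convention of Figure 4 but are not used
    # by this simplified search.
    return [
        {"kind": MAJOR, "entries": {"ER": (2, 4), "EM": (6, 2), "W": (8, 2)}},
        {"kind": MINOR, "next": 10,          # position of the next major index
         "locations": {("A3", "R5"): 2, ("A4", "R7"): 4}},
        {"kind": DATA, "type": "ER", "loc": ("A3", "R5")},
        {"kind": DATA, "type": "ER", "loc": ("A3", "R6")},
        {"kind": DATA, "type": "ER", "loc": ("A4", "R7")},
        {"kind": DATA, "type": "ER", "loc": ("A4", "R8")},
        {"kind": DATA, "type": "EM", "loc": ("A1", "R1")},
        {"kind": DATA, "type": "EM", "loc": ("A1", "R2")},
        {"kind": DATA, "type": "W", "loc": ("KC", None)},
        {"kind": DATA, "type": "W", "loc": ("SL", None)},
    ]

def search(broadcast, wanted_type, wanted_loc, start=0):
    """Walk the broadcast from `start`; return (buckets read, buckets dozed)."""
    n = len(broadcast)
    read = doze = 0
    i = start
    while True:
        bucket = broadcast[i % n]
        read += 1
        if bucket["kind"] == MINOR and wanted_loc in bucket["locations"]:
            target = bucket["locations"][wanted_loc]   # data bucket position
            doze += target - (i % n) - 1               # sleep until it arrives
            return read + 1, doze                      # wake up, read the data
        if bucket["kind"] == MAJOR and wanted_type in bucket["entries"]:
            i += 1                                     # adjoining minor index
        elif bucket["kind"] == MINOR:                  # wrong minor index:
            doze += bucket["next"] - (i % n) - 1       # doze till NEXT
            i = bucket["next"]
        else:
            i += 1                                     # keep scanning for an index

# Restaurant information on Road 7 (type ER, area A4, road R7):
print(search(make_broadcast(), "ER", ("A4", "R7")))    # -> (3, 2)
```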
PERFORMANCE EVALUATION
Conservation of energy is the main concern when we try to access
data from wireless broadcast. An efficient scheme should allow
the mobile device to access its required data by staying active for
a minimum amount of time. This would save a considerable amount of energy. Since items are distributed based on types and are mapped to suitable locations, we argue that our broadcast deals with clustered data types. The mobile unit has to access a larger major index and a relatively much smaller minor index to get information about the time of arrival of the data. This is in contrast to the exponential scheme, where the indexes are of equal sizes. The example discussed and Algorithm 1 reveal that to access any data, we need to access the major index only once, followed by one or more accesses to the minor index. The number of minor index accesses depends on the number of internal locations. As the number of internal locations varies from item to item (for example, Weather is generally associated with a City whereas Traffic is granulated up to major and minor roads of a city), we argue that the structure of the location mapped information may be visualized as a forest, that is, a collection of general trees, the number of general trees depending on the types of information broadcast and the depth of a tree depending on the granularity of the location information associated with the information.
For our experiments, we assume the forest as a collection of
balanced M-ary trees. We further assume the M-ary trees to be
full by assuming the presence of dummy nodes in different levels
of a tree.
Thus, if the number of data items (leaves) is d and the fan-out of the M-ary tree is m, then
n = (m*d - 1)/(m - 1), where n is the number of vertices in the tree, and
i = (d - 1)/(m - 1), where i is the number of internal vertices.
The tuning time for a data item involves 1 unit of time required to access the major index plus the time required to reach the data items present in the leaves of the tree. Thus, the tuning time with d data items is t = log_m d + 1, and we can say that the tuning time is bounded by O(log_m d).
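As a quick numeric check (ours, not from the paper), the snippet below evaluates the bound t = log_m d + 1 at the extreme broadcast sizes and fan-out values used in the simulation.

```python
import math

# Evaluate the tuning-time bound t = log_m(d) + 1 for the simulation settings.
for m in (3, 4, 5, 6):              # fan-out values from Table 1
    for d in (5000, 25000):         # smallest and largest broadcast sizes
        t = math.log(d, m) + 1
        print(f"m={m}, d={d}: t ~= {t:.2f} bucket accesses")
# For m=3 this gives roughly 8.7-10.2 bucket accesses, the same order of
# magnitude as the tuning times discussed for Figure 5.
```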
We compare our scheme with the distributed indexing and
exponential scheme. We assume a flat broadcast and number of
pages varying from 5000 to 25000. The various simulation
parameters are shown in Table 1.
Figures 5-8 show the relative tuning times of the three indexing algorithms, i.e., the LBIS, the exponential scheme and the distributed tree scheme. Figure 5 shows the result for the number of internal location nodes m = 3. We can see that LBIS significantly outperforms both of the other schemes. The tuning time in LBIS ranges from approximately 6.8 to 8. This large tuning time is due to the
fact that after reaching the lowest minor index, the MU may have
to access few buckets sequentially to get the required data bucket.
We can see that the tuning time tends to become stable as the
length of broadcast increases. In figure 6 we consider m= 4. Here
we can see that the exponential and the distributed perform almost
similarly, though the former seems to perform slightly better as
the broadcast length increases. A very interesting pattern is visible in figure 7. For smaller broadcast sizes, the LBIS seems to have a larger tuning time than the other two schemes, but as the length of the broadcast increases, it is clearly visible that the LBIS outperforms the other two schemes. The distributed tree indexing shows similar behavior to the LBIS. The tuning time in LBIS remains low
because the algorithm allows the MU to skip some intermediate
Minor Indexes. This allows the MU to move into lower levels
directly after coming into active mode, thus saving valuable
energy. This action is not possible in the distributed tree indexing
and hence we can observe that its tuning time is more than the
LBIS scheme, although it performs better than the exponential
scheme. Figure 8, in contrast, shows that the tuning time in LBIS, though less than that of the other two schemes, tends to increase sharply as the broadcast length becomes greater than 15000 pages. This may be attributed both to the increase in the time required to scan the intermediate Minor Indexes and to the length of the broadcast. But we can observe that the slope of the LBIS curve is significantly smaller than that of the other two curves.
Table 1. Simulation Parameters
P   Definition                                                  Values
N   Number of data items                                        5000 - 25000
m   Number of internal location nodes                           3, 4, 5, 6
B   Capacity of bucket without index (for exponential index)    10, 64, 128, 256
i   Index base for exponential index                            2, 4, 6, 8
k   Index size for distributed tree                             8 bytes
The simulation results establish some facts about our
location based indexing scheme. The scheme performs
better than the other two schemes in terms of tuning time in
most of the cases. As the length of the broadcast increases, after a
certain point, though the tuning time increases as a result of
factors which we have described before, the scheme always
performs better than the other two schemes. Due to the prescribed
limit of the number of pages in the paper, we are unable to show
more results, but the omitted results show a similar trend to the results depicted in figures 5-8.
CONCLUSION AND FUTURE WORK
In this paper we have presented a scheme for mapping of wireless
broadcast data with their locations. We have presented an example
to show how the hierarchical structure of the location tree maps
with the data to create LDD. We have presented a scheme called
LBIS to index this LDD. We have used the containment property
of LDD in the scheme that limits the search to a narrow range of
data in the broadcast, thus saving valuable energy in the device.
The mapping of data with locations and the indexing scheme will
be used in our DAYS project to create the push based
architecture. The LBIS has been compared with two other
prominent indexing schemes, i.e., the distributed tree indexing
scheme and the exponential indexing scheme. We showed in our
simulations that the LBIS scheme has the lowest tuning time for broadcasts having a large number of pages, thus saving valuable battery power in the MU.
In future work we will try to incorporate a pull based architecture in our DAYS project, in which data from the server is available for access by global users. This may be done by submitting a request to the source server. The query in this case is a global query; it is transferred from the user's source server to the destination server through the use of LEO satellites. We intend to use our LDD scheme and data staging architecture in the pull based architecture. We will show that the LDD scheme together with the data staging architecture significantly improves the latency for global as well as local queries.
REFERENCES
[1] Acharya, S., Alonso, R., Franklin, M. and Zdonik, S. Broadcast disks: Data management for asymmetric communications environments. In Proceedings of the ACM SIGMOD Conference on Management of Data, pages 199-210, San Jose, CA, May 1995.
[2] Chen, M.S., Wu, K.L. and Yu, P. S. Optimizing index allocation for sequential data broadcasting in wireless mobile computing. IEEE Transactions on Knowledge and Data Engineering (TKDE), 15(1):161-173, January/February 2003.
Figures 5-8. Average tuning time vs. broadcast size (# buckets) for the distributed tree, exponential, and LBIS schemes.
[3] Hu, Q. L., Lee, D. L. and Lee, W.C. Performance evaluation of a wireless hierarchical data dissemination system. In Proceedings of the 5th Annual ACM International Conference on Mobile Computing and Networking (MobiCom'99), pages 163-173, Seattle, WA, August 1999.
[4] Hu, Q. L., Lee, W.C. and Lee, D. L. Power conservative multi-attribute queries on data broadcast. In Proceedings of the 16th International Conference on Data Engineering (ICDE'00), pages 157-166, San Diego, CA, February 2000.
[5] Imielinski, T., Viswanathan, S. and Badrinath, B. R. Power efficient filtering of data on air. In Proceedings of the 4th International Conference on Extending Database Technology (EDBT'94), pages 245-258, Cambridge, UK, March 1994.
[6] Imielinski, T., Viswanathan, S. and Badrinath, B. R. Data on air: Organization and access. IEEE Transactions on Knowledge and Data Engineering (TKDE), 9(3):353-372, May/June 1997.
[7] Shih, E., Bahl, P. and Sinclair, M. J. Wake on wireless: An event driven energy saving strategy for battery operated devices. In Proceedings of the 8th Annual ACM International Conference on Mobile Computing and Networking (MobiCom'02), pages 160-171, Atlanta, GA, September 2002.
[8] Shivakumar, N. and Venkatasubramanian, S. Energy-efficient indexing for information dissemination in wireless systems. ACM/Baltzer Journal of Mobile Networks and Applications (MONET), 1(4):433-446, December 1996.
[9] Tan, K. L. and Yu, J. X. Energy efficient filtering of non uniform broadcast. In Proceedings of the 16th International Conference on Distributed Computing Systems (ICDCS'96), pages 520-527, Hong Kong, May 1996.
[10] Viredaz, M. A., Brakmo, L. S. and Hamburgen, W. R. Energy management on handheld devices. ACM Queue, 1(7):44-52, October 2003.
[11] Garg, N., Kumar, V. and Dunham, M.H. Information Mapping and Indexing in DAYS. In 6th International Workshop on Mobility in Databases and Distributed Systems, in conjunction with the 14th International Conference on Database and Expert Systems Applications, September 1-5, Prague, Czech Republic, 2003.
[12] Acharya, D., Kumar, V. and Dunham, M.H. InfoSpace: Hybrid and Adaptive Public Data Dissemination System for Ubiquitous Computing. Accepted for publication in the special issue of Pervasive Computing, Wiley Journal for Wireless Communications and Mobile Computing, 2004.
[13] Acharya, D., Kumar, V. and Prabhu, N. Discovering and using Web Services in M-Commerce. In Proceedings of the 5th VLDB Workshop on Technologies for E-Services, Toronto, Canada, 2004.
[14] Acharya, D. and Kumar, V. Indexing Location Dependent Data in broadcast environment. Accepted for publication, JDIM special issue on Distributed Data Management, 2005.
[15] Flinn, J., Sinnamohideen, S. and Satyanarayan, M. Data Staging on Untrusted Surrogates. Intel Research, Pittsburg, Unpublished Report, 2003.
[16] Seydim, A.Y., Dunham, M.H. and Kumar, V. Location dependent query processing. In Proceedings of the 2nd ACM International Workshop on Data Engineering for Wireless and Mobile Access, pages 47-53, Santa Barbara, California, USA, 2001.
[17] Xu, J., Lee, W.C. and Tang, X. Exponential Index: A Parameterized Distributed Indexing Scheme for Data on Air. In Proceedings of the 2nd ACM/USENIX International Conference on Mobile Systems, Applications, and Services (MobiSys'04), Boston, MA, June 2004.
| containment;indexing scheme;access efficiency;indexing;Wireless data broadcast;mapping function;location based services;wireless;energy conservation;location dependent data;broadcast;push based architecture;data dissemination;data staging |
129 | Lossless Online Bayesian Bagging | Bagging frequently improves the predictive performance of a model. An online version has recently been introduced, which attempts to gain the benefits of an online algorithm while approximating regular bagging. However, regular online bagging is an approximation to its batch counterpart and so is not lossless with respect to the bagging operation. By operating under the Bayesian paradigm, we introduce an online Bayesian version of bagging which is exactly equivalent to the batch Bayesian version, and thus when combined with a lossless learning algorithm gives a completely lossless online bagging algorithm. We also note that the Bayesian formulation resolves a theoretical problem with bagging, produces less variability in its estimates, and can improve predictive performance for smaller data sets. | Introduction
In a typical prediction problem, there is a trade-off between bias and variance, in that after
a certain amount of fitting, any increase in the precision of the fit will cause an increase in
the prediction variance on future observations. Similarly, any reduction in the prediction
variance causes an increase in the expected bias for future predictions. Breiman (1996)
introduced bagging as a method of reducing the prediction variance without affecting the
prediction bias. As a result, predictive performance can be significantly improved.
Bagging, short for "Bootstrap AGGregatING", is an ensemble learning method. Instead
of making predictions from a single model fit on the observed data, bootstrap samples
are taken of the data, the model is fit on each sample, and the predictions are averaged
over all of the fitted models to get the bagged prediction. Breiman (1996) explains that
bagging works well for unstable modeling procedures, i.e. those for which the conclusions
are sensitive to small changes in the data. He also gives a theoretical explanation of how
bagging works, demonstrating the reduction in mean-squared prediction error for unstable
procedures. Breiman (1994) demonstrated that tree models, among others, are empirically
unstable.
Online bagging (Oza and Russell, 2001) was developed to implement bagging sequentially
, as the observations appear, rather than in batch once all of the observations have
arrived. It uses an asymptotic approximation to mimic the results of regular batch bagging,
and as such it is not a lossless algorithm. Online algorithms have many uses in modern
computing. By updating sequentially, the update for a new observation is relatively quick
compared to re-fitting the entire database, making real-time calculations more feasible.
Such algorithms are also advantageous for extremely large data sets where reading through
the data just once is time-consuming, so batch algorithms which would require multiple
passes through the data would be infeasible.
In this paper, we consider a Bayesian version of bagging (Clyde and Lee, 2001) based
on the Bayesian bootstrap (Rubin, 1981). This overcomes a technical difficulty with the
usual bootstrap in bagging. It also leads to a theoretical reduction in variance over the
bootstrap for certain classes of estimators, and a significant reduction in observed variance
and error rates for smaller data sets. We present an online version which, when combined
with a lossless online model-fitting algorithm, continues to be lossless with respect to the
bagging operation, in contrast to ordinary online bagging. The Bayesian approach requires
the learning base algorithm to accept weighted samples.
In the next section we review the basics of the bagging algorithm, of online bagging,
and of Bayesian bagging. Next we introduce our online Bayesian bagging algorithm. We
then demonstrate its efficacy with classification trees on a variety of examples.
Bagging
In ordinary (batch) bagging, bootstrap re-sampling is used to reduce the variability of an
unstable estimator. A particular model or algorithm, such as a classification tree, is specified
for learning from a set of data and producing predictions. For a particular data set X_m, denote the vector of predictions (at the observed sites or at new locations) by G(X_m). Denote the observed data by X = (x_1, . . . , x_n). A bootstrap sample of the data is a sample with replacement, so that X_m = (x_{m_1}, . . . , x_{m_n}), where m_i ∈ {1, . . . , n} with repetitions allowed. X_m can also be thought of as a re-weighted version of X, where the weights ω_i^(m) are drawn from the set {0, 1/n, 2/n, . . . , 1}, i.e., n * ω_i^(m) is the number of times that x_i appears in the mth bootstrap sample. We denote the weighted sample as (X, ω^(m)). For each bootstrap sample, the model produces predictions G(X_m) = (G(X_m)_1, . . . , G(X_m)_P), where P is the number of prediction sites. M total bootstrap samples are used. The bagged predictor for the jth element is then

(1/M) Σ_{m=1}^{M} G(X_m)_j = (1/M) Σ_{m=1}^{M} G(X, ω^(m))_j,

or, in the case of classification, the jth element is predicted to belong to the category most frequently predicted by G(X_1)_j, . . . , G(X_M)_j.
A version of pseudocode for implementing bagging is:
1. For m ∈ {1, . . . , M},
   (a) Draw a bootstrap sample, X_m, from X.
   (b) Find predicted values G(X_m).
2. The bagging predictor is (1/M) Σ_{m=1}^{M} G(X_m).
Equivalently, the bootstrap sample can be converted to a weighted sample (X, ω^(m)), where the weights ω_i^(m) are found by taking the number of times x_i appears in the bootstrap sample and dividing by n. Thus the weights will be drawn from {0, 1/n, 2/n, . . . , 1} and will sum to 1. The bagging predictor using the weighted formulation is (1/M) Σ_{m=1}^{M} G(X, ω^(m)) for regression, or the plurality vote for classification.
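As a concrete illustration of the weighted formulation (a sketch under our own assumptions, not the authors' code), the function below draws bootstrap weights summing to one and averages the predictions of a weight-aware learner; `fit_predict` is a hypothetical stand-in for any procedure, such as a weighted classification tree, that accepts per-observation weights.

```python
import numpy as np

def batch_bagging(X, y, X_new, fit_predict, M=100, rng=None):
    """Bagging in the weighted formulation: each bootstrap sample is a weight
    vector omega^(m) with entries in {0, 1/n, 2/n, ..., 1} summing to 1.
    `fit_predict(X, y, w, X_new)` must return predictions at X_new."""
    rng = rng or np.random.default_rng()
    n = len(X)
    preds = []
    for _ in range(M):
        counts = rng.multinomial(n, np.full(n, 1.0 / n))  # bootstrap counts
        omega = counts / n                                 # weights summing to 1
        preds.append(fit_predict(X, y, omega, X_new))
    return np.mean(preds, axis=0)   # average for regression; use a
                                    # plurality vote for classification
```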
2.1 Online Bagging
Online bagging (Oza and Russell, 2001) was recently introduced as a sequential approximation to batch bagging. In batch bagging, the entire data set is collected, and then bootstrap samples are taken from the whole database. An online algorithm must process observations as they arrive, and thus each observation must be resampled a random number of times when it arrives. The algorithm proposed by Oza and Russell resamples each observation according to a Poisson random variable with mean 1, i.e., P(K_m = k) = exp(-1)/k!, where K_m is the number of resamples in "bootstrap sample" m, K_m ∈ {0, 1, . . .}. Thus as each observation arrives, it is added K_m times to X_m, and then G(X_m) is updated, and this is done for m ∈ {1, . . . , M}.
Pseudocode for online bagging is:
For i ∈ {1, . . . , n},
1. For m ∈ {1, . . . , M},
   (a) Draw a weight K_m from a Poisson(1) random variable and add K_m copies of x_i to X_m.
   (b) Find predicted values G(X_m).
2. The current bagging predictor is (1/M) Σ_{m=1}^{M} G(X_m).
Ideally, step 1(b) is accomplished with a lossless online update that incorporates the K_m new points without refitting the entire model. We note that n may not be known ahead of time, but the bagging predictor is a valid approximation at each step.
Online bagging is not guaranteed to produce the same results as batch bagging. In particular, it is easy to see that after n points have been observed, there is no guarantee that X_m will contain exactly n points, as the Poisson weights are not constrained to add up to n like a regular bootstrap sample. While it has been shown (Oza and Russell, 2001) that these samples converge asymptotically to the appropriate bootstrap samples, there may be some discrepancy in practice. Thus while it can be combined with a lossless online learning algorithm (such as for a classification tree), the bagging part of the online ensemble procedure is not lossless.
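A minimal sketch of the Poisson resampling step (an assumed interface, not Oza and Russell's implementation); the final check illustrates why the procedure is not lossless: the Poisson counts for a given "bootstrap sample" need not sum to n.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 100, 50

# Each arriving observation i gets a Poisson(1) count K_m for each of the
# M bootstrap replicates; in a real system the base learner would be
# updated online with K_m copies of x_i at this point.
counts = rng.poisson(lam=1.0, size=(M, n))

sizes = counts.sum(axis=1)          # effective size of each replicate
print(sizes.min(), sizes.max())     # typically scatters around n rather than
                                    # equalling n exactly, which is why online
                                    # bagging only approximates batch bagging
```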
2.2 Bayesian Bagging
Ordinary bagging is based on the ordinary bootstrap, which can be thought of as replacing the original weights of 1/n on each point with weights from the set {0, 1/n, 2/n, . . . , 1}, with the total of all weights summing to 1. A variation is to replace the ordinary bootstrap with the Bayesian bootstrap (Rubin, 1981). The Bayesian approach treats the vector of weights ω as unknown parameters and derives a posterior distribution for ω, and hence G(X, ω). The non-informative prior ∏_{i=1}^{n} ω_i^{-1}, when combined with the multinomial likelihood, leads to a Dirichlet_n(1, . . . , 1) distribution for the posterior distribution of ω. The full posterior distribution of G(X, ω) can be estimated by Monte Carlo methods: generate ω^(m) from a Dirichlet_n(1, . . . , 1) distribution and then calculate G(X, ω^(m)) for each sample. The average of G(X, ω^(m)) over the M samples corresponds to the Monte Carlo estimate of the posterior mean of G(X, ω) and can be viewed as a Bayesian analog of bagging (Clyde and Lee, 2001).
In practice, we may only be interested in a point estimate, rather than the full posterior distribution. In this case, the Bayesian bootstrap can be seen as a continuous version of the regular bootstrap. Thus Bayesian bagging can be achieved by generating M Bayesian bootstrap samples, and taking the average or majority vote of the G(X, ω^(m)). This is identical to regular bagging except that the weights are continuous-valued on (0, 1), instead of being restricted to the discrete set {0, 1/n, 2/n, . . . , 1}. In both cases, the weights must sum to 1. In both cases, the expected value of a particular weight is 1/n for all weights, and the expected correlation between weights is the same (Rubin, 1981). Thus Bayesian bagging will generally have the same expected point estimates as ordinary bagging. The variability of the estimate is slightly smaller under Bayesian bagging, as the variability of the weights is n/(n+1) times that of ordinary bagging. As the sample size grows large, this factor becomes arbitrarily close to one, but we do note that it is strictly less than one, so the Bayesian approach does give a further reduction in variance compared to the standard approach. In practice, for smaller data sets, we often find a significant reduction in variance, possibly because the use of continuous-valued weights leads to fewer extreme cases than discrete-valued weights.
Pseudocode for Bayesian bagging is:
1. For m ∈ {1, . . . , M},
   (a) Draw random weights ω^(m) from a Dirichlet_n(1, . . . , 1) to produce the Bayesian bootstrap sample (X, ω^(m)).
   (b) Find predicted values G(X, ω^(m)).
2. The bagging predictor is (1/M) Σ_{m=1}^{M} G(X, ω^(m)).
Use of the Bayesian bootstrap does have a major theoretical advantage, in that for some problems, bagging with the regular bootstrap is actually estimating an undefined quantity. To take a simple example, suppose one is bagging the fitted predictions for a point y from a least-squares regression problem. Technically, the full bagging estimate is (1/M_0) Σ_m ŷ_m, where m ranges over all possible bootstrap samples, M_0 is the total number of possible bootstrap samples, and ŷ_m is the predicted value from the model fit using the mth bootstrap sample. The issue is that one of the possible bootstrap samples contains the first data point replicated n times, and no other data points. For this bootstrap sample, the regression model is undefined (since at least two different points are required), and so ŷ and thus the bagging estimator are undefined. In practice, only a small sample of the possible bootstrap samples is used, so the probability of drawing a bootstrap sample with an undefined prediction is very small. Yet it is disturbing that in some problems, the bagging estimator is technically not well-defined. In contrast, the use of the Bayesian bootstrap completely avoids this problem. Since the weights are continuous-valued, the probability that any weight is exactly equal to zero is zero. Thus with probability one, all weights are strictly positive, and the Bayesian bagging estimator will be well-defined (assuming the ordinary estimator on the original data is well-defined).
We note that the Bayesian approach will only work with models that have learning algorithms
that handle weighted samples. Most standard models either have readily available
such algorithms, or their algorithms are easily modified to accept weights, so this restriction
is not much of an issue in practice.
Online Bayesian Bagging
Regular online bagging cannot be exactly equivalent to the batch version because the Poisson counts cannot be guaranteed to sum to the number of actual observations. Gamma random variables can be thought of as continuous analogs of Poisson counts, which motivates our derivation of Bayesian online bagging. The key is to recall a fact from basic probability -- a set of independent gamma random variables divided by its sum has a Dirichlet distribution, i.e., if w_i ~ Gamma(α_i, 1), then

(w_1 / Σ w_i, w_2 / Σ w_i, . . . , w_k / Σ w_i) ~ Dirichlet_k(α_1, α_2, . . . , α_k).

(See, for example, Hogg and Craig, 1995, pp. 187-188.) This relationship is a common method for generating random draws from a Dirichlet distribution, and so is also used in the implementation of batch Bayesian bagging in practice.
Thus in the online version of Bayesian bagging, as each observation arrives, it has a realization of a Gamma(1) random variable associated with it for each bootstrap sample, and the model is updated after each new weighted observation. If the implementation of the model requires weights that sum to one, then within each (Bayesian) bootstrap sample, all weights can be re-normalized with the new sum of gammas before the model is updated. At any point in time, the current predictions are those aggregated across all bootstrap samples, just as with batch bagging. If the model is fit with an ordinary lossless online algorithm, as exists for classification trees (Utgoff et al., 1997), then the entire online Bayesian bagging procedure is completely lossless relative to batch Bayesian bagging. Furthermore, since batch Bayesian bagging gives the same mean results as ordinary batch bagging, online Bayesian bagging also has the same expected results as ordinary batch bagging.
Pseudocode for online Bayesian bagging is:
For i ∈ {1, . . . , n},
1. For m ∈ {1, . . . , M},
   (a) Draw a weight ω_i^(m) from a Gamma(1, 1) random variable, associate the weight with x_i, and add x_i to X.
   (b) Find predicted values G(X, ω^(m)) (renormalizing weights if necessary).
2. The current bagging predictor is (1/M) Σ_{m=1}^{M} G(X, ω^(m)).
In step 1(b), the weights may need to be renormalized (by dividing by the sum of all
current weights) if the implementation requires weights that sum to one. We note that for
many models, such as classification trees, this renormalization is not a major issue; for a
tree, each split only depends on the relative weights of the observations at that node, so
nodes not involving the new observation will have the same ratio of weights before and
after renormalization and the rest of the tree structure will be unaffected; in practice, in
most implementations of trees (including that used in this paper), renormalization is not
necessary. We discuss the possibility of renormalization in order to be consistent with the
original presentation of the bootstrap and Bayesian bootstrap, and we note that ordinary
online bagging implicitly deals with this issue equivalently.
The computational requirements of Bayesian versus ordinary online bagging are comparable. The procedures are quite similar, with the main difference being that the fitting
algorithm must handle non-integer weights for the Bayesian version. For models such as
trees, there is no significant additional computational burden for using non-integer weights.
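The sketch below (illustrative only, with the base-learner update elided) shows the online weight bookkeeping: one Gamma(1,1) draw per observation per replicate, accumulated as the data arrive. Because the normalized gamma vector is exactly a Dirichlet_n(1, . . . , 1) draw, the weights after n observations have the same distribution as the batch Bayesian bootstrap weights, which is what makes the procedure lossless when the base learner's online update is lossless.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 500, 100

# Online accumulation: when observation i arrives, draw one Gamma(1,1)
# weight for each of the M replicates.
gamma_weights = np.empty((M, 0))
for i in range(n):
    new = rng.gamma(shape=1.0, scale=1.0, size=(M, 1))
    gamma_weights = np.hstack([gamma_weights, new])
    # ... here each replicate's weight-aware learner would be updated
    # online with observation i carrying weight new[m, 0] ...

online = gamma_weights / gamma_weights.sum(axis=1, keepdims=True)

# Batch Bayesian bootstrap weights for comparison: Dirichlet_n(1, ..., 1).
batch = rng.dirichlet(np.ones(n), size=M)

# The two weight matrices have the same distribution (mean 1/n, variance
# (n-1)/(n^2 (n+1))), so online and batch aggregation coincide in law.
print(online.mean(), batch.mean())
print(online.var(), batch.var())
```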
Examples
We demonstrate the effectiveness of online Bayesian bagging using classification trees. Our
implementation uses the lossless online tree learning algorithms (ITI) of Utgoff et al. (1997)
(available at http://www.cs.umass.edu/lrn/iti/). We compared Bayesian bagging to
a single tree, ordinary batch bagging, and ordinary online bagging, all three of which were
done using the minimum description length criterion (MDL), as implemented in the ITI
code, to determine the optimal size for each tree. To implement Bayesian bagging, the code
was modified to account for weighted observations.
We use a generalized MDL to determine the optimal tree size at each stage, replacing all
counts of observations with the sum of the weights of the observations at that node or leaf
with the same response category. Replacing the total count directly with the sum of the
weights is justified by looking at the multinomial likelihood when written as an exponential
family in canonical form; the weights enter through the dispersion parameter and it is easily
seen that the unweighted counts are replaced by the sums of the weights of the observations
that go into each count. To be more specific, a decision tree typically operates with a
multinomial likelihood,

∏_{leaves j} ∏_{classes k} p_{jk}^{n_{jk}},

where p_{jk} is the true probability that an observation in leaf j will be in class k, and n_{jk} is the count of data points in leaf j in class k. This is easily re-written as the product over all observations, ∏_{i=1}^{n} p_i, where if observation i is in leaf j and a member of class k then p_i = p_{jk}. For simplicity, we consider the case k = 2, as the generalization to larger k is straightforward. Now consider a single point, y, which takes values 0 or 1 depending on which class it is a member of. Transforming to the canonical parameterization, let θ = log(p/(1-p)), where p is the true probability that y = 1. Writing the likelihood in exponential family form gives exp{(θy + log(1/(1+exp{θ})))/a}, where a is the dispersion parameter, which would be equal to 1 for a standard data set, but would be the reciprocal of the weight for that observation in a weighted data set. Thus the likelihood for an observation y with weight w is exp{(θy + log(1/(1+exp{θ})))/(1/w)} = p^{wy} (1-p)^{w(1-y)}, and so returning to the full
multinomial, the original counts are simply replaced by the weighted counts. As MDL is a
penalized likelihood criterion, we thus use the weighted likelihood and replace each count
with a sum of weights. We note that for ordinary online bagging, using a single Poisson
weight K with our generalized MDL is exactly equivalent to including K copies of the data
point in the data set and using regular MDL.
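A quick numerical check of the identity used above (our own verification, not from the paper): with theta = log(p/(1-p)) and dispersion 1/w, the exponential family form reproduces the weighted Bernoulli likelihood p^{wy} (1-p)^{w(1-y)}.

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.uniform(0.05, 0.95, size=1000)     # class-1 probabilities
y = rng.integers(0, 2, size=1000)          # observed classes
w = rng.gamma(1.0, 1.0, size=1000)         # observation weights

theta = np.log(p / (1 - p))                # canonical parameter
expfam = np.exp((theta * y + np.log(1.0 / (1.0 + np.exp(theta)))) * w)
direct = p**(w * y) * (1 - p)**(w * (1 - y))

print(np.allclose(expfam, direct))         # True: the two forms agree
```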
Table 1 shows the data sets we used for classification problems, the number of classes in
each data set, and the sizes of their respective training and test partitions. Table 2 displays
the results of our comparison study. All of the data sets, except the final one, are available
online at http://www.ics.uci.edu/mlearn/MLRepository.html, the UCI Machine
Learning Repository. The last data set is described in Lee (2001). We compare the results
of training a single classification tree, ordinary batch bagging, online bagging, and Bayesian
online bagging (or equivalently Bayesian batch). For each of the bagging techniques, 100
bootstrap samples were used. For each data set, we repeated 1000 times the following procedure
: randomly choose a training/test partition; fit a single tree, a batch bagged tree, an
online bagged tree, and a Bayesian bagged tree; compute the misclassification error rate for
each fit. Table 2 reports the average error rate for each method on each data set, as well as
the estimated standard error of this error rate.
Data Set             Number of Classes   Size of Training Data Set   Size of Test Data Set
Breast cancer (WI)           2                    299                       400
Contraceptive                3                    800                       673
Credit (German)              2                    200                       800
Credit (Japanese)            2                    290                       400
Dermatology                  6                    166                       200
Glass                        7                    164                        50
House votes                  2                    185                       250
Ionosphere                   2                    200                       151
Iris                         3                     90                        60
Liver                        3                    145                       200
Pima diabetes                2                    200                       332
SPECT                        2                     80                       187
Wine                         3                     78                       100
Mushrooms                    2                   1000                      7124
Spam                         2                   2000                      2601
Credit (American)            2                   4000                      4508
Table 1: Sizes of the example data sets
Data Set             Single Tree     Batch Bagging   Online Bagging   Bayesian Online/Batch Bagging
Breast cancer (WI)   0.055 (.020)    0.045 (.010)    0.045 (.010)     0.041 (.009)
Contraceptive        0.522 (.019)    0.499 (.017)    0.497 (.017)     0.490 (.016)
Credit (German)      0.318 (.022)    0.295 (.017)    0.294 (.017)     0.285 (.015)
Credit (Japanese)    0.155 (.017)    0.148 (.014)    0.147 (.014)     0.145 (.014)
Dermatology          0.099 (.033)    0.049 (.017)    0.053 (.021)     0.047 (.019)
Glass                0.383 (.081)    0.357 (.072)    0.361 (.074)     0.373 (.075)
House votes          0.052 (.011)    0.049 (.011)    0.049 (.011)     0.046 (.010)
Ionosphere           0.119 (.026)    0.094 (.022)    0.099 (.022)     0.096 (.021)
Iris                 0.062 (.029)    0.057 (.026)    0.060 (.025)     0.058 (.025)
Liver                0.366 (.036)    0.333 (.032)    0.336 (.034)     0.317 (.033)
Pima diabetes        0.265 (.027)    0.250 (.020)    0.247 (.021)     0.232 (.017)
SPECT                0.205 (.029)    0.200 (.030)    0.202 (.031)     0.190 (.027)
Wine                 0.134 (.042)    0.094 (.037)    0.101 (.037)     0.085 (.034)
Mushrooms            0.004 (.003)    0.003 (.002)    0.003 (.002)     0.003 (.002)
Spam                 0.099 (.008)    0.075 (.005)    0.077 (.005)     0.077 (.005)
Credit (American)    0.350 (.007)    0.306 (.005)    0.306 (.005)     0.305 (.006)
Table 2: Comparison of average classification error rates (with standard error)
We note that in all cases, both online bagging techniques produce results similar to
ordinary batch bagging, and all bagging methods significantly improve upon the use of a
single tree. However, for smaller data sets (all but the last three), online/batch Bayesian
bagging typically both improves prediction performance and decreases prediction variability.
Discussion
Bagging is a useful ensemble learning tool, particularly when models sensitive to small
changes in the data are used. It is sometimes desirable to be able to use the data in
an online fashion. By operating in the Bayesian paradigm, we can introduce an online
algorithm that will exactly match its batch Bayesian counterpart. Unlike previous versions
of online bagging, the Bayesian approach produces a completely lossless bagging algorithm.
It can also lead to increased accuracy and decreased prediction variance for smaller data
sets.
Acknowledgments
This research was partially supported by NSF grants DMS 0233710, 9873275, and 9733013.
The authors would like to thank two anonymous referees for their helpful suggestions.
References
L. Breiman. Heuristics of instability in model selection. Technical report, University of California at Berkeley, 1994.
L. Breiman. Bagging predictors. Machine Learning, 26(2):123-140, 1996.
M. A. Clyde and H. K. H. Lee. Bagging and the Bayesian bootstrap. In T. Richardson and T. Jaakkola, editors, Artificial Intelligence and Statistics 2001, pages 169-174, 2001.
R. V. Hogg and A. T. Craig. Introduction to Mathematical Statistics. Prentice-Hall, Upper Saddle River, NJ, 5th edition, 1995.
H. K. H. Lee. Model selection for neural network classification. Journal of Classification, 18:227-243, 2001.
N. C. Oza and S. Russell. Online bagging and boosting. In T. Richardson and T. Jaakkola, editors, Artificial Intelligence and Statistics 2001, pages 105-112, 2001.
D. B. Rubin. The Bayesian bootstrap. Annals of Statistics, 9:130-134, 1981.
P. E. Utgoff, N. C. Berkman, and J. A. Clouse. Decision tree induction based on efficient tree restructuring. Machine Learning, 29:5-44, 1997.
| classification;Dirichlet Distribution;online bagging;bootstrap;Classification Tree;Bayesian Bootstrap;mean-squared prediction error;Bayesian bagging;bagging;lossless learning algorithm |
13 | A Machine Learning Based Approach for Table Detection on The Web | Table is a commonly used presentation scheme, especially for describing relational information. However, table understanding remains an open problem. In this paper, we consider the problem of table detection in web documents. Its potential applications include web mining, knowledge management, and web content summarization and delivery to narrow-bandwidth devices. We describe a machine learning based approach to classify each given table entity as either genuine or non-genuine. Various features reflecting the layout as well as content characteristics of tables are studied. In order to facilitate the training and evaluation of our table classifier, we designed a novel web document table ground truthing protocol and used it to build a large table ground truth database. The database consists of 1,393 HTML files collected from hundreds of different web sites and contains 11,477 leaf <TABLE> elements, out of which 1,740 are genuine tables. Experiments were conducted using the cross validation method and an F-measure of 95.89% was achieved. | INTRODUCTION
The increasing ubiquity of the Internet has brought about a constantly increasing amount of online publications. As a compact and efficient way to present relational information, tables are used frequently in web documents. Since tables are inherently concise as well as information rich, the automatic understanding of tables has many applications including knowledge management, information retrieval, web
mining, summarization, and content delivery to mobile devices. The processes of table understanding in web documents include table detection, functional and structural analysis and finally table interpretation [6]. In this paper, we concentrate on the problem of table detection. The web provides users with great possibilities to use their own style of communication and expressions. In particular, people use the <TABLE> tag not only for relational information display but also to create any type of multiple-column layout to facilitate easy viewing, thus the presence of the <TABLE> tag does not necessarily indicate the presence of a relational table. In this paper, we define genuine tables to be document entities where a two dimensional grid is semantically significant in conveying the logical relations among the cells [10]. Conversely, non-genuine tables are document entities where <TABLE> tags are used as a mechanism for grouping contents into clusters for easy viewing only. Figure 1 gives a few examples of genuine and non-genuine tables. While genuine tables in web documents could also be created without the use of <TABLE> tags at all, we do not consider such cases in this article as they seem very rare from our experience. Thus, in this study, table detection refers to the technique which classifies a document entity enclosed by the <TABLE></TABLE> tags as genuine or non-genuine.
Several researchers have reported their work on web table detection [2, 10, 6, 14]. In [2], Chen et al. used heuristic rules and cell similarities to identify tables. They tested their table detection algorithm on 918 tables from airline information web pages and achieved an F-measure of 86.50%. Penn et al. proposed a set of rules for identifying genuinely tabular information and news links in HTML documents [10]. They tested their algorithm on 75 web site front-pages and achieved an F-measure of 88.05%. Yoshida et al. proposed a method to integrate WWW tables according to the category of objects presented in each table [14]. Their data set contains 35,232 table tags gathered from the web. They estimated their algorithm parameters using all of the table data and then evaluated algorithm accuracy on 175 of the tables. The average F-measure reported in their paper is 82.65%. These previous methods all relied on heuristic rules and were only tested on a database that is either very small [10], or highly domain specific [2]. Hurst mentioned that a Naive Bayes classifier algorithm produced adequate results but no detailed algorithm and experimental information was provided [6].
Figure 1: Examples of genuine and non-genuine tables.
We propose a new machine learning based approach for
table detection from generic web documents. In particular, we introduce a set of novel features which reflect the layout as well as content characteristics of tables. These features are used in classifiers trained on thousands of examples. To facilitate the training and evaluation of the table classifiers, we designed a novel web document table ground truthing protocol and used it to build a large table ground truth database. The database consists of 1,393 HTML files collected from hundreds of different web sites and contains 11,477 leaf <TABLE> elements, out of which 1,740 are genuine tables. Experiments on this database using the cross validation method demonstrate significant performance improvements over previous methods.
The rest of the paper is organized as follows. We describe our feature set in Section 2, followed by a brief discussion of the classifiers we experimented with in Section 3. In Section 4, we present a novel table ground truthing protocol and explain how we built our database. Experimental results are then reported in Section 5 and we conclude with future directions in Section 6.
FEATURES FOR WEB TABLE DETECTION
Feature selection is a crucial step in any machine learning based method. In our case, we need to find a combination of features that together provide significant separation between genuine and non-genuine tables while at the same time constrain the total number of features to avoid the curse of dimensionality. Past research has clearly indicated that layout and content are two important aspects in table understanding [6]. Our features were designed to capture both of these aspects. In particular, we developed 16 features which can be categorized into three groups: seven layout features, eight content type features and one word group feature. In the first two groups, we attempt to capture the global composition of tables as well as the consistency within the whole table and across rows and columns. The last feature looks at words used in tables and is derived directly from the vector space model commonly used in Information Retrieval.
Before feature extraction, each HTML document is first parsed into a document hierarchy tree using the Java Swing XML parser with the W3C HTML 3.2 DTD [10]. A <TABLE> node is said to be a leaf table if and only if there are no <TABLE> nodes among its children [10]. Our experience indicates that almost all genuine tables are leaf tables. Thus in this study only leaf tables are considered candidates for genuine tables and are passed on to the feature extraction stage. In the following we describe each feature in detail.
2.1
Layout Features
In HTML documents, although tags like <TR> and <TD> (or <TH>) may be assumed to delimit table rows and table cells, they are not always reliable indicators of the number of rows and columns in a table. Variations can be caused by spanning cells created using <ROWSPAN> and <COLSPAN> tags. Other tags such as <BR> could be used to move content into the next row. Therefore, to extract layout features reliably one cannot simply count the number of <TR>'s and <TD>'s. For this purpose, we maintain a matrix to record all the cell spanning information and serve as a pseudo rendering of the table. Layout features based on row or column numbers are then computed from this matrix.
Given a table T, assuming its numbers of rows and columns are rn and cn respectively, we compute the following layout features:
Average number of columns, computed as the average number of cells per row:
\bar{c} = \frac{1}{rn} \sum_{i=1}^{rn} c_i
where c_i is the number of cells in row i, i = 1, ..., rn.
Standard deviation of number of columns:
dC = \sqrt{\frac{1}{rn} \sum_{i=1}^{rn} (c_i - \bar{c})^2}
Average number of rows, computed as the average number of cells per column:
\bar{r} = \frac{1}{cn} \sum_{i=1}^{cn} r_i
where r_i is the number of cells in column i, i = 1, ..., cn.
Standard deviation of number of rows:
dR = \sqrt{\frac{1}{cn} \sum_{i=1}^{cn} (r_i - \bar{r})^2}
Since the majority of tables in web documents contain characters, we compute three more layout features based on cell length in terms of number of characters:
Average overall cell length:
\bar{cl} = \frac{1}{en} \sum_{i=1}^{en} cl_i
where en is the total number of cells in a given table and cl_i is the length of cell i, i = 1, ..., en.
Standard deviation of cell length:
dCL = \sqrt{\frac{1}{en} \sum_{i=1}^{en} (cl_i - \bar{cl})^2}
Average cumulative length consistency, CLC.
The last feature is designed to measure the cell length consistency along either the row or the column direction. It is inspired by the fact that most genuine tables demonstrate certain consistency either along the row or the column direction, but usually not both, while non-genuine tables often show no consistency in either direction. First, the average cumulative within-row length consistency, CLC_r, is computed as follows. Let the set of cell lengths of the cells from row i be R_i, i = 1, ..., r (considering only non-spanning cells):
1. Compute the mean cell length, m_i, for row R_i.
2. Compute the cumulative length consistency within each R_i:
CLC_i = \sum_{cl \in R_i} LC_{cl}.
Here LC_{cl} is defined as LC_{cl} = 0.5 - D, where D = \min\{|cl - m_i|/m_i, 1.0\}. Intuitively, LC_{cl} measures the degree of consistency between cl and the mean cell length, with -0.5 indicating extreme inconsistency and 0.5 indicating extreme consistency. When most cells within R_i are consistent, the cumulative measure CLC_i is positive, indicating a more or less consistent row.
3. Take the average across all rows:
CLC_r = \frac{1}{r} \sum_{i=1}^{r} CLC_i.
After the within-row length consistency CLC_r is computed, the within-column length consistency CLC_c is computed in a similar manner. Finally, the overall cumulative length consistency is computed as CLC = max(CLC_r, CLC_c).
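To make the CLC computation concrete, the following is a minimal Python sketch, not taken from the paper: it assumes each table is represented as a rectangular matrix of cell lengths (rows of non-spanning cells), and all function names are our own.

# Sketch: cumulative length consistency (CLC) from a matrix of cell lengths.
def clc_one_direction(rows):
    consistencies = []
    for row in rows:
        if not row:
            continue
        m = sum(row) / len(row)                    # mean cell length of this row
        clc_i = 0.0
        for cl in row:
            d = min(abs(cl - m) / m, 1.0) if m > 0 else 1.0
            clc_i += 0.5 - d                       # LC_cl, in [-0.5, 0.5]
        consistencies.append(clc_i)
    return sum(consistencies) / len(consistencies) if consistencies else 0.0

def clc(cell_lengths):
    rows = cell_lengths
    cols = [list(c) for c in zip(*cell_lengths)]   # transpose; assumes rectangular matrix
    return max(clc_one_direction(rows), clc_one_direction(cols))

print(clc([[5, 6, 5], [4, 5, 6], [5, 5, 4]]))      # fairly consistent 3x3 table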
2.2
Content Type Features
Web documents are inherently multi-media and have more types of content than any traditional document. For example, the content within a <TABLE> element could include hyperlinks, images, forms, alphabetical or numerical strings, etc. Because of the relational information it needs to convey, a genuine table is more likely to contain alpha or numerical strings than, say, images. The content type feature was designed to reflect such characteristics.
We define the set of content types T = {Image, Form, Hyperlink, Alphabetical, Digit, Empty, Others}. Our content type features include:
The histogram of content type for a given table. This contributes 7 features to the feature set.
Average content type consistency, CTC.
The last feature is similar to the cell length consistency feature. First, the within-row content type consistency CTC_r is computed as follows. Let the set of cell types of the cells from row i be T_i, i = 1, ..., r (again, considering only non-spanning cells):
1. Find the dominant type, DT_i, for T_i.
2. Compute the cumulative type consistency within each row R_i, i = 1, ..., r:
CTC_i = \sum_{ct \in R_i} D,
where D = 1 if ct is equal to DT_i and D = -1 otherwise.
3. Take the average across all rows:
CTC_r = \frac{1}{r} \sum_{i=1}^{r} CTC_i.
The within-column type consistency is then computed in a similar manner. Finally, the overall cumulative type consistency is computed as CTC = max(CTC_r, CTC_c).
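A parallel sketch for the content type consistency, again an illustrative assumption rather than the paper's code: each cell is assumed to already carry one of the content-type labels above.

# Sketch: cumulative type consistency (CTC) from a matrix of type labels.
from collections import Counter

def ctc_one_direction(rows):
    totals = []
    for row in rows:
        if not row:
            continue
        dominant, _ = Counter(row).most_common(1)[0]       # dominant type DT_i
        totals.append(sum(1 if t == dominant else -1 for t in row))
    return sum(totals) / len(totals) if totals else 0.0

def ctc(cell_types):
    rows = cell_types
    cols = [list(c) for c in zip(*cell_types)]             # assumes rectangular matrix
    return max(ctc_one_direction(rows), ctc_one_direction(cols))

print(ctc([["Digit", "Digit", "Alphabetical"],
           ["Digit", "Digit", "Digit"]]))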
2.3
Word Group Feature
If we treat each table as a "mini-document" by itself, table classification can be viewed as a document categorization problem with two broad categories: genuine tables and non-genuine tables. We designed the word group feature to incorporate word content for table classification based on techniques developed in information retrieval [7, 13].
After morphing [11] and removing the infrequent words, we obtain the set of words found in the training data, W. We then construct weight vectors representing genuine and non-genuine tables and compare them against the frequency vector from each new incoming table.
Let Z represent the non-negative integer set. The following functions are defined on set W.
df_G : W -> Z, where df_G(w_i) is the number of genuine tables which include word w_i, i = 1, ..., |W|;
tf_G : W -> Z, where tf_G(w_i) is the number of times word w_i, i = 1, ..., |W|, appears in genuine tables;
df_N : W -> Z, where df_N(w_i) is the number of non-genuine tables which include word w_i, i = 1, ..., |W|;
tf_N : W -> Z, where tf_N(w_i) is the number of times word w_i, i = 1, ..., |W|, appears in non-genuine tables;
tf_T : W -> Z, where tf_T(w_i) is the number of times word w_i, w_i in W, appears in a new test table.
To simplify the notation, in the following discussion we will use df_Gi, tf_Gi, df_Ni and tf_Ni to represent df_G(w_i), tf_G(w_i), df_N(w_i) and tf_N(w_i), respectively.
Let N_G, N_N be the number of genuine tables and non-genuine tables in the training collection, respectively, and let C = max(N_G, N_N). Without loss of generality, we assume N_G != 0 and N_N != 0. For each word w_i in W, i = 1, ..., |W|, two weights, p^G_i and p^N_i, are computed:
p^G_i = tf_{Gi} \log\left(\frac{df_{Gi}}{N_G} \cdot \frac{N_N}{df_{Ni}} + 1\right) when df_{Ni} != 0, and p^G_i = tf_{Gi} \log\left(\frac{df_{Gi}}{N_G} \cdot C + 1\right) when df_{Ni} = 0;
p^N_i = tf_{Ni} \log\left(\frac{df_{Ni}}{N_N} \cdot \frac{N_G}{df_{Gi}} + 1\right) when df_{Gi} != 0, and p^N_i = tf_{Ni} \log\left(\frac{df_{Ni}}{N_N} \cdot C + 1\right) when df_{Gi} = 0.
As can be seen from the formulas, the definitions of these weights were derived from the traditional tf*idf measures used in information retrieval, with some adjustments made for the particular problem at hand.
Given a new incoming table, let us denote the set including all the words in it as W_n. Since W is constructed using thousands of tables, the words that are present in both W and W_n are only a small subset of W. Based on the vector space model, we define the similarity between the weight vectors representing genuine and non-genuine tables and the frequency vector representing the incoming table as the corresponding dot products. Since we only need to consider the words that are present in both W and W_n, we first compute the effective word set: W_e = W \cap W_n. Let the words in W_e be represented as w_{m_k}, where m_k, k = 1, ..., |W_e|, are indexes to the words from the set W = {w_1, w_2, ..., w_{|W|}}. We define the following vectors:
Weight vector representing the genuine table group:
\vec{G}_S = (p^G_{m_1} U, p^G_{m_2} U, ..., p^G_{m_{|W_e|}} U),
where U is the cosine normalization term:
U = \sqrt{\sum_{k=1}^{|W_e|} p^G_{m_k} p^G_{m_k}}.
Weight vector representing the non-genuine table group:
\vec{N}_S = (p^N_{m_1} V, p^N_{m_2} V, ..., p^N_{m_{|W_e|}} V),
where V is the cosine normalization term:
V = \sqrt{\sum_{k=1}^{|W_e|} p^N_{m_k} p^N_{m_k}}.
Frequency vector representing the new incoming table:
\vec{I}_T = (tf_{Tm_1}, tf_{Tm_2}, ..., tf_{Tm_{|W_e|}}).
Finally, the word group feature is defined as the ratio of the two dot products:
wg = \frac{\vec{I}_T \cdot \vec{G}_S}{\vec{I}_T \cdot \vec{N}_S} when \vec{I}_T \cdot \vec{N}_S != 0; wg = 1 when \vec{I}_T \cdot \vec{G}_S = 0 and \vec{I}_T \cdot \vec{N}_S = 0; wg = 10 when \vec{I}_T \cdot \vec{G}_S != 0 and \vec{I}_T \cdot \vec{N}_S = 0.
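A minimal Python sketch of the word group feature follows. It is an assumption-laden illustration: tables are represented as plain word lists, the helper names are ours, and the cosine normalization is applied by dividing by the norm, which is one consistent reading of the definitions above.

# Sketch: training the word weights and computing wg for a new table.
import math

def train_weights(genuine_tables, nongenuine_tables):
    vocab = set(w for t in genuine_tables + nongenuine_tables for w in t)
    n_g, n_n = len(genuine_tables), len(nongenuine_tables)
    c = max(n_g, n_n)
    p_g, p_n = {}, {}
    for w in vocab:
        df_g = sum(1 for t in genuine_tables if w in t)
        df_n = sum(1 for t in nongenuine_tables if w in t)
        tf_g = sum(t.count(w) for t in genuine_tables)
        tf_n = sum(t.count(w) for t in nongenuine_tables)
        p_g[w] = tf_g * math.log(df_g / n_g * (n_n / df_n if df_n else c) + 1)
        p_n[w] = tf_n * math.log(df_n / n_n * (n_g / df_g if df_g else c) + 1)
    return p_g, p_n

def word_group_feature(table_words, p_g, p_n):
    effective = [w for w in set(table_words) if w in p_g]   # W_e = W intersect W_n
    u = math.sqrt(sum(p_g[w] ** 2 for w in effective)) or 1.0  # guard empty set
    v = math.sqrt(sum(p_n[w] ** 2 for w in effective)) or 1.0
    dot_g = sum(table_words.count(w) * p_g[w] / u for w in effective)
    dot_n = sum(table_words.count(w) * p_n[w] / v for w in effective)
    if dot_n == 0:
        return 1.0 if dot_g == 0 else 10.0
    return dot_g / dot_n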
CLASSIFICATION SCHEMES
Various classification schemes have been widely used in document categorization as well as web information retrieval [13, 8]. For the table detection task, the decision tree classifier is particularly attractive as our features are highly non-homogeneous. We also experimented with Support Vector Machines (SVM), a relatively new learning approach which has achieved one of the best performances in text categorization [13].
3.1
Decision Tree
Decision tree learning is one of the most widely used and
practical methods for inductive inference. It is a method
for approximating discrete-valued functions that is robust
to noisy data.
Decision trees classify an instance by sorting it down the tree from the root to some leaf node, which provides the classification of the instance. Each node in a discrete-valued decision tree specifies a test of some attribute of the instance, and each branch descending from that node corresponds to one of the possible values for this attribute. Continuous-valued decision attributes can be incorporated by dynamically defining new discrete-valued attributes that partition the continuous attribute value into a discrete set of intervals [9]. An implementation of the continuous-valued decision tree described in [4] was used for our experiments. The decision tree is constructed using a training set of feature vectors with true class labels. At each node, a discriminant threshold is chosen such that it minimizes an impurity value. The learned discriminant function splits the training subset into two subsets and generates two child nodes. The process is repeated at each newly generated child node until a stopping condition is satisfied, and the node is declared as a terminal node based on a majority vote. The maximum impurity reduction, the maximum depth of the tree, and the minimum number of samples are used as stopping conditions.
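For readers who want a concrete starting point, the sketch below trains a comparable continuous-valued decision tree with scikit-learn as a stand-in for the implementation of [4]; the feature vectors and labels are random placeholders, and the parameter values are illustrative, not those used in the paper.

# Sketch: a CART-style decision tree on 16-dimensional feature vectors.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(1000, 16)           # placeholder feature vectors
y = np.random.randint(0, 2, 1000)      # placeholder labels: 1 = genuine

tree = DecisionTreeClassifier(
    criterion="gini",                  # impurity measure
    max_depth=12,                      # maximum-depth stopping condition
    min_samples_leaf=5,                # minimum-samples stopping condition
    min_impurity_decrease=1e-4,        # impurity-reduction stopping condition
)
tree.fit(X, y)
print(tree.predict(X[:5]))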
3.2
SVM
Support Vector Machines (SVM) are based on the Structural Risk Minimization principle from computational learning theory [12]. The idea of structural risk minimization is to find a hypothesis h for which the lowest true error is guaranteed. The true error of h is the probability that h will make an error on an unseen and randomly selected test example.
The SVM method is defined over a vector space where the goal is to find a decision surface that best separates the data points in two classes. More precisely, the decision surface by SVM for a linearly separable space is a hyperplane which can be written as
\vec{w} \cdot \vec{x} - b = 0,
where \vec{x} is an arbitrary data point and the vector \vec{w} and the constant b are learned from training data. Let D = (y_i, \vec{x}_i) denote the training set, and y_i \in \{+1, -1\} be the classification for \vec{x}_i; the SVM problem is to find \vec{w} and b that satisfy the following constraints:
\vec{w} \cdot \vec{x}_i - b \geq +1 \quad for \; y_i = +1,
\vec{w} \cdot \vec{x}_i - b \leq -1 \quad for \; y_i = -1,
while minimizing the vector 2-norm of \vec{w}.
The SVM problem in linearly separable cases can be efficiently solved using quadratic programming techniques, while the non-linearly separable cases can be solved by either introducing soft margin hyperplanes, or by mapping the original data vectors to a higher dimensional space where the data points become linearly separable [12, 3].
One reason why SVMs are very powerful is that they are very universal learners. In their basic form, SVMs learn linear threshold functions. Nevertheless, by a simple "plug-in" of an appropriate kernel function, they can be used to learn polynomial classifiers, radial basis function (RBF) networks, three-layer sigmoid neural nets, etc. [3].
For our experiments, we used the SVM-light system implemented by Thorsten Joachims (http://svmlight.joachims.org).
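As a hedged illustration only, the following sketch reproduces the two kernel settings with scikit-learn's SVC as a stand-in for SVM-light; the data are random placeholders and the hyperparameters are not those of the paper.

# Sketch: linear and RBF kernel SVMs on the 16 table features.
import numpy as np
from sklearn.svm import SVC

X = np.random.rand(1000, 16)
y = np.random.randint(0, 2, 1000)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0)
    clf.fit(X, y)
    print(kernel, clf.score(X, y))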
DATA COLLECTION AND TRUTHING
Since there is no publicly available web table ground truth database, researchers tested their algorithms on different data sets in the past [2, 10, 14]. However, their data sets either had limited manually annotated table data (e.g., 918 table tags in [2], 75 HTML pages in [10], 175 manually annotated table tags in [14]), or were collected from some specific domains (e.g., a set of tables selected from airline information pages was used in [2]). To develop our machine learning based table detection algorithm, we needed to build a general web table ground truth database of significant size.
4.1
Data Collection
Instead of working within a specific domain, our goal of data collection was to get tables of as many different varieties as possible from the web. To accomplish this, we composed a set of key words likely to indicate documents containing tables and used those key words to retrieve and download web pages using the Google search engine. Three directories on Google were searched: the business directory and news directory using the key words {table, stock, bonds, figure, schedule, weather, score, service, results, value}, and the science directory using the key words {table, results, value}. A total of 2,851 web pages were downloaded in this manner and we ground truthed 1,393 HTML pages out of these (chosen randomly among all the HTML pages). These 1,393 HTML pages from around 200 web sites comprise our database.
4.2
Ground Truthing
There has been no previous report on how to systematically generate web table ground truth data. To build a large web table ground truth database, a simple, flexible and complete ground truth protocol is required. Figure 2(a) shows the diagram of our ground truthing procedure. We created a new Document Type Definition (DTD) which is a superset of the W3C HTML 3.2 DTD. We added three attributes for the <TABLE> element, which are "tabid", "genuine table" and "table title". The possible value of the second attribute is yes or no and the value of the first and third attributes is a string. We used these three attributes to record the ground truth of each leaf <TABLE> node. The benefit of this design is that the ground truth data is inside the HTML file format. We can use exactly the same parser to process the ground truth data.
We developed a graphical user interface for web table ground truthing using the Java [1] language. Figure 2(b) is a snapshot of the interface. There are two windows. After reading an HTML file, the hierarchy of the HTML file is shown in the left window. When an item is selected in the hierarchy, the HTML source for the selected item is shown in the right window. There is a panel below the menu bar. The user can use the radio button to select either genuine table or non-genuine table. The text window is used to input the table title.
4.3
Database Description
Our final table ground truth database consists of 1,393 HTML pages collected from around 200 web sites. There are a total of 14,609 <TABLE> nodes, including 11,477 leaf <TABLE> nodes. Out of the 11,477 leaf <TABLE> nodes, 1,740 are genuine tables and 9,737 are non-genuine tables. Not every genuine table has a title; only 1,308 genuine tables have table titles. We also found that at least 253 HTML files have unmatched <TABLE>, </TABLE> pairs or wrong hierarchy, which demonstrates the noisy nature of web documents.
EXPERIMENTS
A hold-out method is used to evaluate our table classifier. We randomly divided the data set into nine parts. Each classifier was trained on eight parts and then tested on the remaining one part. This procedure was repeated nine times, each time with a different choice for the test part. Then the combined nine part results are averaged to arrive at the overall performance measures [4].
Figure 2: (a) The diagram of the ground truthing procedure. (b) A snapshot of the ground truthing software.
For the layout and content type features, this procedure is straightforward. However it is more complicated for the word group feature training. To compute wg for training samples, we need to further divide the training set into two groups, a larger one (7 parts) for the computation of the weights p^G_i and p^N_i, i = 1, ..., |W|, and a smaller one (1 part) for the computation of the vectors \vec{G}_S, \vec{N}_S, and \vec{I}_T. This partition is again rotated to compute wg for each table in the training set.
Table 1: Possible true- and detected-state combinations for two classes.
                          Assigned Class
True Class           genuine table    non-genuine table
genuine table            N_gg              N_gn
non-genuine table        N_ng              N_nn
The output of each classifier is compared with the ground truth and a contingency table is computed to indicate the number of a particular class label that are identified as members of one of two classes. The rows of the contingency table represent the true classes and the columns represent the assigned classes. The cell at row r and column c is the number of tables whose true class is r while its assigned class is c. The possible true- and detected-state combinations are shown in Table 1. Three performance measures, Recall Rate (R), Precision Rate (P) and F-measure (F), are computed as follows:
R = \frac{N_{gg}}{N_{gg} + N_{gn}}, \quad P = \frac{N_{gg}}{N_{gg} + N_{ng}}, \quad F = \frac{R + P}{2}.
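For clarity, the measures can be computed directly from the contingency counts; the numbers below are made up for illustration and do not come from the paper.

# Sketch: recall, precision and (averaged) F-measure from Table 1 counts.
def performance(n_gg, n_gn, n_ng, n_nn):
    recall = n_gg / (n_gg + n_gn)
    precision = n_gg / (n_gg + n_ng)
    f_measure = (recall + precision) / 2   # average of R and P, as defined above
    return recall, precision, f_measure

print(performance(n_gg=1640, n_gn=100, n_ng=42, n_nn=9695))  # illustrative counts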
For comparison among different features and learning algorithms, we report the performance measures when the best F-measure is achieved. First, the performance of various feature groups and their combinations were evaluated using the decision tree classifier. The results are given in Table 2.
Table 2: Experimental results using various feature groups and the decision tree classifier.
           L       T       LT      LTW
R (%)    87.24   90.80   94.20   94.25
P (%)    88.15   95.70   97.27   97.50
F (%)    87.70   93.25   95.73   95.88
L: Layout only. T: Content type only. LT: Layout and content type. LTW: Layout, content type and word group.
As seen from the table, content type features performed better than layout features as a single group, achieving an F-measure of 93.25%. However, when the two groups were combined the F-measure was improved substantially to 95.73%, reconfirming the importance of combining layout and content features in table detection. The addition of the word group feature improved the F-measure slightly more, to 95.88%.
Table 3 compares the performances of different learning algorithms using the full feature set. The learning algorithms tested include the decision tree classifier and the SVM algorithm with two different kernels: linear and radial basis function (RBF).
Table 3: Experimental results using different learning algorithms.
          Tree    SVM (linear)   SVM (RBF)
R (%)    94.25      93.91          95.98
P (%)    97.50      91.39          95.81
F (%)    95.88      92.65          95.89
As seen from the table, for this application the SVM with radial basis function kernel performed much better than the one with linear kernel. It achieved an F-measure of 95.89%, comparable to the 95.88% achieved by the decision tree classifier.
Figure 3 shows two examples of correctly classified tables, where Figure 3(a) is a genuine table and Figure 3(b) is a non-genuine table.
Figure 4 shows a few examples where our algorithm failed. Figure 4(a) was misclassified as a non-genuine table, likely because its cell lengths are highly inconsistent and it has many hyperlinks, which is unusual for genuine tables. The reason why Figure 4(b) was misclassified as non-genuine is more interesting. When we looked at its HTML source code, we found it contains only two <TR> tags. All text strings in one rectangular box are within one <TD> tag. Its author used <p> tags to put them in different rows. This points to the need for a more carefully designed pseudo-rendering process. Figure 4(c) shows a non-genuine table misclassified as genuine. A close examination reveals that it indeed has good consistency along the row direction. In fact, one could even argue that this is indeed a genuine table, with implicit row headers of Title, Name, Company Affiliation and Phone Number. This example demonstrates one of the most difficult challenges in table understanding, namely the ambiguous nature of many table instances (see [5] for a more detailed analysis). Figure 4(d) was also misclassified as a genuine table. This is a case where layout features and the kind of shallow content features we used are not enough: deeper semantic analysis would be needed in order to identify the lack of logical coherence which makes it a non-genuine table.
For comparison, we tested the previously developed rule-based system [10] on the same database. The initial results (shown in Table 4 under "Original Rule Based") were very poor. After carefully studying the results from the initial experiment we realized that most of the errors were caused by a rule imposing a hard limit on cell lengths in genuine tables. After deleting that rule the rule-based system achieved much improved results (shown in Table 4 under "Modified Rule Based"). However, the proposed machine learning based method still performs considerably better in comparison. This demonstrates that systems based on handcrafted rules tend to be brittle and do not generalize well. In this case, even after careful manual adjustment on a new database, it still does not work as well as an automatically trained classifier.
Figure 3: Examples of correctly classified tables. (a): a genuine table; (b): a non-genuine table.
Table 4: Experimental results of a previously developed rule based system.
         Original Rule Based    Modified Rule Based
R (%)         48.16                   95.80
P (%)         75.70                   79.46
F (%)         61.93                   87.63
Figure 4: Examples of misclassified tables. (a) and (b): genuine tables misclassified as non-genuine; (c) and (d): non-genuine tables misclassified as genuine.
A direct comparison to other previous results [2, 14] is not possible currently because of the lack of access to their systems. However, our test database is clearly more general and far larger than the ones used in [2] and [14], while our precision and recall rates are both higher.
CONCLUSION AND FUTURE WORK
Table detection in web documents is an interesting and challenging problem with many applications. We present a machine learning based table detection algorithm for HTML documents. Layout features, content type features and word group features were used to construct a novel feature set. Decision tree and SVM classifiers were then implemented and tested in this feature space. We also designed a novel table ground truthing protocol and used it to construct a large web table ground truth database for training and testing. Experiments on this large database yielded very promising results.
Our future work includes handling more different HTML styles in pseudo-rendering, detecting table titles of the recognized genuine tables and developing a machine learning based table interpretation algorithm. We would also like to investigate ways to incorporate deeper language analysis for both table detection and interpretation.
ACKNOWLEDGMENT
We would like to thank Kathie Shipley for her help in
collecting the web pages, and Amit Bagga for discussions on
vector space models.
REFERENCES
[1] M. Campione, K. Walrath, and A. Huml. The Java(TM) Tutorial: A Short Course on the Basics (The Java(TM) Series).
[2] H.-H. Chen, S.-C. Tsai, and J.-H. Tsai. Mining tables from large scale HTML texts. In Proc. 18th International Conference on Computational Linguistics, Saarbrucken, Germany, July 2000.
[3] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273-296, August 1995.
[4] R. Haralick and L. Shapiro. Computer and Robot Vision, volume 1. Addison Wesley, 1992.
[5] J. Hu, R. Kashi, D. Lopresti, G. Nagy, and G. Wilfong. Why table ground-truthing is hard. In Proc. 6th International Conference on Document Analysis and Recognition (ICDAR01), pages 129-133, Seattle, WA, USA, September 2001.
[6] M. Hurst. Layout and language: Challenges for table understanding on the web. In Proc. 1st International Workshop on Web Document Analysis, pages 27-30, Seattle, WA, USA, September 2001.
[7] T. Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In Proc. 14th International Conference on Machine Learning, pages 143-151, Morgan Kaufmann, 1997.
[8] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. In Information Retrieval Journal, volume 3, pages 127-163, Kluwer, 2000.
[9] T. M. Mitchell. Machine Learning. McGraw-Hill, 1997.
[10] G. Penn, J. Hu, H. Luo, and R. McDonald. Flexible web document analysis for delivery to narrow-bandwidth devices. In Proc. 6th International Conference on Document Analysis and Recognition (ICDAR01), pages 1074-1078, Seattle, WA, USA, September 2001.
[11] M. F. Porter. An algorithm for suffix stripping. Program, 14(3):130-137, 1980.
[12] V. N. Vapnik. The Nature of Statistical Learning Theory, volume 1. Springer, New York, 1995.
[13] Y. Yang and X. Liu. A re-examination of text categorization methods. In Proc. SIGIR'99, pages 42-49, Berkeley, California, USA, August 1999.
[14] M. Yoshida, K. Torisawa, and J. Tsujii. A method to integrate tables of the world wide web. In Proc. 1st International Workshop on Web Document Analysis, pages 31-34, Seattle, WA, USA, September 2001.
| Table detection;table ground truthing protocol;Layout Analysis;classifiers;word group;presentation;Information Retrieval;Algorithms;Support Vector Machine;classification schemes;Machine Learning;Table Detection;Layout;machine learning based approach;content type;Decision tree;HTML document
130 | Low Latency Photon Mapping Using Block Hashing | For hardware accelerated rendering, photon mapping is especially useful for simulating caustic lighting effects on non-Lambertian surfaces. However, an efficient hardware algorithm for the computation of the k nearest neighbours to a sample point is required. Existing algorithms are often based on recursive spatial subdivision techniques, such as kd-trees. However, hardware implementation of a tree-based algorithm would have a high latency, or would require a large cache to avoid this latency on average. We present a neighbourhood-preserving hashing algorithm that is low-latency and has sub-linear access time. This algorithm is more amenable to fine-scale parallelism than tree-based recursive spatial subdivision, and maps well onto coherent block-oriented pipelined memory access. These properties make the algorithm suitable for implementation using future programmable fragment shaders with only one stage of dependent texturing. | Introduction
Photon mapping, as described by Jensen, is a technique
for reconstructing the incoming light field at surfaces everywhere
in a scene from sparse samples generated by forward
light path tracing. In conjunction with path tracing, photon
mapping can be used to accelerate the computation of both
diffuse and specular global illumination. It is most effective for specular or glossy reflectance effects, such as caustics.
The benefits of migrating photo-realistic rendering techniques
towards a real-time, hardware-assisted implementation
are obvious. Recent work has shown that it is possible
to implement complex algorithms, such as ray-tracing,
using the programmable features of general-purpose hardware
accelerators and/or specialised hardware. We are
interested in hardware support for photon mapping: specifically
, the application of photon maps to the direct visualisation
of caustics on non-Lambertian surfaces, since diffuse
global illumination effects are probably best handled in a
real-time renderer using alternative techniques such as irradiance
.
Central to photon mapping is the search for the set of photons
nearest to the point being shaded. This is part of the interpolation
step that joins light paths propagated from light
sources with rays traced from the camera during rendering,
and it is but one application of the well-studied k nearest
neighbours (kNN) problem.
Jensen uses the kd-tree data structure to find these nearest
photons. However, solving the kNN problem via kd-trees
requires a search that traverses the tree. Even if the tree is
stored as a heap, traversal still requires random-order memory
access and memory to store a stack. More importantly,
a search-path pruning algorithm, based on the data already
examined, is required to avoid accessing all data in the tree.
This introduces serial dependencies between one memory
lookup and the next. Consequently, a hardware implementation
of a kd-tree-based kNN solution would either have high
latency, or would require a large cache to avoid such latency.
In either case a custom hardware implementation would be
required. These properties motivated us to look at alternatives
to tree search.
Since photon mapping is already making an approximation
by using kNN interpolation, we conjectured that an
approximate kNN (AkNN) solution should suffice so long
as visual quality is maintained. In this paper we investigate
a hashing-based AkNN solution in the context of high-performance hardware-based (specifically, programmable
shader-based) photon mapping. Our major contribution is
an AkNN algorithm that has bounded query time, bounded
memory usage, and high potential for fine-scale parallelism.
Moreover, our algorithm results in coherent, non-redundant
accesses to block-oriented memory. The results of one memory
lookup do not affect subsequent memory lookups, so accesses
can take place in parallel within a pipelined memory
system. Our algorithm is based on array access, and is
more compatible with current texture-mapping capabilities
than tree-based algorithms. Furthermore, any photon mapping
acceleration technique that continues to rely on a form
of kNN (such as irradiance caching) can still benefit from
our technique.
In Section 2, we first review previous work on the kNN and the approximate k-nearest neighbour (AkNN) problems. Section 3 describes the context and assumptions of our research and illustrates the basic hashing technique used in our algorithm. Sections 4 and 5 describe the details of our algorithm. Section 6 presents numerical, visual quality, and performance results. Section 7 discusses the mapping of the algorithm onto a shader-based implementation. Finally, we conclude in Section 8.
Previous Work
Jensen's book [25] covers essentially all relevant previous work leading up to photon mapping. Due to space limitations, we will refer the reader to that book and focus our literature review on previous approaches to the kNN and AkNN problems.
Any non-trivial algorithm that claims to be able to solve the kNN problem faster than brute-force does so by reducing the number of candidates that have to be examined when computing the solution set. Algorithms fall into the following categories: recursive spatial subdivision, point location, neighbourhood graphs, and hashing.
Amongst algorithms based on recursive spatial subdivision, the kd-tree [5] method is the approach commonly used to solve the kNN problem [14]. An advantage of the kd-tree is that if the tree is balanced it can be stored as a heap in a single array. While it has been shown that kd-trees have optimal expected-time complexity [6], in the worst case finding the k nearest neighbours may require an exhaustive search of the entire data structure via recursive descent. This requires a stack the same size as the depth of the tree. During the recursion, a choice is made of which subtree to search next based on a test at each internal node. This introduces a dependency between one memory access and the next and makes it hard to map this algorithm into high-latency pipelined memory accesses.
Much work has been done to find methods to optimise the kd-tree method of solving the kNN and AkNN problems. See Christensen [26], Vanco et al. [44], Havran [19], and Sample et al. [39]. Many other recursive subdivision-based techniques have also been proposed for the kNN and AkNN problems, including kd-B-trees [36], BBD-trees [4], BAR-trees [9], Principal-Axis Trees [33], the R-tree family of data structures [27], and ANN-trees [30]. Unfortunately, all schemes based on recursive search over a tree share the same memory dependency problem as the kd-tree.
The second category of techniques are based on building and searching graphs that encode sample-adjacency information. The randomised neighbourhood graph approach [3] builds and searches an approximate local neighbourhood graph. Eppstein et al. [11] investigated the fundamental properties of a nearest neighbour graph. Jaromczyk and Toussaint surveyed data structures and techniques based on Relative Neighbourhood Graphs [23]. Graph-based techniques tend to have the same difficulties as tree-based approaches: searching a graph also involves stacks or queues, dependent memory accesses, and pointer-chasing unsuited to high-latency pipelined memory access.
Voronoi diagrams can be used for optimal 1-nearest neighbour searches in 2D and 3D [10]. This and other point-location based techniques [17] for solving nearest neighbour problems do not need to calculate distances between the query point and the candidates, but do need another data structure (like a BSP tree) to test a query point for inclusion in a region.
Hashing approaches to the kNN and AkNN problems have recently been proposed by Indyk et al. [20, 21] and Gionis et al. [16]. These techniques have the useful property that multi-level dependent memory lookups are not required. The heart of these algorithms are simple hash functions that preserve spatial locality, such as the one proposed by Linial and Sasson [31], and Gionis et al. [16]. We base our technique on the latter. The authors also recognise recent work by Wald et al. [45] on real-time global illumination techniques where a hashing-based photon mapping technique was apparently used (but not described in detail).
Numerous surveys and books [1, 2, 15, 42, 43, 17] provide further information on this family of problems and data structures developed to solve them.
Context
We have developed a novel technique called Block Hashing
(BH) to solve the approximate kNN (AkNN) problem in the
context of, but not limited to, photon mapping.
Our algorithm uses hash functions to categorise photons
by their positions. Then, a kNN query proceeds by deciding
which hash bucket is matched to the query point and retrieving
the photons contained inside the hash bucket for analysis
. One attraction of the hashing approach is that evaluation
of hash functions takes constant time. In addition, once we
have the hash value, accessing data we want in the hash table
takes only a single access. These advantages permit us to
avoid operations that are serially dependent on one another,
such as those required by kd-trees, and are major stepping
stones towards a low-latency shader-based implementation.
Our technique is designed under two assumptions on the
behaviour of memory systems in (future) accelerators. First,
we assume that memory is allocated in fixed-sized blocks .
Second, we assume that access to memory is via burst transfer
of blocks that are then cached. Under this assumption,
if any part of a fixed-sized memory block is "touched", access
to the rest of this block will be virtually zero-cost. This
is typical even of software implementations on modern machines
which rely heavily on caching and burst-mode transfers
from SDRAM or RDRAM. In a hardware implementation
with a greater disparity between processing power and
memory speed, using fast block transfers and caching is even
more important. Due to these benefits, in BH all memory
used to store photon data is broken into fixed-sized blocks.
3.1. Locality-Sensitive Hashing
Since our goal is to solve the kNN problem as efficiently as
possible in a block-oriented cache-based context, our hashing
technique requires hash functions that preserve spatial
neighbourhoods. These hash functions take points that are
close to each other in the domain space and hash them close
to each other in hash space. By using such hash functions,
photons within the same hash bucket as a query point can be
assumed to be close to the query point in the original domain
space. Consequently, these photons are good candidates for
the kNN search. More than one such scheme is available; we
chose to base our technique on the Locality-Sensitive Hashing
(LSH) algorithm proposed by Gionis et al. [16], but have added several refinements (which we describe in Section 4).
The hash function in LSH groups one-dimensional real
numbers in hash space by their spatial location. It does so
by partitioning the domain space and assigning a unique
hash value to each partition. Mathematically, let T = {t_i | 0 <= i <= P} be a monotonically increasing sequence of P + 1 thresholds between 0 and 1. Assume t_0 = 0 and t_P = 1, so there are P - 1 degrees of freedom in this sequence. Define a one-dimensional locality-sensitive hash function h_T : [0, 1] -> {0, ..., P - 1} to be h_T(t) = i, where t_i <= t < t_{i+1}. In other words, the hash value i can take on P different values, one for each "bucket" defined by the threshold pair [t_i, t_{i+1}). An example is shown in Figure 1.
Figure 1: An example of h_T. The circles and boxes represent values to be hashed, while the vertical lines are the thresholds in T. The boxes lie between t_1 and t_2, thus they are given a hash value of 1.
The function h_T can be interpreted as a monotonic non-uniform quantisation of spatial position, and is characterised by P and the sequence T. It is important to note that h_T gives each partition of the domain space delineated by T equal representation in hash space. Depending on the location of the thresholds, h_T will contract some parts of the domain space and expand other parts. If we rely on only a single hash table to classify a data set, a query point will only hash to a single bucket within this table, and the bucket may represent only a subset of the true neighbourhood we sought. Therefore, multiple hash tables with different thresholds are necessary for the retrieval of a more complete neighbourhood around the query point (see Figure 2).
Figure 2: An example of multiple hash functions classifying a dataset. The long vertical line represents the query value. The union of results from multiple hash tables with different thresholds represents a more complete neighbourhood.
To deal with n-dimensional points, each hash table will have one hash function per dimension. Each hash function generates one hash value per coordinate of the point (see Figure 3). The final hash value is calculated by
\sum_{i=0}^{n-1} h_i P^i,
where h_i are the hash values and P is the number of thresholds. If P were a power of two, then this amounts to concatenating the bits. The only reason we use the same number of thresholds for each dimension is simplicity. It is conceivable that a different number of thresholds could be used for each dimension to better adapt to the data. We defer the discussion of threshold generation and query procedures until Sections 4.2 and 4.4, respectively.
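The following Python sketch illustrates the hash evaluation just described; the threshold lists and helper names are our own assumptions, not part of the paper.

# Sketch: locality-sensitive hash of a 3D point from per-dimension thresholds.
from bisect import bisect_right

def hash_1d(t, thresholds):
    # Index i such that thresholds[i] <= t < thresholds[i+1].
    i = bisect_right(thresholds, t) - 1
    return min(max(i, 0), len(thresholds) - 2)

def hash_point(point, per_dim_thresholds):
    # Combine per-dimension hash values as sum_i h_i * P**i.
    p = len(per_dim_thresholds[0]) - 1             # buckets per dimension
    key = 0
    for i, (coord, thr) in enumerate(zip(point, per_dim_thresholds)):
        key += hash_1d(coord, thr) * (p ** i)
    return key

thresholds = [[0.0, 0.3, 0.6, 1.0]] * 3            # P = 3 buckets per dimension
print(hash_point((0.35, 0.9, 0.1), thresholds))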
Figure 3: Using two hash functions to handle a 2D point. Each hash function will be used to hash one coordinate.
LSH is very similar to grid files [34]. However, the grid file
was specifically designed to handle dynamic data. Here, we
assume that the data is static during the rendering pass. Also,
the grid file is more suitable for range searches than it is for
solving the kNN problem.
3.2. Block-Oriented Memory Model
It has been our philosophy that hardware implementations of
algorithms should treat off-chip memory the same way software
implementations treat disk: as a relatively slow, "out-of-core", high-latency, block-oriented storage device. This
analogy implies that algorithms and data structures designed
to optimise for disk access are potentially applicable to hardware
design. It also drove us to employ fixed-sized blocks to
store the data involved in the kNN search algorithm, which
are photons in the context of this application.
In our prototype software implementation of BH, each photon is stored in a structure similar to Jensen's "extended" photon representation [25]. As shown in Figure 4, each component of the 3D photon location is represented by a 32-bit fixed-point number. The unit vectors representing incoming direction ^d and surface normal ^n are quantised to 16-bit values using Jensen's polar representation. Photon power is stored in four channels using sixteen-bit floating point numbers. This medium-precision signed representation permits other AkNN applications beyond that of photon mapping. Four colour channels are also included to better match the four-vectors supported in fragment shaders. For the photon mapping application specifically, our technique is compatible with the replacement of the four colour channels with a Ward RGBE colour representation [46]. Likewise, another implementation might use a different representation for the normal and incident direction unit vectors.
Figure 4: Representation of a photon record. The 32-bit values (x, y, z) denote the position of a photon and are used as keys. Two quantised 16-bit unit vectors ^d, ^n and four 16-bit floating point values are carried as data.
All photon records are stored in fixed-sized memory
blocks. BH uses a 64 32-bit-word block size, chosen to
permit efficient burst-mode transfers over a wide bus to
transaction-oriented DRAM. Using a 128-bit wide path to
DDR SDRAM, for instance, transfer of this block would
take eight cycles, not counting the overhead of command
cycles to specify the operation and the address. Using next-generation
QDR SDRAM this transfer would take only four
cycles (or eight on a 64-bit bus, etc.)
Our photon representation occupies six 32-bit words.
Since photon records are not permitted to span block boundaries
, ten photons will fit into a 64-word block with four
words left over. Some of this extra space is used in our implementation
to record how many photons are actually stored in
each block. For some variants of the data structures we describe
, this extra space could also be used for flags or pointers
to other blocks. It might be possible or desirable in other
implementations to support more or fewer colour channels,
or channels with greater or lesser precision, in which case
some of our numerical results would change.
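For illustration only, the record layout of Figure 4 can be mirrored with a NumPy structured dtype; the field names are our own, and this is a software stand-in for the hardware representation described above.

# Sketch: six-word (24-byte) photon record; ten records per 64-word block.
import numpy as np

photon_dtype = np.dtype([
    ("x", "<u4"), ("y", "<u4"), ("z", "<u4"),   # 32-bit fixed-point position
    ("dir", "<u2"), ("normal", "<u2"),          # polar-encoded unit vectors
    ("power", "<f2", (4,)),                     # four half-float channels
])

assert photon_dtype.itemsize == 24              # six 32-bit words per photon
block = np.zeros(10, dtype=photon_dtype)        # 240 of 256 bytes in a block
print(photon_dtype.itemsize, block.nbytes)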
Block Hashing
Block Hashing (BH) contains a preprocessing phase and a
query phase. The preprocessing phase consists of three steps.
After photons have been traced into the scene, the algorithm
organises the photons into fixed-sized memory blocks, creates
a set of hash tables, and inserts photon blocks into the
hash tables.
In the second phase, the hash tables will be queried for a
set of candidate photons from which the k nearest photons
will be selected for each point in space to be shaded by the
renderer.
4.1. Organizing Photons into Blocks
Due to the coherence benefits associated with block-oriented
memory access, BH starts by grouping photons and storing
them into fixed-sized memory blocks. However, these benefits
are maximised when the photons within a group are close
together spatially.
We chose to use the Hilbert curve [13] to help group photons together. The advantage of the Hilbert curve encoding of position is that points mapped near each other on the Hilbert curve are guaranteed to be within a certain distance of each other in the original domain [22]. Points nearby in the original domain space have a high probability of being nearby on the curve, although there is a non-zero probability of them being far apart on the curve. If we sort photons by their Hilbert curve order before packing them into blocks, then the photons in each block will have a high probability of being spatially coherent. Each block then corresponds to an interval of the Hilbert curve, which in turn covers some compact region of the domain (see Figure 7a). Each region of domain space
represented by the blocks is independent, and regions do not
overlap.
BH sorts photons and inserts them into a B+-tree [8] using the Hilbert curve encoding of the position of each photon as the key. This method of spatially grouping points was first proposed by Faloutsos and Rong [12] for a different purpose. Since a B+-tree stores photon records only at leaves, with a compatible construction the leaf nodes of the B+-tree can serve as the photon blocks used in the later stages of BH. One advantage of using a B+-tree for sorting is that insertion cost is bounded: the tree is always balanced, and in the worst case we may have to split h nodes in the tree, when the height of the tree is h. Also, B+-trees are optimised for block-oriented storage, as we have assumed.
The B+-tree used by BH has index and leaf nodes that are between half full and completely full. To minimise the final number of blocks required to store the photons, the leaf nodes can be compacted (see Figure 5). After the photons are sorted and compacted, the resulting photon blocks are ready to be used by BH, and the B+-tree index and any leaf nodes that are made empty by compaction are discarded. If the complete set of photons is known a priori, the compact B+-tree [37] for static data can be used instead. This data structure maintains full nodes and avoids the extra compaction step.
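The packing step can be sketched in a few lines of Python. Note the hedge: a simple bit-interleaving Morton (Z-order) key is used here as a stand-in for the true Hilbert index, which preserves the sort-and-pack logic but gives somewhat weaker locality guarantees than the Hilbert curve used in the paper.

# Sketch: sort photons along a space-filling curve and pack into blocks of 10.
def morton_key(x, y, z, bits=10):
    # Interleave the low `bits` bits of each quantised coordinate.
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def pack_into_blocks(photons, photons_per_block=10, bits=10):
    # photons: list of (x, y, z, payload) tuples with coordinates in [0, 1).
    def quantise(v):
        return min(int(v * (1 << bits)), (1 << bits) - 1)
    ordered = sorted(
        photons,
        key=lambda p: morton_key(quantise(p[0]), quantise(p[1]), quantise(p[2]), bits),
    )
    return [ordered[i:i + photons_per_block]
            for i in range(0, len(ordered), photons_per_block)]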
Figure 5: Compaction of photon blocks. (a) B+-tree after inserting all photons; many leaf nodes have empty cells. (b) All photon records are compacted in the leaf nodes.
Regardless, observe that each photon block contains a
spatially clustered set of photons disjoint from those contained
in other blocks. This is the main result we are after; any other data structures that can group photons into spatially-coherent groups, such as grid files [34], can be used in place of the B+-tree and space-filling curve.
4.2. Creating the Hash Tables
The hash tables used in BH are based on the LSH scheme described
in Section 3.1. BH generates L tables in total, serving
as parallel and complementary indices to the photon data.
Each table has three hash functions (since photons are classified
by their 3D positions), and each hash function has P + 1
thresholds.
BH employs an adaptive method that generates the thresholds
based on the photon positions. For each dimension, a
histogram of photon positions is built. Then, the histogram is
integrated to obtain a cumulative distribution function (cdf ).
Lastly, stratified samples are taken from the inverse of the cdf
to obtain threshold locations. The resulting thresholds will
be far apart where there are few photons, and close together
where photons are numerous. Ultimately this method attempts
to place a similar number of photons in each bucket.
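A small Python sketch of this adaptive threshold construction for one dimension follows; NumPy is assumed, the even quantile sampling stands in for the stratified sampling mentioned above, and all names are ours.

# Sketch: histogram -> cdf -> inverse-cdf sampling of thresholds.
import numpy as np

def adaptive_thresholds(coords, num_buckets, num_bins=256):
    # Returns num_buckets + 1 thresholds in [0, 1] for one dimension.
    hist, edges = np.histogram(coords, bins=num_bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]                                       # normalise to [0, 1]
    quantiles = np.linspace(0.0, 1.0, num_buckets + 1)   # one per threshold
    thresholds = np.interp(quantiles, np.concatenate(([0.0], cdf)), edges)
    thresholds[0], thresholds[-1] = 0.0, 1.0
    return thresholds

coords = np.clip(np.random.beta(2, 5, 10000), 0, 1)      # skewed photon density
print(adaptive_thresholds(coords, num_buckets=8))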
Hash tables are stored as a one-dimensional array structure, shown in Figure 6. The hash key selects a bucket out of the P^n available buckets in the hash table. Each bucket refers to up to B blocks of photons, and has space for a validity flag per reference, and storage for a priority value. We defer the discussion on the choice of P, L and B until Section 5.
Figure 6: Hash table bucket layout.
4.3. Inserting Photon Blocks
In BH, references to entire photon blocks, rather than individual
photons, are inserted into the hash tables. One reason
for doing so is to reduce the memory required per bucket.
Another, more important, reason is that when merging results
from multiple hash tables (Section
3.1), BH needs
to compare only block addresses instead of photons when
weeding out duplicates as each block contains a unique set of
photons. This means fewer comparisons have to be made and
the individual photons are only accessed once per query, during
post-processing of the merged candidate set to find the
k nearest photons. Consequently, the transfer of each photon
block through the memory system happens at most once per
query. All photons accessed in a block are potentially useful
contributions to the candidate set, since photons within a single
block are spatially coherent. Due to our memory model
assumptions, once we have looked at one photon in a block
it should be relatively inexpensive to look at the rest.
Figure 7: Block hashing illustrated. (a) Each block corresponds
to an interval of the Hilbert curve, which in turn covers
some compact region of the domain. Consequently, each
bucket (b) represents all photons (highlighted with squares)
in each block with at least one photon hashed into it (c).
Each bucket in a hash table corresponds to a rectangular
region in a non-uniform grid as shown in Figure 7b. Each
block is inserted into the hash tables once for each photon
within that block, using the position of these photons to create
the keys. Each bucket of the hash table refers to not only
the photons that have been hashed into that bucket, but also
all the other photons that belong to the same blocks as the
hashed photons (see Figure 7c).
Since each photon block is inserted into each hash table
multiple times, using different photons as keys, a block
may be hashed into multiple buckets in the same hash table
. Of course, a block should not be inserted into a bucket
more than once. More importantly, our technique ensures
that each block is inserted into at least one hash table. Orphaned
blocks are very undesirable since the photons within
will never be considered in the subsequent AkNN evaluation
and will cause a constant error overhead. Hence, our technique
does not naively drop a block that causes a bucket to
overflow.
However, there may be collisions that cause buckets to
overflow, especially when a large bucket load factor is chosen
to give a compact hash table size, or there exists a large
variation in photon density (which, of course, is typical in
this application). Our algorithm uses two techniques to address
this problem. The first technique attempts to insert every
block into every hash table, but in different orders on
different hash tables, such that blocks that appear earlier in
the ordering are not favoured for insertion in all tables. BH
uses a technique similar to disk-striping [38], illustrated by the
pseudocode in Figure 8. An example is given in the diagram
in the same figure.
for h from 0 to (number_of_hash_tables-1)
    for b from 0 to (number_of_blocks-1)
        idx = (h+b) modulo L
        insert block[b] into hashtable[idx]
    endfor
endfor
Figure 8: Striping insertion strategy (the diagram shows which of the L hash-table buckets each photon block is directed to in the 1st, 2nd and 3rd iterations).
The second technique involves a strategy to deal with
overflow in a bucket. For each photon block, BH keeps the
count of buckets that the block has been hashed into so far.
When a block causes overflow in a bucket, the block in the
bucket that has the maximum count will be bumped if that
count is larger than one, and larger than that of the incoming
block. This way we ensure that all blocks are inserted into
at least one bucket, given adequate hash table sizes, and no
block is hashed into an excessive number of buckets.
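To make the insertion phase concrete, the following Python sketch (ours, not from the paper) combines the striping order of Figure 8 with the overflow-bumping heuristic just described. For brevity it inserts each block once per table under a single representative key, whereas BH keys each block once per photon it contains; hash_keys and the bucket layout are therefore simplifying assumptions.

    def insert_blocks(blocks, hash_keys, L, B):
        """blocks: list of block ids; hash_keys[h][b]: bucket key of block b in table h.
        Returns L hash tables (dicts: bucket key -> list of block ids of size <= B)."""
        tables = [dict() for _ in range(L)]
        times_hashed = {b: 0 for b in blocks}      # how many buckets currently host each block

        def try_insert(table, key, b):
            bucket = table.setdefault(key, [])
            if b in bucket:                        # never insert a block into a bucket twice
                return
            if len(bucket) < B:
                bucket.append(b)
                times_hashed[b] += 1
                return
            # overflow: bump the resident block with the largest count, but only if that
            # count is larger than one and larger than the incoming block's count
            victim = max(bucket, key=lambda x: times_hashed[x])
            if times_hashed[victim] > 1 and times_hashed[victim] > times_hashed[b]:
                bucket.remove(victim)
                times_hashed[victim] -= 1
                bucket.append(b)
                times_hashed[b] += 1

        # striping order (Figure 8): table (h + b) mod L sees block b in round h
        for h in range(L):
            for i, b in enumerate(blocks):
                idx = (h + i) % L
                try_insert(tables[idx], hash_keys[idx][b], b)
        return tables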
4.4. Querying
A query into the BH data structure proceeds by delegating
the query to each of the L hash tables. These parallel accesses
will yield as candidates all photon blocks represented
by buckets that matched the query. The final approximate
nearest neighbour set comes from scanning the unified candidate
set for the nearest neighbours to the query point (see Figure 9). Note that unlike kNN algorithms based on hierarchical data structures, where candidates for the kNN set trickle in as the traversal progresses, in BH all candidates are available once the parallel queries are completed. Therefore, BH can use algorithms like selection [29] (instead of a priority queue) when selecting the k nearest photons.
Each query will retrieve one bucket from each of the L
hash tables. If the application can tolerate elevated inaccuracy
in return for increased speed of query (for example, to
pre-visualise a software rendering), it may be worthwhile to
consider using only a subset of the L candidate sets. Block
hashing is equipped with a user-specified accuracy setting:
Let A ∈ IN be an integer multiplier. The block hashing algorithm
will only consider Ak candidate photons in the final
scan to determine the k nearest photons to a query. Obviously
the smaller A is, the fewer photons will be processed
in the final step; as such, query time is significantly reduced,
but with an accuracy cost. Conversely, a higher A will lead
to a more accurate result, but it will take more time. Experimental
results that demonstrate the impact of the choice of
A will be explored in Section 6.
Figure 9: Merging the results from multiple hash tables. (a) the query point retrieves different candidate sets from different hash tables, (b) the union set of candidates after merging, and (c) the two closest neighbours selected. (Legend: query point, matched point, data point.)
There needs to be a way to select the buckets from which
the Ak candidate photons are obtained. Obviously, we want
to devise a heuristic to pick the "best" candidates. Suppose
every bucket in every hash table has a priority given by
priority = | bucket_capacity - #entries - #overflows |
where "#overflows" is the number of insertion attempts after
the bucket became full. The priority can be pre-computed
and stored in each bucket of each hash table during the insertion
phase. The priority of a bucket is smallest when the
bucket is full but never overflowed. Conversely, when the
hash bucket is underutilised or overflow has occurred, the priority will be larger. If a bucket is underutilised, it is probably too small
spatially (relative to local sample density). If it has experienced
overflow, it is likely too large spatially, and covers too
many photon block regions.
During each query, BH will sort the L buckets returned
from the hash tables by their priority values, smallest values first. Subsequently, buckets are considered in this order,
one by one, until the required Ak photons are found. In this
way the more "useful" buckets will be considered first.
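The query path can be summarised by the following sketch (ours, not from the paper): one bucket is fetched per table, buckets are visited in order of increasing stored priority, duplicate blocks are discarded, and scanning stops once Ak candidate photons have been gathered. The final exact selection of the k nearest among the candidates is done here with a sort for brevity; a linear-time selection routine [29] could be used instead.

    def bh_query(q, tables, bucket_key, block_photons, k, A, dist):
        """q: query point.  tables[h] maps a bucket key to {"prio": float, "blocks": [...]}.
        bucket_key(h, q) is the per-table hash of q, block_photons[b] the photons stored in
        block b, and dist(p, q) the distance metric."""
        buckets = [t[bucket_key(h, q)] for h, t in enumerate(tables) if bucket_key(h, q) in t]
        buckets.sort(key=lambda bkt: bkt["prio"])      # most "useful" buckets first

        candidates, seen_blocks = [], set()
        for bkt in buckets:
            for b in bkt["blocks"]:
                if b not in seen_blocks:               # duplicates removed at block level
                    seen_blocks.add(b)
                    candidates.extend(block_photons[b])
            if len(candidates) >= A * k:               # accuracy knob: stop at Ak photons
                break

        # exact k nearest among the approximate candidate set
        candidates.sort(key=lambda p: dist(p, q))
        return candidates[:k]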
5. Choice of Parameter Values
Block Hashing is a scheme that requires several parameters:
B, the bucket capacity; L, the number of hash tables whose
results are merged; and P, the number of thresholds per dimension
. We would like to determine reasonable values for
these parameters as functions of k, the number of nearest
neighbours sought, and N, the number of photons involved.
It is important to realize the implications of these parameters. The total number of 32-bit pointers to photon blocks is given by LP^3 B. Along with the number of thresholds, 3LP, this gives the memory overhead required for BH. The upper bound for this value is 6N, the number of photons multiplied by the six 32-bit words each photon takes up in our implementation. If we allow B to be a fixed constant for now, the constraint LP^3 + 3LP ≤ N arises from the reasonable assumption that we do not want to have more references to blocks than there are photons, or more memory used in the index than in the data.
Empirically, L = P = ln N has turned out to be a good choice. The value ln N remains sub-linear as N increases, and this value gives a satisfactory index memory overhead ratio: there are a total of B(ln N)^4 block references. Being four bytes each, the references require 4B(ln N)^4 bytes. Across the hash tables there need to be 3LP = 3(ln N)^2 thresholds. Represented by a 4-byte value each, the thresholds take another 12(ln N)^2 bytes. Next, assuming one photon block can hold ten photons, N photons require N/10 blocks; each block requires 64 words, so the blocks require 25.6N bytes in total. The total memory required for N photons, each occupying 6 words, is 24N bytes. This gives an overhead ratio of

(4B(ln N)^4 + 12(ln N)^2 + 25.6N - 24N) / 24N.    (1)
The choice of B is also dependent on the value of k specified
by the situation or the user. However, since it is usual
in photon mapping that k is known ahead of time, B can be
set accordingly. B should be set such that the total number of
photons retrieved from the L buckets for each query will be
larger than k. Mathematically speaking, each photon block in our algorithm has ten photons, hence 10LB ≥ k. In particular, 10LB > Ak should also be satisfied. Since we choose L = ln N, rearranging the inequality yields B > Ak/(10 ln N). For example, assuming A = 16, N = 2000000 and k = 50, then B = 6.
If we substitute B back into Equation (1), we obtain the final overhead equation

(4(Ak/10)(ln N)^3 + 12(ln N)^2 + 1.6N) / 24N.    (2)
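The parameter choices of this section can be collected into a few lines of Python (ours, not from the paper); the helper below assumes the same constants as the text (ten photons and 64 words per block, 6 words per photon, 4-byte references and thresholds) and reproduces the B = 6 example and Equation (2).

    import math

    def bh_parameters(N, k, A):
        """Return (L, P, B) following Section 5: L = P = ln N (rounded), B > Ak/(10 ln N)."""
        L = P = max(1, round(math.log(N)))
        B = max(1, math.ceil(A * k / (10.0 * math.log(N))))
        return L, P, B

    def overhead_ratio(N, k, A):
        """Memory overhead of the index relative to the 24N bytes of photon data (Eq. 2)."""
        ln = math.log(N)
        return (4 * (A * k / 10.0) * ln ** 3 + 12 * ln ** 2 + 1.6 * N) / (24.0 * N)

    # Example from the text: A = 16, N = 2,000,000, k = 50 gives B = 6.
    print(bh_parameters(2_000_000, 50, 16))
    print(round(overhead_ratio(2_000_000, 50, 16), 3))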
Figure 10 plots the number of photons versus the memory
overhead. For the usual range of photon count in a photon
mapping application, we see that the memory overhead,
while relative large for small numbers of photons, becomes
reasonable for larger numbers of photons, and has an asymptote
of about 6%. Of course, if we assumed different block
size (cache line size), these results would vary, but the analysis
is the same.
[Plot omitted: memory overhead ratio (%) versus number of photons, with curves for A = 16, A = 8 and A = 4.]
Figure 10: Plot of photon count vs. memory overhead incurred by BH, assuming k = 50.
6. Results
For BH to be a useful AkNN algorithm, it must have satisfactory
algorithmic accuracy. Moreover, in the context of
photon mapping, BH must also produce good visual accuracy
. This section will demonstrate that BH satisfies both requirements
, while providing a speed-up in terms of the time
it takes to render an image, even in a software implementation
.
To measure algorithmic accuracy, our renderer was rigged
to use both the kd-tree and BH based photon maps. For each
kNN query the result sets were compared for the following
metrics:
False negatives: # photons incorrectly excluded from the kNN set.
Maximum distance dilation: the ratio of bounding radii around the neighbours reported by the two algorithms.
Average distance dilation: the ratio of the average distances between the query point and each of the nearest neighbours reported by the two algorithms.
To gauge visual accuracy, we calculate a Caustic RMS
Error
metric, which compares the screen space radiance difference
between the caustic radiance values obtained from
kd-tree and BH.
A timing-related Time Ratio metric is calculated as a ratio
of the time taken for a query into the BH data structure
versus that for the kd-tree data structure. Obviously, as A
increases, the time required for photon mapping using BH
approaches that for a kd-tree based photon mapping.
Our first test scene, shown in Figure 11, with numerical results in Figure 12, consists of a highly specular ring placed on top of a plane with a Modified Phong [28] reflectance
model. This scene tests the ability of BH to handle a caustic
of varying density, and a caustic that has been cast onto a
non-Lambertian surface.
Figure 13 shows a second scene consisting of the Venus bust, with a highly specular ring placed around the neck of Venus. Figure 14 shows the numerical statistics of this test
scene. The ring casts a cardioid caustic onto the (non-planar)
chest of the Venus. This scene demonstrates a caustic on a
highly tessellated curved surface. Global illumination is also
used for both scenes, however the times given are only for
the query time of the caustic maps.
The general trend to notice is that for extremely low accuracy
(A) settings the visual and algorithmic performance of
BH is not very good. The candidate set is simply not large
enough in these cases. However, as A increases, these performance
indicators drop to acceptable levels very quickly,
especially between values of A between 2 and 8. After A = 8
diminishing returns set in, and the increase in accuracy incurs
a significant increase in the cost of the query time required
. This numerical error comparison is parallelled by the
Figure 11: "Ring" image. Panels: (a) kd-tree, (b) BH A=4, (c) BH A=16, (d) BH A=8.
[Plots omitted: false-average #errors, max/avg radius dilation, radiance RMS error, and time ratio, each versus accuracy setting (A).]
Figure 12: "Ring" numerical statistics
visual comparison of the images: the images rendered with A = 8 and A = 16 are virtually indistinguishable. These results
suggest that intermediate values of A, between 8 to 10,
should be used as a good compromise between query speed
and solution accuracy.
It is apparent from the query time ratio plots that there
exists a close-to-linear relationship between values of A and
time required for a query into BH. This is consistent with
the design of the A parameter; it corresponds directly to the
number of photons accessed and processed for each query.
Another important observation that can be made from the
visual comparisons is that images with greater approximation
value A look darker. This is because the density estimate
is based on the inverse square of the radius of the sphere enclosing
the k nearest neighbours. The approximate radius is
always larger than the true radius. This is an inherent problem
with any approximate solution to the kNN problem, and
indeed even with the exact kNN density estimator: as k goes
Figure 13: "Venus with Ring" images. Panels: (a) kd-tree, (b) BH A=16, (c) BH A=8, (d) BH A=4.
[Plots omitted: false-average #errors, max/avg radius dilation, radiance RMS error, and time ratio, each versus accuracy setting (A).]
Figure 14: "Venus with Ring" numerical statistics
to infinity, the kNN density estimator does converge on the
true density, but always from below.
Hardware Implementation
There are two ways to approach a hardware implementation
of an algorithm in a hardware accelerator: with a custom
hardware implementation, or as an extension or exploitation
of current hardware support. While there would be certain
advantages in a full custom hardware implementation of
BH, this would probably lead to a large chunk of hardware
with low utilisation rates. Although there are many potential
applications of AkNN search beyond photon mapping (we
list several in the conclusions), it seems more reasonable to
consider first if BH can be implemented using current hardware
and programmable shader features, and if not, what the
smallest incremental changes would be. We have concluded
that BH, while not quite implementable on today's graphics
hardware, should be implementable in the near future.
We consider only the lookup phase here, since the preprocessing
would indeed require some custom hardware support
, but support which perhaps could be shared with other
useful features. In the lookup phase, (1) we compute hash
keys, (2) look up buckets in multiple hash tables, (3) merge
and remove duplicates from the list of retrieved blocks and
optionally sorting by priority, (4) retrieve the photon records
stored in these blocks, and (5) process the photons. Steps
(1) and (5) could be performed with current shader capabilities
, although the ability to loop would be useful for the last
step to avoid replicating the code to process each photon.
Computing the hash function amounts to doing a number of
comparisons, then adding up the zero-one results. This can
be done in linear time with a relatively small number of instructions
using the proposed DX9 fragment shader instruction
set. If conditional assignment and array lookups into the
register file are supported, this could be done in logarithmic
time using binary search.
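For illustration, here is a small sketch (ours, not from the paper) of the key computation just described: per dimension, the digit is the number of thresholds not exceeding the coordinate, which can be obtained by summing zero-one comparison results or, as below, by binary search. Packing the three digits into one bucket index in base P is our assumption, not necessarily the paper's exact scheme.

    import bisect

    def hash_key(p, thresholds):
        """p = (x, y, z); thresholds[dim] is the sorted list of P thresholds that one hash
        table stores for that dimension.  The per-dimension digit is the number of thresholds
        not exceeding the coordinate (clamped to P - 1)."""
        P = len(thresholds[0])
        digits = [min(bisect.bisect_right(ts, c), P - 1) for c, ts in zip(p, thresholds)]
        return (digits[0] * P + digits[1]) * P + digits[2]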
Steps (2) and (4) amount to table lookups and can be implemented
as nearest-neighbour texture-mapping operations
with suitable encoding of the data. For instance, the hash
tables might be supported with one texture map giving the
priority and number of valid entries in the bucket, while another
texture map or set of texture maps might give the block
references, represented by texture coordinates pointing into
another set of texture maps holding the photon records.
Step (3) is difficult to do efficiently without true conditionals
and conditional looping. Sorting is not the problem,
as it could be done with conditional assignment. The problem
is that removal of a duplicate block reduces the number
of blocks in the candidate set. We would like in this case to
avoid making redundant photon lookups and executing the
instructions to process them. Without true conditionals, an
inefficient work-around is to make these redundant texture
accesses and process the redundant photons anyhow, but discard
their contribution by multiplying them by zero.
We have not yet attempted an implementation of BH on an
actual accelerator. Without looping, current accelerators do
not permit nearly enough instructions in shaders to process
k photons for adequate density estimation. However, we feel
that it might be feasible to implement our algorithm on a
next-generation shader unit using the "multiply redundant
photons by zero" approach, if looping (a constant number of
times) were supported at the fragment level.
We expect that the generation that follows DX9-class accelerators
will probably have true conditional execution and
looping, in which case implementation of BH will be both
straightforward and efficient, and will not require any additional
hardware or special shader instructions. It will also
only require two stages of conditional texture lookup, and
lookups in each stage can be performed in parallel. In comparison
, it would be nearly impossible to implement a tree-based
search algorithm on said hardware due to the lack of a
stack and the large number of dependent lookups that would
be required. With a sufficiently powerful shading unit, of
course, we could implement any algorithm we wanted, but
BH makes fewer demands than a tree-based algorithm does.
Conclusion and Future Work
We have presented an efficient, scalable, coherent and
highly parallelisable AkNN scheme suitable for the high-performance
implementation of photon mapping.
The coherent memory access patterns of BH lead to improved
performance even for a software implementation.
However, in the near future we plan to implement the lookup
phase of BH on an accelerator. Accelerator capabilities are
not quite to the point where they can support this algorithm,
but they are very close. What is missing is some form of
true run-time conditional execution and looping, as well as
greater capacity in terms of numbers of instructions. However
, unlike tree-based algorithms, block hashing requires
only bounded execution time and memory.
An accelerator-based implementation would be most interesting
if it is done in a way that permits other applications
to make use of the fast AkNN capability it would provide.
AkNN has many potential applications in graphics beyond
photon maps. For rendering, it could also be used for sparse
data interpolation (with many applications: visualisation of
sparse volume data, BRDF and irradiance volume representation
, and other sampled functions), sparse and multi-resolution
textures, procedural texture generation (specifically
, Worley's texture functions [47]), direct ray-tracing of point-based objects [40], and gap-filling in forward projection point-based rendering [48]. AkNN could also potentially be
used for non-rendering purposes: collision detection, surface
reconstruction, and physical simulation (for interacting
particle systems). Unlike the case with tree-based algorithms
, we feel that it will be feasible to implement BH as
a shader subroutine in the near future, which may make it a
key component in many potential applications of advanced
programmable graphics accelerators.
For a more detailed description of the block hashing algorithm
, please refer to the author's technical report [32].
Acknowledgements
This research was funded by grants from the National
Science and Engineering Research Council of Canada
(NSERC), the Centre for Information Technology of Ontario
(CITO), the Canadian Foundation for Innovation (CFI), the
Ontario Innovation Trust (OIT), and the Bell University Labs
initiative.
References
1. P. K. Agarwal. Range Searching. In J. E. Goodman and J. O'Rourke, editors, Handbook of Discrete and Computational Geometry. CRC Press, July 1997.
2. P. K. Agarwal and J. Erickson. Geometric range searching and its relatives. Advances in Discrete and Computational Geometry, 23:1-56, 1999.
3. S. Arya and D. M. Mount. Approximate Nearest Neighbor Queries in Fixed Dimensions. In Proc. ACM-SIAM SODA, 1993.
4. S. Arya, D. M. Mount, N. S. Netanyahu, R. Silverman, and A. Y. Wu. An Optimal Algorithm for Approximate Nearest Neighbor Searching. In Proc. ACM-SIAM SODA, pages 573-582, 1994.
5. J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9), September 1975.
6. J. L. Bentley, B. W. Weide, and A. C. Chow. Optimal Expected-Time Algorithms for Closest Point Problems. ACM TOMS, 6(4), December 1980.
7. Per H. Christensen. Faster Photon Map Global Illumination. Journal of Graphics Tools, 4(3):1-10, 1999.
8. D. Comer. The Ubiquitous B-Tree. ACM Computing Surveys, 11(2):121-137, June 1979.
9. C. A. Duncan, M. T. Goodrich, and S. G. Kobourov. Balanced aspect ratio trees: Combining the advantages of k-d trees and octrees. In Proc. ACM-SIAM SODA, volume 10, pages 300-309, 1999.
10. H. Edelsbrunner. Algorithms in Combinatorial Geometry. Springer-Verlag, 1987.
11. D. Eppstein, M. S. Paterson, and F. F. Yao. On nearest neighbor graphs. Discrete & Computational Geometry, 17(3):263-282, April 1997.
12. C. Faloutsos and Y. Rong. DOT: A Spatial Access Method Using Fractals. In Proc. 7th Int. Conf. on Data Engineering, pages 152-159, Kobe, Japan, 1991.
13. C. Faloutsos and S. Roseman. Fractals for Secondary Key Retrieval. In Proc. 8th ACM PODS, pages 247-252, Philadelphia, PA, 1989.
14. J. H. Freidman, J. L. Bentley, and R. A. Finkel. An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM TOMS, 3(3):209-226, 1977.
15. V. Gaede and O. Günther. Multidimensional access methods. ACM Computing Surveys (CSUR), 30(2):170-231, 1998.
16. A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. In Proc. VLDB, pages 518-529, 1999.
17. J. E. Goodman and J. O'Rourke, editors. Handbook of Discrete and Computational Geometry. CRC Press, July 1997. ISBN: 0849385245.
18. G. Greger, P. Shirley, P. Hubbard, and D. Greenberg. The irradiance volume. IEEE CG&A, 18(2):32-43, 1998.
19. V. Havran. Analysis of Cache Sensitive Representation for Binary Space Partitioning Trees. Informatica, 23(3):203-210, May 2000.
20. P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In Proc. ACM STOC, pages 604-613, 1998.
21. P. Indyk, R. Motwani, P. Raghavan, and S. Vempala. Locality-Preserving Hashing in Multidimensional Spaces. In Proc. ACM STOC, pages 618-625, 1997.
22. H. V. Jagadish. Linear clustering of objects with multiple attributes. In Proc. ACM SIGMOD, pages 332-342, May 1990.
23. J. W. Jaromczyk and G. T. Toussaint. Relative Neighborhood Graphs and Their Relatives. Proc. IEEE, 80(9):1502-1517, September 1992.
24. H. W. Jensen. Rendering Caustics on Non-Lambertian Surfaces. Computer Graphics Forum, 16(1):57-64, 1997. ISSN 0167-7055.
25. H. W. Jensen. Realistic Image Synthesis Using Photon Mapping. A.K. Peters, 2001.
26. H. W. Jensen, F. Suykens, and P. H. Christensen. A Practical Guide to Global Illumination using Photon Mapping. In SIGGRAPH 2001 Course Notes, number 38. ACM, August 2001.
27. J. K. P. Kuan and P. H. Lewis. Fast k Nearest Neighbour Search for R-tree Family. In Proc. of First Int. Conf. on Information, Communication and Signal Processing, pages 924-928, Singapore, 1997.
28. E. P. Lafortune and Y. D. Willems. Using the Modified Phong BRDF for Physically Based Rendering. Technical Report CW197, Department of Computer Science, K.U.Leuven, November 1994.
29. C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2001.
30. K.-I. Lin and C. Yang. The ANN-Tree: An Index for Efficient Approximate Nearest-Neighbour Search. In Conf. on Database Systems for Advanced Applications, 2001.
31. N. Linial and O. Sasson. Non-Expansive Hashing. In Proc. ACM STOC, pages 509-518, 1996.
32. V. Ma. Low Latency Photon Mapping using Block Hashing. Technical Report CS-2002-15, School of Computer Science, University of Waterloo, 2002.
33. J. McNames. A Fast Nearest-Neighbor Algorithm Based on a Principal Axis Search Tree. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(9):964-976, 2001.
34. J. Nievergelt, H. Hinterberger, and K. C. Sevcik. The Grid File: an adaptable, symmetric multikey file structure. ACM TODS, 9(1):38-71, March 1984.
35. T. J. Purcell, I. Buck, W. R. Mark, and P. Hanrahan. Ray Tracing on Programmable Graphics Hardware. To appear in Proc. SIGGRAPH, 2002.
36. J. T. Robinson. The K-D-B-tree: A Search Structure for Large Multidimensional Dynamic Indexes. In Proc. ACM SIGMOD, pages 10-18, 1981.
37. Arnold L. Rosenberg and Lawrence Snyder. Time- and space-optimality in B-trees. ACM TODS, 6(1):174-193, 1981.
38. K. Salem and H. Garcia-Molina. Disk striping. In IEEE ICDE, pages 336-342, 1986.
39. N. Sample, M. Haines, M. Arnold, and T. Purcell. Optimizing Search Strategies in kd-Trees. May 2001.
40. G. Schaufler and H. W. Jensen. Ray Tracing Point Sampled Geometry. Rendering Techniques 2000, pages 319-328, June 2000.
41. J. Schmittler, I. Wald, and P. Slusallek. SaarCOR - A Hardware Architecture for Ray Tracing. To appear in EUROGRAPHICS Graphics Hardware, 2002.
42. M. Smid. Closest-Point Problems in Computational Geometry. In J. R. Sack and J. Urrutia, editors, Handbook on Computational Geometry. Elsevier Science, Amsterdam, North Holland, 2000.
43. P. Tsaparas. Nearest neighbor search in multidimensional spaces. Qualifying Depth Oral Report 319-02, Dept. of Computer Science, University of Toronto, 1999.
44. M. Vanco, G. Brunnett, and T. Schreiber. A Hashing Strategy for Efficient k-Nearest Neighbors Computation. In Computer Graphics International, pages 120-128. IEEE, June 1999.
45. I. Wald, T. Kollig, C. Benthin, A. Keller, and P. Slusallek. Interactive global illumination. Technical report, Computer Graphics Group, Saarland University, 2002. To be published at EUROGRAPHICS Workshop on Rendering 2002.
46. G. Ward. Real Pixels. In James Arvo, editor, Graphics Gems II, pages 80-83. Academic Press, 1991.
47. Steven Worley. A cellular texture basis function. In Proc. SIGGRAPH 1996, pages 291-294. ACM Press, 1996.
48. M. Zwicker, H. Pfister, J. van Baar, and M. Gross. Surface Splatting. Proc. SIGGRAPH 2001, pages 371-378, 2001.
| photon mapping;block hashing (BH);hashing techniques;AkNN;kNN;accelerator
131 | Lower Bounds & Competitive Algorithms for Online Scheduling of Unit-Size Tasks to Related Machines | In this paper we study the problem of assigning unit-size tasks to related machines when only limited online information is provided to each task. This is a general framework whose special cases are the classical multiple-choice games for the assignment of unit-size tasks to identical machines. The latter case was the subject of intensive research for the last decade. The problem is intriguing in the sense that the natural extensions of the greedy oblivious schedulers, which are known to achieve near-optimal performance in the case of identical machines, are proved to perform quite poorly in the case of the related machines. In this work we present a rather surprising lower bound stating that any oblivious scheduler that assigns an arbitrary number of tasks to n related machines would need log n polls of machine loads per task, in order to achieve a constant competitive ratio versus the optimum offline assignment of the same input sequence to these machines . On the other hand, we prove that the missing information for an oblivious scheduler to perform almost optimally , is the amount of tasks to be inserted into the system. In particular, we provide an oblivious scheduler that only uses O(loglog n) polls, along with the additional information of the size of the input sequence, in order to achieve a constant competitive ratio vs. the optimum offline assignment . The philosophy of this scheduler is based on an interesting exploitation of the slowfit concept ([1, 5, 3]; for a survey see [6, 9, 16]) for the assignment of the tasks to the related machines despite the restrictions on the provided online information, in combination with a layered induction argument for bounding the tails of the number of tasks passing from slower to faster machines. We finally use this oblivious scheduler as the core of an adaptive scheduler that does not demand the knowledge of the input sequence and yet achieves almost the same performance. | INTRODUCTION
The problem of the Online Assignment of Tasks to Related
Machines is defined as follows: there are
n machines
possibly of different speeds, that are determined by a speed
vector c, and an input sequence σ of m independent tasks to be assigned to these machines.
The tasks arrive sequentially, along with their associated
weights (positive reals) and have to be assigned immediately
and uniquely to the machines of the system. The size
of the input sequence as well as the weights of the tasks
are determined by an oblivious adversary (denoted here by
ADV). Each task has to be assigned upon its arrival to one
of the machines, using the following information:
(possibly a portion of) the online information of current
status of the system,
the offline information of the machine speeds, and
its own weight.
The tasks are considered to be of infinite duration (permanent
tasks) and no preemption is allowed. The cost of
an online scheduler
ALG for the assignment of an input sequence σ of tasks (denoted by ALG(σ)) is the maximum load eventually appearing in the system. In case a randomized scheduler is taken into account, the cost of the scheduler is the expectation of the corresponding random variable. The quality of an online scheduler is compared vs. the optimum offline assignment of the same input sequence to the n machines. We denote the optimum offline cost for σ by ADV(σ). That is, we consider the competitive ratio
(or performance guarantee) to be the quality measure, (eg,
see [6]):
Definition 1.1. An online scheduler ALG is said to achieve a competitive ratio of parameters (a, β), if for any input sequence σ its own cost ALG(σ) and the optimum offline cost ADV(σ) are related by ALG(σ) ≤ a·ADV(σ) + β. ALG is strictly a-competitive if ∀σ, ALG(σ) ≤ a·ADV(σ).
In this work we study the consequences of providing only
some portion of the online information to a scheduler. That
is, we focus our interest on the case where each task is capable
of checking the online status only by a (small wrt n)
number d of polls from the n machines. In this case, the
objective is to determine the trade-off between the number
of polls that are available to each of the tasks and the performance
guarantee of the online scheduler, or equivalently,
to determine the minimum number of polls per task so that
a strictly constant competitive ratio is achieved.
Additionally, we consider the case of unit-size tasks that
are assigned to related machines. Thus, each task t ∈ [m] has to be assigned to a machine host(t) ∈ [n] using the following
information that is provided to it: the current loads
of d suitably chosen machines (the kind of the "suitable"
selection is one of the basic elements of a scheduler and will
be called the polling strategy from now on) and an assignment
strategy that determines the host of t among
these d selected candidates on behalf of t.
In what follows we shall consider homogeneous schedulers
, ie, schedulers that apply exactly the same protocol on
all the tasks that are inserted into the system. This choice
is justified by the fact that no task is allowed to have access
to knowledge concerning previous or forthcoming choices of
other tasks in the system, except only for the current loads
of those machines that have been chosen to be its candidate
hosts. Additionally, we shall use the terms (capacitated)
bins instead of (related) machines and (identical) balls
instead of (unit-size) tasks interchangeably, due to the profound
analogy of the problem under consideration with the
corresponding Balls & Bins problem.
1.1
Polling Strategies
The way a scheduler
ALG lets each newly inserted task
choose its d candidate hosts is called a polling strategy
(PS). We call the strategies that poll candidate machines
homogeneously for all the inserted tasks of the same size,
homogeneous polling strategies (HPS). In the present
work we consider the tasks to be indistinguishable: Each
task upon its arrival knows only the loads of the machines
that it polls, along with the speed (or equiv. capacity wrt
bins) vector c of the system.
This is why we focus our
interest in schedulers belonging to HPS. Depending on the
dependencies of the polls that are taken on behalf of a task,
we classify the polling strategies as follows:
Oblivious polling strategies (HOPS)
In this case we consider that the polling strategy on behalf
of a newly inserted task t consists of an independent (from
other tasks) choice of a d-tuple from [n]
d
according to a fixed
probability distribution f : [n]
d
[0, 1]. This probability
distribution may only depend on the common offline information
provided to each of the tasks. It should be clear
that any kind of d independent polls (with or without replacement
) on behalf of each task, falls into this family of
polling strategies. Thus the whole polling strategy is the sequence
of m d-tuples chosen independently (using the same
probability distribution f ) on behalf of the m tasks that
are to be inserted into the system. Clearly for any polling
strategy belonging to HOPS, the d random polls on behalf
of each of the m tasks could have been fixed prior to the
start of the assignments.
Adaptive polling strategies (HAPS)
In this case the i-th poll on behalf of ball t ∈ [m] is allowed
to exploit the information gained by the i - 1 previous polls
on behalf of the same ball. That is, unlike the case of HOPS
where the choice of d candidates of a task was oblivious to
the current system state, now the polling strategy is allowed
to direct the next poll to specific machines of the system
according to the outcome of the polls up to this point. In
this case all the polls on behalf of each task have to be taken
at runtime, upon the arrival of the task.
Remark:
It is commented that this kind of polling strategies
are not actually helpful in the case of identical machines,
where HOPS schedulers achieve asymptotically optimal performance
(see [18]). Nevertheless, we prove here that this
is not the case for the related machines. It will be shown
that oblivious strategies perform rather poorly in this setting
, while HAPS schedulers achieve actually asymptotically
optimal performance.
1.2
Assignment Strategies
Having chosen the d-size set of candidate hosts for task
t ∈ [m], the next thing is to assign this task to one of these
machines given their current loads and possibly exploiting
the knowledge on the way that they were selected. We call
this procedure the assignment strategy. The significant
question that arises here is the following: Given the polling
strategy adopted and the knowledge that is acquired at runtime
by the polled d-tuple on behalf of a task t ∈ [m], which
would be the optimal assignment strategy for this task so that
the eventual maximum load in the system be minimized?
In the Unit Size Tasks-Identical Machines case, when
each of the d polls is chosen iur (with replacement) from [n],
Azar et al. ([2]) show that the best assignment strategy
is the minimum load rule and requires
O(log n) polls per
task for a strictly constant competitive ratio. Consequently, Vöcking ([18]) has suggested the always go left strategy, which (in combination with a properly chosen oblivious polling strategy) only requires a total number of O(loglog n) polls in order to achieve a constant competitive ratio. In the same
work it was also shown that one should not expect much by
exploiting possible dependencies of the polls in the case of
unit-size tasks that are placed into identical machines, since
the load of the fullest machine is roughly the same as the one
achieved in the case of non-uniform but independent polls
using the always go left rule.
Nevertheless, things are quite different in the Related
Machines case: we show by our lower bound (section 3)
that even if a scheduler
ALG considers any oblivious polling
strategy and the best possible assignment strategy,
ALG
has a strict competitive ratio of at least n^{1/(2d)}/(4d - 2), where d is the number of oblivious polls per task. This implies that in the case of the related machines there is still much space for the adaptive polling strategies until the lower bound of Ω(loglog n) polls per task is matched.
1.3
Related Work
In the case of assigning unit-size tasks to identical machines
, there has been a lot of intensive research during the
last decade. If each task is capable of viewing the whole
status of the system upon its arrival (we call this the Full
Information case), then Graham's greedy algorithm assures
a competitive ratio that asymptotically tends to 2 - 1/n ([6]).
Nevertheless, when the tasks are granted only a limited number
of polls, things are much more complicated: In the case
of unit-size tasks and a single poll per task, the result of
Gonnet [10] has proved that for m n the maximum load
is (1 + o(1))
ln
n
lnln
n
when the poll of each task is chosen iur
from the n machines, whp.
1
In [15] an explicit form for the
expected maximum load is given for all combinations of n
and m. From this work it easily seen that for m n ln n, the
maximum load is
m
n
+ (
m ln n/n), which implies that by
means of competitive ratio, m n is actually the hardest
instance.
In the case of d 2 polls per task, a bunch of new techniques
had to be applied for the analysis of such schedulers.
The main tools used in the literature for this problem have
been the layered induction, the witness tree argument and
the method of fluid models (a comprehensive presentation
of these techniques may be found in the very good survey
of Mitzenmacher et al. [14]). In the seminal paper of Azar
Broder Karlin and Upfal [2] it was proved that the proposed
scheduler abku that chooses each of the polls of a task iur
from [n] and then assigns the task to the candidate machine
of minimum load, achieves a maximum load that is at most m/n + lnln n/ln d + Θ(1). This implies a strictly O(lnln n/ln d)-competitive ratio, or equivalently, at least O(ln n) polls per task would be necessary in order to achieve a strictly constant competitive ratio. In [18] the always go left algorithm was proposed, which assures a maximum load of at most m/n + lnln n/(d ln 2) + Θ(1) and thus only needs an amount of O(loglog n) polls per
constant competitive ratio. In addition it was shown that
this is the best possible that one may hope for in the case
of assigning unit-size tasks to identical machines with only
d (either oblivious or adaptive) polls per task.
The Online Assignment of Tasks to Related Machines
problem has been thoroughly studied during the past years
for the Full Information case (eg, see chapter 12 in [6]). In
particular, it has been shown that a strictly (small) constant
competitive ratio can be achieved using the slowfit-like algorithms
that are based on the idea of exploiting the least
favourable machines (this idea first appeared in [17]). The
case of Limited Information has attracted little attention
up to this point: some recent works ([12, 13, 8]) study the
case of each task having a single poll, for its assignment
to one of the (possibly related) machines when the probability
distributions of the tasks comprise a Nash Equilibrium
. For example, in [8] it was shown that in the Related
Machines case a coordination ratio (ie, the ratio between the
worst possible expected maximum load among Nash Equilibria
over the offline optimum) of
O
log
n
logloglog
n
. However,
when all the task weights are equal then it was shown by
Mavronicolas and Spirakis [13] that the coordination ratio
1
A probabilistic event A is said to hold with high probability
(whp) if for some arbitrarily chosen constant > 0,
IP[A] 1 - n
.
is
O
log
n
loglog
n
. As for the case of d > 1 in the Related
Machines problem, up to the author's knowledge this is the
first time that this problem is dealt with.
1.4
New results
In this work we show that any HOPS scheduler requires at
least O(log n/loglog n) polls in order to achieve a strictly constant
competitive ratio vs. an oblivious adversary. The key point
in this lower bound argument is the construction of a system
of d+1 groups of machines running at the same speed within
each group, while the machine speeds (comparing machines
of consecutive groups) fall by a fixed factor and on the other
hand the cumulative capacities of the groups are raised by
the same factor. Then it is intuitively clear that any HOPS
scheduler cannot keep track of the current status within each
of these d + 1 groups while having only d polls per new task,
and thus it will have to pay the cost of misplacing balls in
at least one of these groups. More specifically, we show the
following lower bound:
Theorem 1. ∀d ≥ 1, the competitive ratio of any d-hops scheduler is at least n^{1/(2d)}/(4d - 2).
Then we propose a new d-hops scheduler OBL which, if fortified with the additional knowledge of the total number of tasks to be inserted, achieves the following upper bound:
Theorem 2.
Let lnln n - lnlnln n > d ≥ 2 and suppose that the size of the input sequence is given as offline information. If OBL provides each task with (at most) 2d polls, then it has a strict competitive ratio that drops double-exponentially with d. In particular, the cost of OBL is with high probability at most

OBL(m) ≤ (1 + o(1)) · ( 8 (n/(d·2^{d+3}))^{1/(2^{d+1}-1)} + 1 ) · ADV(m).
It is commented that all the schedulers for the Identical
Machines-Limited Information case up to now used minimum
load
as the profound assignment rule. On the other
hand,
OBL
was inspired by the slowfit approaches for
the Related Machines-Full Information problem and the
fact that a greedy scheduler behaves badly in that case.
Up to the author's knowledge, the idea of using the slow-est
machine possible first appeared in [17]. Additionally, a
layered induction argument is employed for bounding the
amount of tasks that flow from the slower to the faster machines
in the system (see footnote 2). This then allows the use of relatively
simple arguments for bounding the maximum load
of the tasks that end up in a small fraction of the system
that consists of the fastest machines. Clearly this upper
bound is near-optimal (up to a multiplicative constant),
since it matches the Ω(loglog n) lower bound of the Unit
Size Tasks-Identical Machines problem ([18]) which is a
subcase of our setting.
(Footnote 2: Note that this does not imply preemption of tasks, which is not allowed in our setting, but rather the event that a task hits slower machines that are already overloaded and thus has to assign itself to a faster machine.)
Finally we propose a HAPS scheduler (ADAPT) that combines the previous HOPS scheduler with a classical guessing argument for the cost of ADV and assures a cost roughly 5 times the cost of OBL:
Theorem 3.
For any input sequence σ of identical tasks that have to be assigned to n related machines using at most 2d + 1 polls per task, the cost of ADAPT is (whp)

ADAPT(σ) < O( (n/(d·2^{d+3}))^{1/(2^{d+1}-1)} + 1 ) · ADV(σ).
A SIMPLE LOWER BOUND ON HOMOGENEOUS SINGLE-POLL GAMES
This section contains a simple argument for the claimed
lower bound on online schedulers that devote a single poll
per new task, ie, d = 1. Clearly by their nature these are
HOPS schedulers, since there is no actual option for the
assignment strategy. The proof for the lower bound of these
schedulers is rather simple but it will shed some light on
the essence of the construction for the proof of the general
lower bound that will follow in the next section.
Let's assume that there exists a HOPS scheduler that only
uses 1 poll per task and claims to be strict a-competitive
against any oblivious adversary
ADV. Initially ADV chooses
an arbitrary real number r n which will be fixed in the
end so as to maximize the lower bound on a. Let also the
variables C
total
, C
max
denote the total capacity and the maximum
possible polled capacity using one poll (ie, the maximum
bin capacity in this case) in the system. Consequently
ADV uses the following system of capacitated bins so that
these values are preserved:
C
1
=
C
max
,
C
i
=
C
max
r , i = 2, . . . ,
C
total
- C
max
C
max
r
+ 1 (
n)
Observe that the capacity of bin i ≥ 2 is r times smaller than C_max, while on the other hand, the cumulative capacity of the last n - 1 machines is (n-1)/r times larger than the capacity of the largest bin in the system. Consider also the following abbreviations of probabilities and events that may occur upon the arrival of a new ball:

E_i ≡ "bin i is hit by a ball",   P_1 ≡ IP[E_1], and we write 1 - P_1 for the complementary probability.
Obviously due to the assumption of a-competitiveness,
a ≤ C_total/C_max = (C_max + (n-1)C_max/r)/C_max = 1 + (n-1)/r,
since
ALG could choose to assign all the incoming balls to
the largest bin in the system. The question that arises is
whether there exists a 1-poll scheduler that can do better
than that. We consider the following input sequences:
|σ| = 1, w_1 = w:
ALG(σ) = IE[L_max(σ)] ≥ P_1 · w/C_max + (1 - P_1) · w/(C_max/r),
ADV(σ) = w/C_max
(by a-competitiveness) ⟹ a ≥ P_1 + r(1 - P_1) = r - P_1(r - 1) ⟹ P_1 ≥ (r - a)/(r - 1).
|σ| = ∞, ∀t ≥ 1, w_t = w: In this case the loads of all the bins will tend to their expected values, and thus
λ_{|σ|}(1) = IE[λ_{|σ|}(1)] = P_1 |σ| w / C_max,
ALG(σ) = IE[L_max(σ)] ≥ P_1 |σ| w / C_max,
ADV(σ) = |σ| w / C_total
(by a-competitiveness) ⟹ a |σ| w / C_total ≥ P_1 |σ| w / C_max ⟹ P_1 ≤ a C_max / C_total = a / ((n-1)/r + 1).
Combining the two bounds on P_1 we get:
a C_max / C_total = a / ((n-1)/r + 1) ≥ P_1 ≥ (r - a)/(r - 1)
⟹ a(r - 1) ≥ (r - a)((n - 1)/r + 1)
⟹ a(r + (n-1)/r) ≥ r + n - 1
⟹ a ≥ (r + n - 1)/(r + (n-1)/r) = (r^2 + nr - r)/(r^2 + n - 1),
which is maximized for r = √n + 1 and assures a lower bound on a of roughly √n/2.
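As a quick numeric sanity check (ours, not part of the paper), evaluating the derived bound a ≥ (r^2 + nr - r)/(r^2 + n - 1) at r = √n + 1 indeed gives values close to √n/2:

    import math

    def single_poll_bound(n: float, r: float) -> float:
        """Lower bound on the competitive ratio a for a given ratio r = C_max/C_min."""
        return (r * r + n * r - r) / (r * r + n - 1)

    for n in (10**3, 10**4, 10**6):
        r = math.sqrt(n) + 1
        print(n, single_poll_bound(n, r), math.sqrt(n) / 2)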
Remark:
It is worth noting that the lower bound completely depends on the number of bins in the system, and on the ratio r = C_max/C_min, and does not depend at all on the total capacity of the system, C_total.
THE LOWER BOUND ON MULTI-HOPS SCHEDULERS
In this section we study the behaviour of homogeneous
schedulers that adopt an oblivious polling strategy (ie, the
polling strategy is from HOPS) and an arbitrary assignment
strategy. We call these d-hops schedulers, since the choice of
the d candidates on behalf of each ball is done independently
for each ball, according to a common probability distribution
f : [n]^d → [0, 1]. Recall that the choice of the candidate
bins for each ball is oblivious to the current system state
and thus could have been fixed prior to the beginning of the
assignments.
Theorem 1. ∀d ≥ 1, the competitive ratio of any d-hops scheduler is at least n^{1/(2d)}/(4d - 2).
Proof:
Let f : [n]^d → [0, 1] be the adopted oblivious
polling strategy by an arbitrary d-hops scheduler, ALG. Assume
also that
ALG uses the best possible assignment strategy
given this polling strategy, that is, each ball chooses its
own candidate bins according to f and then it may assign
itself to an arbitrarily chosen host among its candidates, depending
on the current loads of the candidate bins. Assume
also that
ALG claims a (strict) competitive ratio a against
oblivious adversaries.
As parameters of the problem we consider again the quantities
C_total = Σ_{i=1}^{n} C_i and C_max: the total capacity of a system of n related machines and the maximum capacity that may be returned by a single poll. We shall describe an adversary ADV that initially chooses an arbitrary real number 1 < r ≤ n and then considers the system of (d + 1 groups of) n capacitated bins that is described in Table 1. Observe that this construction preserves the following two invariants when considering two successive groups of bins F_ℓ, F_{ℓ+1}, for 1 ≤ ℓ ≤ d:
F_1: 1 bin of capacity C_max; cumulative group capacity C_max.
F_2: r(r-1) bins of capacity C_max/r; cumulative group capacity (r-1)C_max.
F_3: r^3(r-1) bins of capacity C_max/r^2; cumulative group capacity r(r-1)C_max.
F_4: r^5(r-1) bins of capacity C_max/r^3; cumulative group capacity r^2(r-1)C_max.
...
F_d: r^{2d-3}(r-1) bins of capacity C_max/r^{d-1}; cumulative group capacity r^{d-2}(r-1)C_max.
F_{d+1}: n - 1 - r(r^{2d-2} - 1)/(r+1) ≈ n - r^{2d-2} bins of capacity C_max/r^d; cumulative group capacity (n/r^d - r^{d-2})C_max.
Table 1: The system of (d + 1 groups of) capacitated bins considered by ADV for the proof of the Lower Bound on d-HOPS schedulers.
(I1) when going from one group to its successor, the bin
capacities decrease by a factor of r, and
(I2) the cumulative capacity of the first ℓ + 1 groups is larger than the cumulative capacity of the first ℓ groups by a factor of r.
We shall denote by C[F] the cumulative capacity of any group of bins F ⊆ [n].
Remark:
The preservation of invariant (I2) when ℓ = d implies that
C[F_{d+1}] ≥ r^{d-1}(r-1)C_max ⟺ (n/r^d - r^{d-2})C_max ≥ r^{d-1}(r-1)C_max ⟺ n ≥ r^{2d} - r^{2d-1} + r^{2d-2}.
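As a sanity check (ours, not part of the paper), the following snippet instantiates the Table 1 construction with C_max = 1 and the smallest n allowed by the Remark, and verifies invariants (I1) and (I2) numerically:

    def table1_groups(r: int, d: int, n: int, c_max: float = 1.0):
        """Return (bins_per_group, capacity_per_bin) for groups F_1 .. F_{d+1} of Table 1."""
        counts = [1] + [r ** (2 * k - 3) * (r - 1) for k in range(2, d + 1)]
        caps = [c_max / r ** k for k in range(0, d)]
        counts.append(n - sum(counts))          # F_{d+1} gets the remaining bins
        caps.append(c_max / r ** d)
        return counts, caps

    r, d = 3, 4
    n = r ** (2 * d) - r ** (2 * d - 1) + r ** (2 * d - 2)
    counts, caps = table1_groups(r, d, n)

    cum = [c * k for c, k in zip(caps, counts)]           # cumulative capacity per group
    prefix = [sum(cum[: i + 1]) for i in range(d + 1)]    # capacity of the first i+1 groups

    # (I1): per-bin capacities drop by a factor r between consecutive groups
    assert all(abs(caps[i] / caps[i + 1] - r) < 1e-9 for i in range(d))
    # (I2): prefix capacities grow by (at least) a factor r when adding the next group
    assert all(prefix[i + 1] / prefix[i] >= r - 1e-9 for i in range(d))
    print("invariants hold for r =", r, "d =", d, "n =", n)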
We fortify
ALG by allowing a perfect balance of the bins
of a group F_ℓ whenever at least one poll on behalf of a new
ball goes to a bin of this group. This is actually in order
to capture the notion of the "perfect assignment strategy
given the polling strategy" claim stated above. Clearly this
does not cause any problem since we are looking for a lower
bound. Because
ALG could lock its d choices to the first d
groups of the system, it is obvious that its competitive ratio
a is at most a ≤ C_total / C[∪_{ℓ=1}^{d} F_ℓ] = n/r^{2d-1} + 1.
Consider now the d events E_ℓ ≡ "F_ℓ is hit by a ball" (1 ≤ ℓ ≤ d), while P_ℓ ≡ IP[E_ℓ] (call it the hitting probability of group F_ℓ) is the probability of at least one bin from F_ℓ being hit by a ball.
We shall charge
ALG according
to the hitting probabilities that its polling strategy determines
. Notice that these are fixed at the beginning of the
assignments since the polling strategy of
ALG is an oblivious
strategy. Furthermore, the following conditional hitting
probabilities are also determined uniquely by the polling
strategy of
ALG: ∀i, j ∈ [n] : i > j,
P_{i|j} ≡ IP[E_i | E_1 ∧ E_2 ∧ ... ∧ E_j],   Q_{i|j} ≡ IP[E_i | ¬E_1 ∧ ¬E_2 ∧ ... ∧ ¬E_j].
Finally, let B_ℓ(σ) (ℓ = 1, ..., d) denote the maximum number of balls that may be hosted by bins of the set ∪_{k=1}^{ℓ} F_k without violating the assumption of a-competitiveness of ALG, when the input sequence σ of tasks is chosen by ADV.
The following lemma states an inherent property of any d-hops
scheduler:
Lemma 3.1. For any ε > 1, unless ALG admits a competitive ratio a > ((ε - 1)(1 - r^{-2})/(ε² d))·r, the following property holds:
∀ 1 ≤ ℓ ≤ d,  P_{ℓ|ℓ-1} ≥ 1 - εa/r.
Proof:
We prove this lemma by considering the following
input sequences of balls of the same (arbitrarily chosen) size
w:
|| = 1: In this case we know that ADV() =
w
C
max
. The
cost (ie, the expectation of maximum load) of
ALG is:
ALG() P
1
w
C
max
+
(1
- P
1
) Q
2
|1
rw
C
max
+ (1
- Q
2
|1
)Q
3
|2
r
2
w
C
max
+
+
+ (1 - Q
2
|1
)
(1 - Q
d|d-1
) r
d
w
C
max
Due to the demand for a-competitiveness of ALG against
ADV, this then implies
a
r
1 - P
1
P
1
1 a
r
.
|| = r
2-2
, = 2, , d: In this case ADV will use
only the bins of
=1
F
and thus he will pay a cost of
ADV() =
r
2-2
w
r
-1
C
max
=
r
-1
w
C
max
. As for the cost of ALG, we
shall only charge it for the input subsequence of balls that
definitely hit groups F
1
, . . . , F
-1
(call it ^
). Our purpose
is from this sequence of tasks to determine P
|-1
, ie, the
conditional hitting probability of group F
given that all the
previous groups are hit by a ball. Clearly,
IE[
|^|] = ||
-1
=1
P
|-1
= r
2
-2
-1
=1
P
|-1
(where for symmetry of representation we let P
1
|0
= P
1
).
Recall that B
-1
() denotes the maximum number of balls
that may be assigned to the bins of the first - 1 groups,
given the claimed competitive ratio a by ALG and the input
sequence . Then we have:
wB
-1
(
)
C[
-1
=1
F
]
a ADV()
B
-1
() a r
2
-3
. Thus, there is a subsequence ~
of ^
that consists of those tasks which cannot be assigned to the
bins of the first - 1 groups due to the a-competitiveness
constraint. All these tasks have to exploit their remaining
(at most) d - + 1 polls among the bins of [n] \
-1
=1
F
.
It is clear that
ALG has no reason to spoil more than one
poll per group due to the optimal assignment strategy that
it adopts. Thus we can safely assume that there remain
exactly d - + 1 polls for the remaining groups. Obviously
IE[
|~|] IE[|^|] - B
-1
() r
2
-2
-1
=1
P
|-1
- ar
2
-3
r
2
-3
(r
-1
- a),
where for simplification of notation we use the bounding se-128
quence
-1
=
-2
1
-1
a
r
-1
= 1
-1
a
r
-1
and
0
= 1. This is true because P
1
1 a
r
1
= 1
-1
a
r
, > 1, while we assume inductively that
-1
=1
P
|-1
-1
. By showing that P
|-1
1
-1
a
r
we shall also
have assured that
=1
P
|-1
. We apply the Markov
Inequality (on the complementary random variable
||-|~|)
to find a lower bound on the size of ~
:
> 1, IP[|~| r
2
-3
(r - r + r
-1
- a)] 1 - 1.
Now it is clear that if
ALG claims a competitive ratio
a ( - 1)(1 - r
-2
)
2
d r
( - 1) 1 1
r
2-2
2
r
,
then at least one ball of will belong to ~
with probability
at least 1
1
. Thus, either
ALG has a >
(
-1)(1-r
-2
)
2
dr
, or
(by simply charging it only for this very specific ball)
ALG()
1
- 1 [P
|-1
+ (1
- P
|-1
)r] r
-1
w
C
max
which, combined with the demand for a-competitiveness and
the cost of
ADV for , implies that
P
|-1
1
- 1
a
r .
We finally try the following input sequence, in case that
ALG still claims a competitive ratio a
(
-1)(1-r
-2
)
d
2
r:
|| = : For this input sequence it is clear that
ADV() = ||w
C
total
=
||w
r
d-1
- r
d-2
+
n
r
d
C
max
.
For
ALG we again consider the subsequence ^ of balls that
definitely hit the first d - 1 groups of the system. Clearly
|^| = IE[|^|]
d-1
|| since we now consider an infinite
sequence of incoming balls. As for the upper bound on the
balls that the first d - 1 groups can host, this is again given
by the demand for a-competitiveness:
wB
d-1
()
C[
d-1
=1
F
] =
wB
d-1
()
r
d-2
C
max
a||w
r
d-1
- r
d-2
+
n
r
d
C
max
B
d-1
()
r
d-1
r
d-1
- r
d-2
+
n
r
d
a
r ||
The subsequence ~
^
that has to exploit a single poll
among the bins of [n] \
d-1
=1
F
has size at least
|~| = IE[|~|] IE[|^|] - B
d-1
()
d-1
r
d-1
r
d-1
- r
d-2
+
n
r
d
a
r
||
1
- (d - 1)
- 1
a
r r
d-1
r
d-1
- r
d-2
+
n
r
d
a
r
||
1
- d - 1
- 1
a
r
||
where for the last inequality we consider that n r
2
d-2
.
Since we consider that a
(
-1)(1-r
-2
)
2
dr
, we can be sure
that
|~| 1 1
+
1
r
2
|| and thus, the cost of ALG will
be lower bounded by the expected load of the bins in F
d
due
to the tasks of ~
:
a ADV() ALG()
P
d|d-1
|~| w
(r
d-1
- r
d-2
)
C
max
a
r
d-1
- r
d-2
+
n
r
d
P
d|d-1
1
(
-1)(1-r
-2
)
2
dr
d-1
- r
d-2
1
1
(
-1)(1-r
-2
)
(d-1)
1 +
n
r
2d-2
(
r-1)
1
- 1
a
r
1 - 1 - r
-2
d - 1
which is not possible for any > 1 and n r
2
d
. Thus we
conclude that
ALG cannot avoid a competitive ratio
a min
- 1
(d - 1) r,
n
r
2
d-1
+ 1
for any > 1 and n r
2
d
, which for = 2 and n = r
2
d
gives the desired bound.
DEALING WITH INPUT SEQUENCES OF KNOWN TOTAL SIZE
In this section we prove that the missing information for
an oblivious scheduler to perform efficiently is the size of
the input sequence. More specifically, considering that the
input size is provided as offline information to each of the
newly inserted tasks, we construct an oblivious scheduler
that exploits this information along with a slowfit assignment
rule and a layered induction argument for the flow of
balls from slower to faster bins, in order to achieve a strictly
constant competitive ratio with only
O(loglog n) polls per
task.
Assume that m unit-size balls are thrown into a system
of n capacitated bins with capacities C_max = C_n ≥ C_{n-1} ≥ ... ≥ C_1 = C_min. Assume also that each ball is allowed to poll up to 2d bins and then it has to assign itself to one of these candidates. We additionally assume that C_max ≤ n/2^{d+1}.
As it will become clear later by the analysis, if this was not
the case then it could only be in favour of the oblivious
scheduler that we propose, because this would allow the absorption
of the large additive constants in the performance
guarantee of the scheduler.
We consider (wlog) that the capacity vector c is normalized by n/||c||_1 so that Σ_{i=1}^{n} C_i = n. We also assume that the total size m of the input sequence is given to every newly inserted ball.
This implies that each ball can
estimate the cost
ADV(m) opt (ie, the optimum offline
assignment of the m unit-size balls to the n capacitated
bins), and thus it can know a priori the subset of bins
that may have been used by
ADV during the whole process
.
3
Having this in mind, we can assume that every bin
in the system is legitimate, that is, it might have been
used by the optimum solution, otherwise we could have each
ball ignore all the illegitimate bins in the system. Thus,
opt ≥ max{1/C_min, m/C_total} = max{1/C_min, m/n}. Finally, we assume
that each of the legitimate bins of the system gets at
3
A bin i [n] may have been used by ADV if and only if
1/C
i
opt.
129
least one ball in the optimum offline schedule. This does not
affect the performance of
ADV, while it may only deteriorate
the performance of an online scheduler. Nevertheless, it
assures that
m
n
opt
m
n
+ 1, meaning that the fractional
load on the bins is actually a good estimation of opt.
Let the load of bin i ∈ [n] at time t (that is, right after the assignment of the t-th ball of the input sequence) be denoted as λ_t(i) ≡ q_t(i)/C_i, where q_t(i) is the number of balls assigned to bin i up to that time. The following definition refers to
the notion of saturated bins in the system, ie, overloaded
bins wrt the designed performance guarantee of an oblivious
scheduler:
Definition 4.1. A bin i ∈ [n] is called saturated upon the arrival of a new ball t ≤ m, if and only if it has λ_{t-1}(i) > a·opt, where a·opt is called the designed performance guarantee of the oblivious scheduler.
Let, ∀r ∈ [d], i_r ≡ min{ i ∈ [n] : Σ_{j=1}^{i} C_j ≥ Σ_{ρ=1}^{r} n/2^ρ }. Then we consider the following partition of the set of bins [n] into d + 1 groups of (roughly geometrically) decreasing cumulative capacities:

F_1 ≡ {1, . . . , i_1},   F_r ≡ {i_{r−1} + 1, . . . , i_r}, r = 2, . . . , d,   F_{d+1} ≡ {i_d + 1, . . . , n}.

Although the cumulative capacity of group F_r may vary from n/2^r − C_max to n/2^r + C_max, for ease of the following computations we assume that asymptotically C[F_r] ≈ n/2^r for all r ∈ [d], and C[F_{d+1}] ≈ n/2^d. We denote by C_min^{(ζ)} the capacity of the smallest bin in F_ζ, ∀ζ ∈ [d + 1].
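As an illustration (not part of the original paper), the following Python sketch computes the group boundaries i_r for a given capacity vector, following the reconstructed definition above; the function name and data representation are assumptions made for this example.

```python
# Sketch, under assumptions: capacities are sorted in non-decreasing order and
# normalized so that sum(capacities) == n, as in the text.  Groups F_1..F_d are
# cut at i_r = min{ i : sum_{j<=i} C_j >= sum_{rho<=r} n / 2^rho }; F_{d+1} is the rest.

def partition_into_groups(capacities, d):
    n = len(capacities)
    boundaries = []                      # boundaries[r-1] corresponds to i_r
    prefix, target, i = 0.0, 0.0, 0
    for r in range(1, d + 1):
        target += n / 2 ** r             # cumulative target sum_{rho<=r} n / 2^rho
        while i < n and prefix < target:
            prefix += capacities[i]
            i += 1
        boundaries.append(i)             # first index whose prefix sum reaches the target
    groups, start = [], 0
    for b in boundaries:                 # groups F_1 .. F_d
        groups.append(list(range(start, b)))
        start = b
    groups.append(list(range(start, n))) # group F_{d+1}
    return groups
```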
We now consider the following ideal scheduler that uses an oblivious polling strategy and an assignment strategy based on the slowfit rule. This scheduler (we call it OBL) initially discards all the illegitimate bins in the system, using the knowledge of m. Then it first normalizes the capacity vector of the remaining bins, and afterwards it considers the grouping mentioned above and adopts the following pair of strategies:

POLLING: ∀1 ≤ r ≤ d, group F_r gets exactly 1 poll, which is chosen among the bins of the group proportionally to the bin capacities. That is, ∀r ∈ [d], ∀i ∈ F_r,

IP[bin i ∈ F_r is a candidate host of a ball] = C_i / C[F_r].

The remaining d polls are assigned to the bins of group F_{d+1}, either to the d fastest bins, or according to the polling strategy of always go left, depending only on the parameters (c, d) of the problem instance.^4

4 Observe that this is offline information and thus this decision can be made at the beginning of the assignment process, for all the balls of the input sequence.

ASSIGNMENT: Upon the arrival of a new ball t ∈ [m], the smallest polled bin from ∪_{ζ=1}^{d} F_ζ (starting from F_1, then F_2, and so on) that is unsaturated gets this ball (slowfit rule). In case all the first d polls of a ball are already saturated, this ball has to be assigned to a bin of F_{d+1} using its remaining d candidates. Within group F_{d+1}, either the minimum post load rule (ie, the bin of minimum load among the d choices from F_{d+1} is chosen, taking into account also the additional load of the new ball) or the slowfit rule is applied, depending on the offline parameters (c, d) of the problem instance. If all the 2d polled bins are saturated, then the minimum post load rule is applied among them. Ties are always broken in favour of smaller bins (ie, slower machines).
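The polling/assignment pair can be sketched in code. The Python sketch below is only an illustration of OBL as described above, not the paper's implementation: it shows the capacity-proportional poll in each of F_1, . . . , F_d, the d polls in F_{d+1} (only the d-fastest-bins variant), and the slowfit assignment with a simplified minimum-post-load fallback. All identifiers are invented for this example, and the saturation threshold is passed in as the designed performance guarantee θ(a) = a · opt.

```python
import random

def poll(groups, capacities, d):
    """One capacity-proportional poll per group F_1..F_d, plus the d fastest bins of F_{d+1}."""
    polls = []
    for r in range(d):
        group = groups[r]
        weights = [capacities[i] for i in group]
        polls.append(random.choices(group, weights=weights, k=1)[0])
    last = groups[d]
    polls.extend(sorted(last, key=lambda i: capacities[i], reverse=True)[:d])
    return polls

def assign(polls, loads, capacities, guarantee, d):
    """Slowfit over the first d polls; otherwise (simplified) minimum post load in F_{d+1}."""
    for i in polls[:d]:                                   # polls are ordered F_1, F_2, ...
        if loads[i] / capacities[i] <= guarantee:         # unsaturated bin found
            return i
    # all first d polls saturated: minimum post load among the remaining candidates,
    # ties broken in favour of smaller (slower) bins
    return min(polls[d:], key=lambda i: ((loads[i] + 1) / capacities[i], capacities[i]))
```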
The following theorem gives the performance of OBL when the additional information of the input size is also provided offline:

Theorem 2. For ln ln n − ln ln ln n > d ≥ 2, when the size of the input sequence m ≥ n is given as offline information, OBL has a strict competitive ratio that drops double-exponentially with d. In particular, the cost of OBL is (whp) at most

OBL(m) ≤ (1 + o(1)) [ 8 (n/(d 2^{d+3}))^{1/(2^{d+1}−1)} + 1 ] ADV(m).
Proof: Let τ + 1 ≤ m be the first ball in the system that hits only saturated bins with its 2d polls. Our purpose is to determine the value of a in the designed performance guarantee θ(a) so that the probability of ball τ + 1 existing is polynomially small. As stated before, the technique that we shall employ is a layered induction argument on the number of balls that are passed to the right of group F_r, r ∈ [d]. For the assignment of the balls that end up in group F_{d+1} we use a slightly modified version of the always-go-left scheduler of [18], which gives an upper bound on the maximum load in group F_{d+1} (we denote this by L_max[F_{d+1}]). This upper bound on L_max[F_{d+1}] holds with high probability. This assignment is only used when it produces a smaller maximum load than the brute assignment of all the balls ending up in F_{d+1} to the d fastest bins of the group.

We shall consider a notion of time that corresponds to the assignments of newly arrived balls into the system: at time t ≤ m, the t-th ball of the input sequence is thrown into the system and has to be immediately assigned to one of its 2d candidates.
Consider the polls on behalf of a ball to be ordered according to the groups from which they are taken. Observe then that each ball t ≤ τ is assigned to the first unsaturated bin that it hits from the first d groups, or to a bin in group F_{d+1}. Thus, ∀ζ ∈ [d + 1], each ball that has been assigned to group F_ζ up to (and including) ball τ has definitely failed to hit an unsaturated bin in all the groups F_1, . . . , F_{ζ−1}. For any ball t ≤ m and ζ ∈ [d], let Q_t(ζ) denote the number of balls that have been assigned to group F_ζ up to time t (ie, right after the assignment of the t-th ball), while Q̃_t(ζ) denotes the number of balls that have been assigned to the right of group F_ζ, that is, to bins of [n] \ ∪_{ρ=1}^{ζ} F_ρ. Thus, Q̃_t(ζ) = Σ_{ρ=ζ+1}^{d+1} Q_t(ρ). Let also S_t(ζ) denote the set of saturated bins in F_ζ at time t. Then, ∀ζ ∈ [d], ∀t ≤ τ,

IP[t hits a saturated bin in F_ζ] = C[S_{t−1}(ζ)] / C[F_ζ] ≤ C[S_τ(ζ)] / C[F_ζ].

Observe now that ∀ζ ∈ [d],

Q_τ(ζ) = Σ_{i∈F_ζ} q_τ(i) ≥ Σ_{i∈S_τ(ζ)} C_i (q_τ(i)/C_i) > C[S_τ(ζ)] θ(a)
⟹ C[S_τ(ζ)] / C[F_ζ] < Q_τ(ζ) / (θ(a) C[F_ζ]) ≡ P_ζ.   (1)
Recall that up to time τ we can be sure that Q_τ(ζ) ≤ θ(a) C[F_ζ] (because all these balls are assigned to unsaturated bins), which in turn assures that P_ζ ≤ 1. Inequality (1) implies that, had we known Q̃_τ(ζ − 1), then the number Q̃_τ(ζ) of balls before ball τ + 1 that go to the right of set F_ζ would be stochastically dominated by the number of successes in Q̃_τ(ζ − 1) Bernoulli trials with success probability P_ζ. We shall denote this number by B(Q̃_τ(ζ − 1), P_ζ).^5 In the following lemma, we apply the Chernoff-Hoeffding bound on these Bernoulli trials to get an upper bound on the number of balls that have to be assigned beyond group F_ζ, for ζ ∈ [d], as a function of the number of balls that have been assigned beyond group F_{ζ−1}:
Lemma 4.1. ∀ζ ∈ [d] and for an arbitrary constant β > 1, with probability at least 1 − n^{−β},

Q̃_τ(ζ) ≤ max{ 2^{ζ+1} [Q̃_τ(ζ − 1)]^2 / (θ(a) n), √(2β ln n · Q̃_τ(ζ − 1)) }.
Proof: Let's assume that we already know the number Q̃_τ(ζ − 1) of balls that have already failed in F_1, . . . , F_{ζ−1}. Then Q̃_τ(ζ) is stochastically dominated by the random variable B(Q̃_τ(ζ − 1), P_ζ). By applying Chernoff-Hoeffding bounds ([11], p. 198) on these Bernoulli trials, we get that

IP[ B(Q̃_τ(ζ − 1), P_ζ) ≥ Q̃_τ(ζ − 1)(P_ζ + δ) ] ≤ exp(−2 Q̃_τ(ζ − 1) δ^2), ∀δ > 0
⟹ IP[ B(Q̃_τ(ζ − 1), P_ζ) ≥ Q̃_τ(ζ − 1) P_ζ + √(β ln n · Q̃_τ(ζ − 1)/2) ] ≤ n^{−β}, ∀β > 0,

where we have set δ = √(β ln n / (2 Q̃_τ(ζ − 1))). But recall that P_ζ = Q_τ(ζ)/(θ(a) C[F_ζ]) and also that C[F_ζ] ≈ n/2^ζ. Thus we conclude that, with probability at least 1 − n^{−β},

Q̃_τ(ζ) ≤ B(Q̃_τ(ζ − 1), P_ζ) ≤ Q̃_τ(ζ − 1) · 2^ζ Q_τ(ζ)/(θ(a)n) + √(β ln n · Q̃_τ(ζ − 1)/2).

Since Q_τ(ζ) = Q̃_τ(ζ − 1) − Q̃_τ(ζ), this gives

Q̃_τ(ζ) ≤ Q̃_τ(ζ − 1) · 2^ζ [Q̃_τ(ζ − 1) − Q̃_τ(ζ)]/(θ(a)n) + √(β ln n · Q̃_τ(ζ − 1)/2)
⟹ Q̃_τ(ζ) (1 + 2^ζ Q̃_τ(ζ − 1)/(θ(a)n)) ≤ 2^ζ [Q̃_τ(ζ − 1)]^2/(θ(a)n) + √(β ln n · Q̃_τ(ζ − 1)/2)
⟹ Q̃_τ(ζ) ≤ ( 2^ζ [Q̃_τ(ζ − 1)]^2 + θ(a)n √(β ln n · Q̃_τ(ζ − 1)/2) ) / ( θ(a)n + 2^ζ Q̃_τ(ζ − 1) ),

from which we get the desired bound.
Consider now the following finite sequence:

Q̂(ζ) = max{ 2^{ζ+1} [Q̂(ζ − 1)]^2 / (θ(a) n), √(2β ln n · Q̂(ζ − 1)) }, ζ ∈ [d],   Q̂(0) = m.

5 For the integrity of the representation we set Q̃_τ(0) = m − 1.

We then bound the number of balls that end up in group F_{d+1} by the d-th term of this sequence:
Lemma 4.2. With probability at least 1 − d n^{−β}, at most Q̂(d) balls end up in group F_{d+1}.

Proof: The proof of this lemma is relatively simple and, due to lack of space, it is deferred to the full version of the paper.
Consequently we estimate a closed form for the first terms of the bounding sequence Q̂(ζ), ζ = 0, . . . , d, that was determined earlier:

Lemma 4.3. The first log( log(m/ln n) / log(a/8) ) + 2 terms of the bounding sequence Q̂(ζ), ζ = 0, . . . , d are given by the closed form Q̂(ζ) ≤ (m/2^ζ) (a/8)^{−(2^ζ−1)}.
Proof: Assume that ζ₁ was the first element in the sequence for which

2^{ζ₁+1} [Q̂(ζ₁ − 1)]^2 / (θ(a) n) < √(2β ln n · Q̂(ζ₁ − 1)).   (2)

Then, up to term ζ₁ − 1, the sequence is dominated by the quadratic (left-hand) term of the above inequality, and thus, ∀r < ζ₁,

Q̂(r) = (2^{r+1}/(θ(a)n)) [Q̂(r − 1)]^2
= (2^{r+1}/(θ(a)n)) ( (2^{r}/(θ(a)n)) [Q̂(r − 2)]^2 )^2
= (2^{r+1}/(θ(a)n))^{2^0} (2^{r}/(θ(a)n))^{2^1} (2^{r−1}/(θ(a)n))^{2^2} [Q̂(r − 3)]^{2^3}
= · · ·
= Π_{λ=1}^{r} (2^{r+2−λ}/(θ(a)n))^{2^{λ−1}} [Q̂(0)]^{2^r}
= 2^{Σ_{λ=1}^{r} (r+2−λ) 2^{λ−1}} (1/(θ(a)n))^{Σ_{λ=1}^{r} 2^{λ−1}} [Q̂(0)]^{2^r}
= (2^{3·2^r − r − 3}/(θ(a)n)^{2^r−1}) m^{2^r}
≤ (m/2^r) (a/8)^{−(2^r−1)},

since θ(a)n/(8m) ≥ a/8. We now plug this closed form for Q̂(ζ₁ − 1) into inequality (2). From the resulting inequality and the definition of ζ₁, it is easy to see that

ζ₁ − 1 = max{ r ∈ [d] : (ln m − ln β − ln ln n)/ln 2 − 4 ≥ r + 2^{r−2} ln(a/8) }.

By setting A = (ln m − ln β − ln ln n)/ln 2 − 4 and B = ln(a/8) we get the solution ζ₁ ≥ A − W[(B ln 2/4) exp(A ln 2)]/ln 2 + 1, where W(x) is the Lambert W function ([7]). By approximating this function by ln x − ln ln x (since x = (B ln 2/4) exp(A ln 2) is large), we conclude that

ζ₁ ≥ log( log(m/ln n) / log(a/8) ) + 3.
Lemma 4.4. Assume that m ≥ n and d < (ln ln n − ln ln ln n)/((2β + 1) ln 2). If we set

a = 8 (n/(d 2^{d+3}))^{1/(2^{d+1}−1)},

the cost of OBL is upper bounded (whp) by OBL(m) ≤ (1 + o(1))(a + 1) ADV(m).
Proof: The cost of OBL up to time τ is upper-bounded by max{L_max[F_{d+1}], (a + 1) opt}, since in the first d groups no saturated bin ever gets another ball and all the bins are legitimate. We choose a so that the cost in the first d groups is at least as large as L_max[F_{d+1}] (whp), given the upper bound Q̂(d) on the number of balls that end up in the last group. Thus the probability of ball τ + 1 existing will then be polynomially small, because at least one poll in F_{d+1} will have to be unsaturated whp. This then implies the claimed upper bound on the performance of OBL. Due to lack of space, the complete proof of this lemma is presented in the full version of the paper.
Combining the statements of all these technical lemmas, we conclude with the desired bound on the competitive ratio of OBL.
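To get a feel for how the guarantee of Theorem 2 behaves, the small Python sketch below (not from the paper) evaluates the parameter a = 8 (n/(d 2^{d+3}))^{1/(2^{d+1}−1)} of Lemma 4.4 for a few values of d; the choice n = 10^{12} is an arbitrary assumption made only to show the double-exponential drop of the competitive ratio a + 1.

```python
# Sketch: evaluating the designed parameter a from Lemma 4.4 for increasing d,
# to illustrate how quickly the competitive guarantee (a + 1) falls.

def designed_a(n, d):
    return 8.0 * (n / (d * 2 ** (d + 3))) ** (1.0 / (2 ** (d + 1) - 1))

if __name__ == "__main__":
    n = 10 ** 12                       # illustrative value only
    for d in range(2, 6):
        print(d, round(designed_a(n, d) + 1, 2))
```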
A COMPETITIVE HAPS SCHEDULER
In the previous section we have proposed a hops scheduler
that is based on the knowledge of the size of the input
sequence and then assures that its own performance is never
worse than (a + 1)opt, whp. In this section we propose a
haps
scheduler (call it
ADAPT ) whose main purpose is
to "guess" the value opt of the optimum offline cost by a
classical guessing argument and then let
OBL
do the rest
of the assignments. This approach is in complete analogy
with the online schedulers of the Related Machines-Full
Information problem (see [6], pp. 208-210). The only difference
is that
OBL
has a performance which holds whp,
and this is why the final result also holds whp.
A significant difficulty of
ADAPT is exactly this guessing
mechanism that will have to be based on the limited information
provided to each of the new tasks. Our goal is not to
assume that any kind of additional information (eg, global
environment variables) is provided to the balls, other than
the capacity vector and the current loads of each ball's candidates
.
ADAPT sacrifices one of the available polls per
ball, in order to create such a good guessing mechanism.
Of course, a different approach that would be based on the
outcome of some of the polls (eg, a constant fraction of the
polls) in order to estimate a proper online prediction of opt
would be more interesting in the sense that it would not
create a communication bottleneck for the tasks. Nevertheless
, the purpose of
ADAPT is mainly to demonstrate the
possibility of constructing such an adaptive scheduler whose
performance is close to that of
OBL
.
Let's assume that the system now has n + 1 capacitated bins (C_min = C_1 ≤ C_2 ≤ · · · ≤ C_{n+1} = C_max). Assume also that each new task is provided with 2d + 1 polls. Fix a number r > 1. Then an r-guessing mechanism proceeds in stages: stage ζ (ζ = 0, 1, . . .) contains a (consecutive) subsequence of tasks from the input sequence that use the same prediction π_ζ = r^ζ π_0 for their assignment. We set π_0 = 1/C_max. The following definitions refer to the notions of eligible and saturated machines upon the arrival of a new task t ∈ [m] into the system, which are used in a similar fashion as in [5], where the concept of eligible-saturated machines was introduced:
Definition 5.1. Suppose that task t ≤ m belongs to stage ζ. The set of eligible machines for t is

E_ζ ≡ { i ∈ [n] : 1/C_i ≤ π_ζ = r^ζ π_0 },

while a machine i ∈ [n] is considered to be saturated upon the arrival of task t if λ_{t−1}(i) > a_ζ π_ζ = r^ζ a_ζ π_0, where

a_ζ = 8 max{ 1, |E_ζ|/(d 2^{d+3}) }^{1/(2^{d+1}−1)}.
Notice that the static information of (E_ζ, a_ζ), ζ = 0, 1, . . .
only depends on the capacity vector c and the number d of
polls per task. Thus it can be easily computed by each of
the tasks, or alternatively it can be provided a priori to all
the tasks as additional offline information.
ADAPT proceeds in phases and works as follows: upon the arrival of a new ball t ∈ [m], first the fastest bin in the system is polled and the stage s(t) to which this ball belongs is determined, according to the following rule:

s(t) = ζ ∈ IN : r^{ζ−1} a_{ζ−1} + 1 < q_{t−1}(n + 1) ≤ r^ζ a_ζ + 1

(obviously for stage 0 only the second inequality must hold). The remaining 2d polls of task t are taken from group E_{s(t)} in a fashion similar to that of OBL. The assignment strategy of ADAPT is exactly the same as that of OBL, with the only difference that whenever the first 2d candidates of a task are already saturated, this task is assigned to the fastest machine in the system, n + 1. If the latter event also causes machine n + 1 to become saturated, then by the definition of the stages this task is the last ball of the current stage and a new stage (with an r times larger prediction) starts from the next task on.
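A minimal Python sketch of this r-guessing mechanism, as reconstructed above, is given below; the function names are ours, and the prediction and thresholds follow Definition 5.1 and the stage rule s(t).

```python
# Sketch, under the reconstruction above: capacities has length n + 1 with
# capacities[-1] = C_max; q_fastest is the number of balls currently on machine n + 1.

def eligible(capacities, stage, r):
    c_max = capacities[-1]
    prediction = r ** stage / c_max                 # pi_zeta = r^zeta * pi_0, pi_0 = 1/C_max
    return [i for i, c in enumerate(capacities[:-1]) if 1.0 / c <= prediction]

def a_stage(capacities, stage, r, d):
    e = len(eligible(capacities, stage, r))
    return 8.0 * max(1.0, e / (d * 2 ** (d + 3))) ** (1.0 / (2 ** (d + 1) - 1))

def current_stage(q_fastest, capacities, r, d):
    # first stage whose saturation threshold for machine n + 1 has not yet been crossed
    stage = 0
    while q_fastest > r ** stage * a_stage(capacities, stage, r, d) + 1:
        stage += 1
    return stage
```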
Lemma 5.1. Suppose that ADAPT uses r = 9/4. Let h denote the stage at which the prediction π_h of ADAPT reaches opt for the first time. Then the last stage of ADAPT is at most h + 1.
Proof: The case of h = 0 is easy, since this implies that opt = 1/C_max, and then ADAPT cannot be worse than OBL, which (having the right prediction) succeeds to assign all the incoming tasks (but for those that might fit in machine n + 1 without making it saturated) below (a_0 + 1) opt, whp. So let's consider the case where h ≥ 1.

Let E* denote the set of legitimate machines in the system (ie, E* = {i ∈ [n + 1] : 1/C_i ≤ opt}). The amount of work inserted into the system during the whole assignment process is bounded by C[E*] · opt. By definition of h, r^{h−1}/C_max < opt ≤ r^h/C_max. Stage h + 1 ends when machine n + 1 becomes saturated. Let W(ζ) denote the total amount of work assigned during stage ζ, while R(ζ) denotes the amount of remaining work at the end of stage ζ. We shall prove that the amount of remaining work at the end of stage h is not enough to make OBL fail within stage h + 1.

Observe that at the end of stage h, no machine has exceeded a load of (a_h + 1) π_h = (a_h + 1) r^h/C_max (by definition of stages). On the other hand, during stage h + 1, each of the eligible machines i ∈ E_{h+1} needs a load of more than a_{h+1} π_{h+1} = a_{h+1} r^{h+1}/C_max in order to become saturated for this stage. We denote the additional free space of bin
i ∈ E_{h+1} by

free_i(h + 1) > ( a_{h+1} r^{h+1}/C_max − (a_h + 1) r^h/C_max ) C_i,

while

FREE(h + 1) ≡ Σ_{i∈E_{h+1}} free_i(h + 1)
> ( a_{h+1} r^{h+1}/C_max − (a_h + 1) r^h/C_max ) C[E_{h+1}]
= ( a_{h+1} r^{h+1}/C_max ) ( 1 − (a_h + 1)/(r a_{h+1}) ) C[E_{h+1}]
> (a_{h+1} + 1) opt C[E_{h+1}] ( r a_{h+1}/(a_{h+1} + 1) − (a_h + 1)/(a_{h+1} + 1) )   (3)

is the cumulative free space granted to the eligible bins of the system for stage h + 1. It only remains to prove that the amount of work that has to be dealt with by OBL using only the bins of [n] is less than FREE(h + 1)/(a_{h+1} + 1). Observe now that OBL uses the bins of E_{h+1} and has to deal with an amount of work less than

W(h + 1) − free_{n+1}(h + 1) < R(h) − a_{h+1} (r^h/C_max) ( r − (a_h + 1)/a_{h+1} ) C_max
< C[E*] opt − a_{h+1} (r − 2) opt C_max
< opt ( C[E*] − (r − 2) a_{h+1} C_max )
< (C[E*] − C_max) opt,

if we set r ≥ 17/8. This is because a_{h+1} ≥ a_h ≥ 8 and at the end of stage h + 1 bin n + 1 must have already become saturated. Observe also that E_{h+1} ⊇ E_h ⊇ E* \ {n + 1} by definition of h. Thus, C[E_{h+1}] ≥ C[E*] − C_max. Thus, the amount of work that has to be served by OBL during stage h + 1 is less than opt · C[E_{h+1}]. Then, setting r = 9/4 in inequality (3) assures that

r a_{h+1}/(a_{h+1} + 1) − (a_h + 1)/(a_{h+1} + 1) ≥ 1

(recall that a_{h+1} ≥ a_h ≥ 8), and thus the remaining work at the end of stage h is not enough to make OBL fail.
The following theorem is now a straightforward consequence
of the previous lemma:
Theorem 3. For any input sequence σ of identical tasks that have to be assigned to n related machines using at most 2d + 1 polls per task, the cost of ADAPT is (whp)

ADAPT(σ) < O( (n/(d 2^{d+3}))^{1/(2^{d+1}−1)} + 1 ) · ADV(σ).
CONCLUSIONS
In this work we have studied the problem of exploiting limited
online information for the assignment of a sequence of
unit-size tasks to related machines. We have shown that the
oblivious schedulers that perform asymptotically optimally
in the case of identical machines, deteriorate significantly
in this case. We have then determined an adaptive scheduler
that actually mimics the behaviour of an ideal oblivious
scheduler, in order to achieve roughly the asymptotically optimal
performance similar to the case of the identical machines
.
As for further research, the issue of providing only limited
information to online algorithms is critical in many problems
for which an objective is also the minimization of the communication
cost. In this category fall most of the network
design problems. Another interesting line of research would
be the avoidance of communication bottlenecks for such limited
information online algorithms. Additionally, it would
be very interesting to study the case of tasks of arbitrary
sizes being injected into the system.
ACKNOWLEDGMENTS
The author wishes to thank Paul Spirakis for valuable
discussions during the write-up of the paper and also for
suggesting an appropriate terminology on the categorization
of polling strategies.
REFERENCES
[1] J. Aspnes, Y. Azar, A. Fiat, S. Plotkin, O. Waarts. Online
Machine Scheduling with Applications to Load Balancing
and Virtual Circuit Routing. In Journal of ACM, Vol. 44
(1997), pp. 486-504.
[2] Y. Azar, A. Broder, A. Karlin, E. Upfal. Balanced
Allocations. In Proc. of 26th ACM-STOC (1994), pp.
593-602. Also in SIAM J. Computing, 29 (1999), pp.
180-200.
[3] P. Berman, M. Charikar, M. Karpinski. Online Load
Balancing for Related Machines. In Proc. of 5th Int.
Workshop on Algorithms and Data Structures (1997),
LNCS 1272, Springer-Verlag 1997, pp. 116-125.
[4] P. Berenbrink, A. Czumaj, A. Steger, and B. Vöcking. Balanced Allocations: The Heavily Loaded Case. In Proc. of 32nd ACM-STOC (Portland, 2000), pp. 745-754.
[5] A. Bar-Noy, A. Freund, J. Naor. New Algorithms for
Related Machines with Temporary Jobs. In Journal of
Scheduling, Vol. 3 (2000), pp. 259-272.
[6] A. Borodin, R. El Yaniv. Online Computation and
Competitive Analysis. Cambridge Univ. Press 1998.
[7] R. Corless, G. Gonnet, D. Hare, D. Jeffrey, D. Knuth. On
the Lambert W Function. In Advances in Computational
Mathematics, Vol. 5 (1996), pp. 329-359.
[8] A. Czumaj and B. Vöcking. Tight Bounds for Worst-Case Equilibria. In Proc. of 13th ACM-SIAM SODA (San Francisco, 2002), pp. 413-420.
[9] Y. Azar. Online Load Balancing. In Online Algorithms:
The State of the Art, A. Fiat, G. Woeginger, (Eds.). LNCS
1442, Springer 1998, pp. 178-195.
[10] G. Gonnet. Expected Length of the Longest Probe
Sequence in Hash Code Searching. In J. of ACM, Vol. 28,
No. 2 (1981), pp. 289-304.
[11] M. Habib, C. McDiarmid, J. Ramirez-Alfonsin, B. Reed
(Eds.). Probabilistic Methods for Algorithmic Discrete
Mathematics. ISBN: 3-540-64622-1, Springer-Verlag 1998.
[12] E. Koutsoupias, C. Papadimitriou. Worst-case Equilibria.
In Proc. of 16th Annual Symposium on Theoretical
Aspects of Computer Science (STACS), LNCS 1563,
Springer-Verlag 1999, pp. 404-413.
[13] M. Mavronicolas, P. Spirakis. The Price of Selfish Routing.
In Proc. of 33rd ACM-STOC (Crete, 2001), pp. 510-519.
[14] M. Mitzenmacher, A. Richa, R. Sitaraman. The Power of
Two Random Choices: A Survey of Techniques and
Results. In Handbook of Randomized Algorithms (to
appear). Also available through
http://www.eecs.harvard.edu/ michaelm/.
[15] A. Steger, M. Raab. "Balls into Bins" - A Simple and
Tight Analysis. In Proc. of RANDOM'98, LNCS 1518,
Springer Verlag, 1998, pp. 159-170.
[16] J. Sgall. Online Scheduling. In Online Algorithms: The
State of the Art, A. Fiat, G. Woeginger (Eds.). LNCS
1442, Springer 1998, pp. 196-231.
[17] D. Shmoys, J. Wein, D. Williamson. Scheduling Parallel
Machines Online. In Proc. of the 32nd IEEE-FOCS (1991),
pp. 131-140.
[18] B. Vöcking. How Asymmetry Helps Load Balancing. In Proc. of 40th IEEE-FOCS (New York, 1999), pp. 131-140.
| oblivious scheduler;HOPS;related machines;Limited Information;lower bounds;online information;scheduling;Online Load Balancing;input sequence;unit-size task;Related Machines |
132 | Machine Learning for Information Architecture in a Large Governmental Website | This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately spans the intellectual domain of a collection. The goal of this discovery is twofold. First we desire a practical aid for information architects. Second, automatically derived document-concept relationships are a necessary precondition for real-world deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations. Categories and Subject Descriptors | INTRODUCTION
The GovStat Project is a joint effort of the University
of North Carolina Interaction Design Lab and the University
of Maryland Human-Computer Interaction Lab
1
. Citing
end-user difficulty in finding governmental information
(especially statistical data) online, the project seeks to create
an integrated model of user access to US government
statistical information that is rooted in realistic data models
and innovative user interfaces. To enable such models
and interfaces, we propose a data-driven approach, based
on data mining and machine learning techniques. In particular
, our work analyzes a particular digital library--the
website of the Bureau of Labor Statistics
2
(BLS)--in efforts
to discover a small number of linguistically meaningful concepts
, or "bins," that collectively summarize the semantic
domain of the site.
The project goal is to classify the site's web content according
to these inferred concepts as an initial step towards
data filtering via active user interfaces (cf.
[13]).
Many
digital libraries already make use of content classification,
both explicitly and implicitly; they divide their resources
manually by topical relation; they organize content into hierarchically
oriented file systems. The goal of the present
1
http://www.ils.unc.edu/govstat
2
http://www.bls.gov
research is to develop another means of browsing the content
of these collections. By analyzing the distribution of terms
across documents, our goal is to supplement the agency's
pre-existing information structures. Statistical learning technologies
are appealing in this context insofar as they stand
to define a data-driven--as opposed to an agency-driven--navigational
structure for a site.
Our approach combines supervised and unsupervised learning
techniques. A pure document clustering [12] approach
to such a large, diverse collection as BLS led to poor results
in early tests [6]. But strictly supervised techniques [5] are
inappropriate, too. Although BLS designers have defined
high-level subject headings for their collections, as we discuss
in Section 2, this scheme is less than optimal. Thus we
hope to learn an additional set of concepts by letting the
data speak for themselves.
The remainder of this paper describes the details of our
concept discovery efforts and subsequent evaluation. In Section
2 we describe the previously existing, human-created
conceptual structure of the BLS website. This section also
describes evidence that this structure leaves room for improvement
. Next (Sections 3-5), we turn to a description
of the concepts derived via content clustering under three
document representations: keyword, title only, and full-text.
Section 6 describes a two-part evaluation of the derived conceptual
structures. Finally, we conclude in Section 7 by outlining
upcoming work on the project.
STRUCTURING ACCESS TO THE BLS WEBSITE
The Bureau of Labor Statistics is a federal government
agency charged with compiling and publishing statistics pertaining
to labor and production in the US and abroad. Given
this broad mandate, the BLS publishes a wide array of information
, intended for diverse audiences.
The agency's
website acts as a clearinghouse for this process. With over
15,000 text/html documents (and many more documents if
spreadsheets and typeset reports are included), providing access to the collection poses a steep challenge to information architects.
2.1
The Relation Browser
The starting point of this work is the notion that access
to information in the BLS website could be improved by
the addition of a dynamic interface such as the relation
browser described by Marchionini and Brunk [13]. The relation
browser allows users to traverse complex data sets by
iteratively slicing the data along several topics. In Figure
1 we see a prototype instantiation of the relation browser,
applied to the FedStats website
3
.
The relation browser supports information seeking by allowing
users to form queries in a stepwise fashion, slicing and
re-slicing the data as their interests dictate. Its motivation
is in keeping with Shneiderman's suggestion that queries
and their results should be tightly coupled [2].
3 http://www.fedstats.gov
Figure 1: Relation Browser Prototype
Thus in Figure 1, users might limit their search set to those documents
about "energy." Within this subset of the collection, they
might further eliminate documents published more than a
year ago. Finally, they might request to see only documents
published in PDF format.
As Marchionini and Brunk discuss, capturing the publication
date and format of documents is trivial. But successful
implementations of the relation browser also rely on topical
classification. This presents two stumbling blocks for system
designers:
Information architects must define the appropriate set
of topics for their collection
Site maintainers must classify each document into its
appropriate categories
These tasks parallel common problems in the metadata
community: defining appropriate elements and marking up
documents to support metadata-aware information access.
Given a collection of over 15,000 documents, these hurdles
are especially daunting, and automatic methods of approaching
them are highly desirable.
2.2
A Pre-Existing Structure
Prior to our involvement with the project, designers at
BLS created a shallow classificatory structure for the most
important documents in their website. As seen in Figure 2,
the BLS home page organizes 65 "top-level" documents into
15 categories. These include topics such as Employment and
Unemployment, Productivity, and Inflation and Spending.
Figure 2: The BLS Home Page
We hoped initially that these pre-defined categories could
be used to train a 15-way document classifier, thus automating
the process of populating the relation browser altogether.
However, this approach proved unsatisfactory. In personal
meetings, BLS officials voiced dissatisfaction with the existing
topics. Their form, it was argued, owed as much to
the institutional structure of BLS as it did to the inherent
topology of the website's information space. In other words,
the topics reflected official divisions rather than semantic
clusters. The BLS agents suggested that re-designing this
classification structure would be desirable.
The agents' misgivings were borne out in subsequent analysis
. The BLS topics comprise a shallow classificatory structure
; each of the 15 top-level categories is linked to a small
number of related pages. Thus there are 7 pages associated
with Inflation. Altogether, the link structure of this classificatory
system contains 65 documents; that is, excluding
navigational links, there are 65 documents linked from the
BLS home page, where each hyperlink connects a document
to a topic (pages can be linked to multiple topics). Based on
this hyperlink structure, we defined M, a symmetric 65 × 65 matrix, where m_ij counts the number of topics in which documents i and j are both classified on the BLS home page. To analyze the redundancy inherent in the pre-existing structure, we derived the principal components of M (cf. [11]). Figure 3 shows the resultant scree plot.^4
4 A scree plot shows the magnitude of the k-th eigenvalue versus its rank. During principal component analysis, scree plots visualize the amount of variance captured by each component.
Figure 3: Scree Plot of BLS Categories (eigenvalue magnitude versus eigenvalue rank).
Because all 65 documents belong to at least one BLS topic,
the rank of M is guaranteed to be less than or equal to
15 (hence, eigenvalues 16 . . . 65 = 0). What is surprising
about Figure 3, however, is the precipitous decline in magnitude
among the first four eigenvalues. The four largest
eigenvalues account for 62.2% of the total variance in the
data. This fact suggests a high degree of redundancy among
the topics. Topical redundancy is not in itself problematic.
However, the documents in this very shallow classificatory
structure are almost all gateways to more specific information
. Thus the listing of the Producer Price Index under
three categories could be confusing to the site's users. In
light of this potential for confusion and the agency's own request
for redesign, we undertook the task of topic discovery
described in the following sections.
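The redundancy analysis described above can be reproduced in a few lines of code. The following Python/NumPy sketch is our illustration (not part of the original analysis): it builds the 65 × 65 co-classification matrix M from topic membership lists and returns the fraction of variance captured by each eigenvalue, which is what the scree plot in Figure 3 displays.

```python
import numpy as np

def co_topic_matrix(topic_members, n_docs):
    """M[i, j] = number of home-page topics containing both document i and document j."""
    M = np.zeros((n_docs, n_docs))
    for members in topic_members:          # one list of document indices per topic
        for i in members:
            for j in members:
                M[i, j] += 1
    return M

def scree(M):
    """Eigenvalues of M, largest first, as fractions of the total variance."""
    eigenvalues = np.linalg.eigvalsh(M)[::-1]
    return eigenvalues / eigenvalues.sum()
```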
A HYBRID APPROACH TO TOPIC DISCOVERY
To aid in the discovery of a new set of high-level topics for
the BLS website, we turned to unsupervised machine learning
methods. In efforts to let the data speak for themselves,
we desired a means of concept discovery that would be based
not on the structure of the agency, but on the content of the
material. To begin this process, we crawled the BLS website
, downloading all documents of MIME type text/html.
This led to a corpus of 15,165 documents. Based on this
corpus, we hoped to derive k ≈ 10 topical categories, such that each document d_i is assigned to one or more classes.
Document clustering (cf. [16]) provided an obvious, but
only partial solution to the problem of automating this type
of high-level information architecture discovery. The problems
with standard clustering are threefold.
1. Mutually exclusive clusters are inappropriate for identifying
the topical content of documents, since documents
may be about many subjects.
2. Due to the heterogeneity of the data housed in the
BLS collection (tables, lists, surveys, etc.), many documents'
terms provide noisy topical information.
3. For application to the relation browser, we require a
small number (k ≈ 10) of topics. Without significant
data reduction, term-based clustering tends to deliver
clusters at too fine a level of granularity.
In light of these problems, we take a hybrid approach to
topic discovery.
First, we limit the clustering process to
a sample of the entire collection, described in Section 4.
Working on a focused subset of the data helps to overcome
problems two and three, listed above. To address the problem
of mutual exclusivity, we combine unsupervised with
supervised learning methods, as described in Section 5.
FOCUSING ON CONTENT-RICH DOCUMENTS
To derive empirically evidenced topics we initially turned
to cluster analysis. Let A be the n × p data matrix with n observations on p variables. Thus a_ij shows the measurement for the i-th observation on the j-th variable. As described
in [12], the goal of cluster analysis is to assign each of the
n observations to one of a small number k groups, each of
which is characterized by high intra-cluster correlation and
low inter-cluster correlation. Though the algorithms for accomplishing
such an arrangement are legion, our analysis
focuses on k-means clustering,^5 during which each observation o_i is assigned to the cluster C_k whose centroid is closest
to it, in terms of Euclidean distance. Readers interested in
the details of the algorithm are referred to [12] for a thorough
treatment of the subject.
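For concreteness, a minimal k-means run over TF.IDF document vectors might look as follows in scikit-learn; the paper does not name an implementation, so this is only an assumed, illustrative setup.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_documents(docs, k):
    """Cluster a list of document strings into k groups over TF.IDF term weights."""
    X = TfidfVectorizer(min_df=3).fit_transform(docs)   # drop very rare terms (a rough proxy
                                                         # for the paper's frequency cutoff)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    return km.labels_, km.inertia_                       # assignments, within-cluster SS
```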
Clustering by k-means is well-studied in the statistical
literature, and has shown good results for text analysis (cf.
[8, 16]). However, k-means clustering requires that the researcher
specify k, the number of clusters to define. When
applying k-means to our 15,000 document collection, indicators
such as the gap statistic [17] and an analysis of
the mean-squared distance across values of k suggested that
k ≈ 80 was optimal. This parameterization led to semantically
intelligible clusters. However, 80 clusters are far too
many for application to an interface such as the relation
5
We have focused on k-means as opposed to other clustering
algorithms for several reasons.
Chief among these is the
computational efficiency enjoyed by the k-means approach.
Because we need only a flat clustering there is little to be
gained by the more expensive hierarchical algorithms. In
future work we will turn to model-based clustering [7] as a
more principled method of selecting the number of clusters
and of representing clusters.
browser. Moreover, the granularity of these clusters was unsuitably
fine. For instance, the 80-cluster solution derived
a cluster whose most highly associated words (in terms of
log-odds ratio [1]) were drug, pharmacy, and chemist. These
words are certainly related, but they are related at a level
of specificity far below what we sought.
To remedy the high dimensionality of the data, we resolved
to limit the algorithm to a subset of the collection.
In consultation with employees of the BLS, we continued
our analysis on documents that form a series titled From
the Editor's Desk
6
. These are brief articles, written by BLS
employees. BLS agents suggested that we focus on the Editor's
Desk because it is intended to span the intellectual
domain of the agency. The column is published daily, and
each entry describes an important current issue in the BLS
domain. The Editor's Desk column has been written daily
(five times per week) since 1998. As such, we operated on a
set of N = 1279 documents.
Limiting attention to these 1279 documents not only reduced
the dimensionality of the problem. It also allowed
the clustering process to learn on a relatively clean data set.
While the entire BLS collection contains a great deal of non-prose
text (i.e. tables, lists, etc.), the Editor's Desk documents
are all written in clear, journalistic prose. Each document
is highly topical, further aiding the discovery of term-topic
relations. Finally, the Editor's Desk column provided
an ideal learning environment because it is well-supplied
with topical metadata. Each of the 1279 documents contains
a list of one or more keywords. Additionally, a subset
of the documents (1112) contained a subject heading. This
metadata informed our learning and evaluation, as described
in Section 6.1.
COMBINING SUPERVISED AND UNSUPERVISED LEARNING FOR TOPIC DISCOVERY
To derive suitably general topics for the application of a
dynamic interface to the BLS collection, we combined document
clustering with text classification techniques. Specifically
, using k-means, we clustered each of the 1279 documents
into one of k clusters, with the number of clusters
chosen by analyzing the within-cluster mean squared distance
at different values of k (see Section 6.1). Constructing
mutually exclusive clusters violates our assumption that
documents may belong to multiple classes. However, these
clusters mark only the first step in a two-phase process of
topic identification. At the end of the process, document-cluster
affinity is measured by a real-valued number.
Once the Editor's Desk documents were assigned to clusters
, we constructed a k-way classifier that estimates the
strength of evidence that a new document d_i is a member of class C_k. We tested three statistical classification techniques
: probabilistic Rocchio (prind), naive Bayes, and support
vector machines (SVMs). All were implemented using
McCallum's BOW text classification library [14]. Prind is a
probabilistic version of the Rocchio classification algorithm
[9]. Interested readers are referred to Joachims' article for
6
http://www.bls.gov/opub/ted
further details of the classification method. Like prind, naive
Bayes attempts to classify documents into the most probable
class. It is described in detail in [15]. Finally, support
vector machines were thoroughly explicated by Vapnik [18],
and applied specifically to text in [10]. They define a decision
boundary by finding the maximally separating hyperplane
in a high-dimensional vector space in which document
classes become linearly separable.
Having clustered the documents and trained a suitable
classifier, the remaining 14,000 documents in the collection
are labeled by means of automatic classification. That is, for
each document d_i we derive a k-dimensional vector, quantifying the association between d_i and each class C_1 . . . C_k.
Deriving topic scores via naive Bayes for the entire 15,000-document
collection required less than two hours of CPU
time. The output of this process is a score for every document
in the collection on each of the automatically discovered
topics. These scores may then be used to populate a
relation browser interface, or they may be added to a traditional
information retrieval system. To use these weights in
the relation browser we currently assign to each document
the two topics on which it scored highest. In future work we
will adopt a more rigorous method of deriving document-topic
weight thresholds. Also, evaluation of the utility of
the learned topics for users will be undertaken.
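The two-phase pipeline can be summarized in a short sketch. The code below uses scikit-learn's multinomial naive Bayes as a stand-in for the BOW library used in the study, and keeps the two highest-scoring topics per document as described above; it is illustrative only, and all names are ours.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def score_collection(seed_docs, seed_labels, all_docs, top=2):
    """Train on the clustered seed documents, then score every document in the crawl."""
    vec = CountVectorizer()
    model = MultinomialNB().fit(vec.fit_transform(seed_docs), seed_labels)
    scores = model.predict_log_proba(vec.transform(all_docs))      # one k-vector per document
    # keep the two highest-scoring topics per document, as the current interface does
    top_topics = np.argsort(scores, axis=1)[:, -top:][:, ::-1]
    return scores, top_topics
```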
EVALUATION OF CONCEPT DISCOVERY
Prior to implementing a relation browser interface and
undertaking the attendant user studies, it is of course important
to evaluate the quality of the inferred concepts, and
the ability of the automatic classifier to assign documents
to the appropriate subjects. To evaluate the success of the
two-stage approach described in Section 5, we undertook
two experiments. During the first experiment we compared
three methods of document representation for the clustering
task. The goal here was to compare the quality of document
clusters derived by analysis of full-text documents,
documents represented only by their titles, and documents
represented by human-created keyword metadata. During
the second experiment, we analyzed the ability of the statistical
classifiers to discern the subject matter of documents
from portions of the database in addition to the Editor's
Desk.
6.1
Comparing Document Representations
Documents from The Editor's Desk column came supplied
with human-generated keyword metadata. Additionally
, the titles of the Editor's Desk documents tend to be
germane to the topic of their respective articles. With such
an array of distilled evidence of each document's subject
matter, we undertook a comparison of document representations
for topic discovery by clustering. We hypothesized
that keyword-based clustering would provide a useful model.
But we hoped to see whether comparable performance could
be attained by methods that did not require extensive human
indexing, such as the title-only or full-text representations
. To test this hypothesis, we defined three modes of
document representation--full-text, title-only, and keyword
only--and generated three sets of topics, T_full, T_title, and T_kw, respectively.
Topics based on full-text documents were derived by application of k-means clustering to the 1279 Editor's Desk documents, where each document was represented by a 1908-dimensional vector. These 1908 dimensions captured the TF.IDF weights [3] of each term t_i in document d_j, for all terms that occurred at least three times in the data. To arrive at the appropriate number of clusters for these data, we inspected the within-cluster mean-squared distance for each value of k = 1 . . . 20. As k approached 10 the reduction in error with the addition of more clusters declined notably, suggesting that k ≈ 10 would yield good divisions. To select a single integer value, we calculated which value of k led to the least variation in cluster size. This metric stemmed from a desire to suppress the common result where one large cluster emerges from the k-means algorithm, accompanied by several accordingly small clusters. Without reason to believe that any single topic should have dramatically high prior odds of document membership, this heuristic led to k_full = 10.
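A sketch of this k-selection heuristic is shown below; it is our reconstruction in scikit-learn, and the candidate range around k = 10 is an assumption made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def size_variance(labels, k):
    """Variance of cluster sizes; small values mean evenly sized clusters."""
    return np.bincount(labels, minlength=k).var()

def choose_k(X, candidates=range(8, 13)):
    """Inspect the elbow of within-cluster SS, then prefer the k with the most even clusters."""
    inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
                for k in range(1, 21)}
    best = min(candidates,
               key=lambda k: size_variance(
                   KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).labels_, k))
    return inertias, best
```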
Clusters based on document titles were constructed similarly. However, in this case, each document was represented in the vector space spanned by the 397 terms that occur at least twice in document titles. Using the same method of minimizing the variance in cluster membership, k_title, the number of clusters in the title-based representation, was also set to 10.
The dimensionality of the keyword-based clustering was
very similar to that of the title-based approach. There were
299 keywords in the data, all of which were retained. The
median number of keywords per document was 7, where a
keyword is understood to be either a single word, or a multi-word
term such as "consumer price index." It is worth noting
that the keywords were not drawn from any controlled vocabulary
; they were assigned to documents by publishers at
the BLS. Using the keywords, the documents were clustered
into 10 classes.
To evaluate the clusters derived by each method of document
representation, we used the subject headings that were
included with 1112 of the Editor's Desk documents. Each
of these 1112 documents was assigned one or more subject
headings, which were withheld from all of the cluster applications
. Like the keywords, subject headings were assigned
to documents by BLS publishers. Unlike the keywords, however
, subject headings were drawn from a controlled vocabulary
. Our analysis began with the assumption that documents
with the same subject headings should cluster together
. To facilitate this analysis, we took a conservative
approach; we considered multi-subject classifications to be
unique. Thus if document d_i was assigned to the single subject prices, while document d_j was assigned to two subjects, international comparisons, prices, documents d_i and d_j are not considered to come from the same class.
Table 1 shows all Editor's Desk subject headings that were
assigned to at least 10 documents. As noted in the table,
Table 1: Top Editor's Desk Subject Headings
Subject -- Count
prices -- 92
unemployment -- 55
occupational safety & health -- 53
international comparisons, prices -- 48
manufacturing, prices -- 45
employment -- 44
productivity -- 40
consumer expenditures -- 36
earnings & wages -- 27
employment & unemployment -- 27
compensation costs -- 25
earnings & wages, metro. areas -- 18
benefits, compensation costs -- 18
earnings & wages, occupations -- 17
employment, occupations -- 14
benefits -- 14
earnings & wage, regions -- 13
work stoppages -- 12
earnings & wages, industries -- 11
Total -- 609
Table 2: Contingency Table for Three Document Representations
Representation -- Right -- Wrong -- Accuracy
Full-text -- 392 -- 217 -- 0.64
Title -- 441 -- 168 -- 0.72
Keyword -- 601 -- 8 -- 0.98
there were 19 such subject headings, which altogether covered
609 (54%) of the documents with subjects assigned.
These document-subject pairings formed the basis of our
analysis. Limiting analysis to subjects with N > 10 kept
the resultant χ2 tests suitably robust.
The clustering derived by each document representation
was tested by its ability to collocate documents with the
same subjects. Thus for each of the 19 subject headings
in Table 1, S_i, we calculated the proportion of documents assigned to S_i that each clustering co-classified. Further,
we assumed that whichever cluster captured the majority of
documents for a given class constituted the "right answer"
for that class. For instance, there were 92 documents whose
subject heading was prices. Taking the BLS editors' classifications
as ground truth, all 92 of these documents should
have ended up in the same cluster. Under the full-text representation
52 of these documents were clustered into category
5, while 35 were in category 3, and 5 documents were in category
6. Taking the majority cluster as the putative right
home for these documents, we consider the accuracy of this
clustering on this subject to be 52/92 = 0.56. Repeating
this process for each topic across all three representations
led to the contingency table shown in Table 2.
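The accuracy computation described above (majority cluster as the "right answer" for each subject heading, restricted to the headings with enough documents) can be expressed as the following Python sketch; variable names are ours.

```python
from collections import Counter

def majority_cluster_accuracy(subjects, clusters, min_count=10):
    """subjects[i] is the withheld subject string of document i, clusters[i] its learned cluster."""
    by_subject = {}
    for subj, cl in zip(subjects, clusters):
        by_subject.setdefault(subj, []).append(cl)
    right = wrong = 0
    for subj, assigned in by_subject.items():
        if len(assigned) < min_count:
            continue                                   # keep the test robust, as in the text
        majority = Counter(assigned).most_common(1)[0][1]
        right += majority
        wrong += len(assigned) - majority
    return right, wrong, right / (right + wrong)
```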
The obvious superiority of the keyword-based clustering
evidenced by Table 2 was borne out by a χ2 test on the
accuracy proportions. Comparing the proportion right and
Table 3: Keyword-Based Clusters (cluster label -- three most highly associated terms)
benefits -- plans, benefits, employees
costs -- compensation, costs, benefits
international -- import, prices, petroleum
jobs -- employment, jobs, youth
occupations -- workers, earnings, operators
prices -- prices, index, inflation
productivity -- productivity, output, nonfarm
safety -- safety, health, occupational
spending -- expenditures, consumer, spending
unemployment -- unemployment, mass, jobless
wrong achieved by keyword and title-based clustering led to
p < 0.001. Due to this result, in the remainder of this paper,
we focus our attention on the clusters derived by analysis of
the Editor's Desk keywords. The ten keyword-based clusters
are shown in Table 3, represented by the three terms most
highly associated with each cluster, in terms of the log-odds
ratio. Additionally, each cluster has been given a label by
the researchers.
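For reference, a simple way to recover such per-cluster terms is to rank the vocabulary by log-odds ratio, as in the sketch below; the smoothing constant is our addition, and the authors' exact computation may differ in detail.

```python
import numpy as np

def top_terms_by_log_odds(X, labels, terms, k, top=3, smooth=0.5):
    """X: binary document-term NumPy array; labels: cluster assignments; terms: vocabulary list."""
    tops = {}
    for c in range(k):
        in_c, out_c = X[labels == c], X[labels != c]
        p1 = (in_c.sum(axis=0) + smooth) / (len(in_c) + 2 * smooth)    # P(term | cluster)
        p0 = (out_c.sum(axis=0) + smooth) / (len(out_c) + 2 * smooth)  # P(term | rest)
        log_odds = np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))
        tops[c] = [terms[i] for i in np.argsort(log_odds)[::-1][:top]]
    return tops
```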
Evaluating the results of clustering is notoriously difficult.
In order to lend our analysis suitable rigor and utility, we
made several simplifying assumptions. Most problematic is
the fact that we have assumed that each document belongs
in only a single category. This assumption is certainly false.
However, by taking an extremely rigid view of what constitutes
a subject--that is, by taking a fully qualified and
often multipart subject heading as our unit of analysis--we
mitigate this problem. Analogically, this is akin to considering
the location of books on a library shelf. Although a
given book may cover many subjects, a classification system
should be able to collocate books that are extremely similar,
say books about occupational safety and health. The most
serious liability with this evaluation, then, is the fact that
we have compressed multiple subject headings, say prices :
international into single subjects. This flattening obscures
the multivalence of documents. We turn to a more realistic
assessment of document-class relations in Section 6.2.
6.2
Accuracy of the Document Classifiers
Although the keyword-based clusters appear to classify
the Editor's Desk documents very well, their discovery only
solved half of the problem required for the successful implementation
of a dynamic user interface such as the relation
browser. The matter of roughly fourteen thousand
unclassified documents remained to be addressed. To solve
this problem, we trained the statistical classifiers described
above in Section 5.
For each document in the collection d_i, these classifiers give p_i, a k-vector of probabilities or distances (depending on the classification method used), where p_ik quantifies the strength of association between the i-th document and the k-th class. All classifiers were trained on
the full text of each document, regardless of the representation
used to discover the initial clusters. The different
training sets were thus constructed simply by changing the
Table 4: Cross Validation Results for 3 Classifiers
Method -- Av. Percent Accuracy -- SE
Prind -- 59.07 -- 1.07
Naive Bayes -- 75.57 -- 0.4
SVM -- 75.08 -- 0.68
class variable for each instance (document) to reflect its assigned
cluster under a given model.
To test the ability of each classifier to locate documents
correctly, we first performed a 10-fold cross validation on
the Editor's Desk documents. During cross-validation the
data are split randomly into n subsets (in this case n = 10).
The process proceeds by iteratively holding out each of the
n subsets as a test collection for a model trained on the
remaining n - 1 subsets. Cross validation is described in
[15]. Using this methodology, we compared the performance
of the three classification models described above. Table 4
gives the results from cross validation.
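A 10-fold cross-validation comparison along these lines can be run with scikit-learn as sketched below (prind has no off-the-shelf analogue, so only naive Bayes and a linear SVM stand-in are shown); this is an illustrative reconstruction, not the study's original BOW-based setup.

```python
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def compare_classifiers(X, y):
    """X: term-count matrix of the 1279 documents; y: their cluster labels."""
    results = {}
    for name, clf in [("naive Bayes", MultinomialNB()), ("SVM", LinearSVC())]:
        scores = cross_val_score(clf, X, y, cv=10)                   # accuracy per fold
        results[name] = (scores.mean(), scores.std() / len(scores) ** 0.5)  # mean, approx. SE
    return results
```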
Although naive Bayes is not significantly more accurate
for these data than the SVM classifier, we limit the remainder
of our attention to analysis of its performance.
Our
selection of naive Bayes is due to the fact that it appears to
work comparably to the SVM approach for these data, while
being much simpler, both in theory and implementation.
Because we have only 1279 documents and 10 classes, the
number of training documents per class is relatively small.
In addition to models fitted to the Editor's Desk data, then,
we constructed a fourth model, supplementing the training
sets of each class by querying the Google search engine
7
and
applying naive Bayes to the augmented training set. For each
class, we created a query by submitting the three terms
with the highest log-odds ratio with that class. Further,
each query was limited to the domain www.bls.gov.
For
each class we retrieved up to 400 documents from Google
(the actual number varied depending on the size of the result
set returned by Google).
This led to a training set
of 4113 documents in the "augmented model," as we call
it below
8
. Cross validation suggested that the augmented
model decreased classification accuracy (accuracy= 58.16%,
with standard error= 0.32). As we discuss below, however,
augmenting the training set appeared to help generalization
during our second experiment.
The results of our cross validation experiment are encouraging
. However, the success of our classifiers on the Editor's
Desk documents that informed the cross validation study
may not be good predictors of the models' performance on
the remainder of the BLS website. To test the generality
of the naive Bayes classifier, we solicited input from 11 human
judges who were familiar with the BLS website. The
sample was chosen by convenience, and consisted of faculty
and graduate students who work on the GovStat project.
However, none of the reviewers had prior knowledge of the
outcome of the classification before their participation. For
the experiment, a random sample of 100 documents was
drawn from the entire BLS collection. On average each
7 http://www.google.com
8 A more formal treatment of the combination of labeled and unlabeled data is available in [4].
Table 5: Human-Model Agreement on 100 Sample Docs.
Human Judge 1st Choice:
Model -- Model 1st Choice -- Model 2nd Choice
N. Bayes (aug.) -- 14 -- 24
N. Bayes -- 24 -- 1
Human Judge 2nd Choice:
Model -- Model 1st Choice -- Model 2nd Choice
N. Bayes (aug.) -- 14 -- 21
N. Bayes -- 21 -- 4
reviewer classified 83 documents, placing each document into
as many of the categories shown in Table 3 as he or she saw
fit.
Results from this experiment suggest that room for improvement
remains with respect to generalizing to the whole
collection from the class models fitted to the Editor's Desk
documents. In Table 5, we see, for each classifier, the number
of documents for which its first or second most probable
class was voted best or second best by the 11 human judges.
In the context of this experiment, we consider a first- or
second-place classification by the machine to be accurate
because the relation browser interface operates on a multi-way
classification, where each document is classified into
multiple categories. Thus a document with the "correct"
class as its second choice would still be easily available to
a user. Likewise, a correct classification on either the most
popular or second most popular category among the human
judges is considered correct in cases where a given document
was classified into multiple classes. There were 72 multi-class
documents in our sample, as seen in Figure 4. The
remaining 28 documents were assigned to 1 or 0 classes.
Under this rationale, the augmented naive Bayes classifier correctly grouped 73 documents, while the smaller model (not augmented by a Google search) correctly classified 50. The resultant χ2 test gave p = 0.001, suggesting that increasing
the training set improved the ability of the naive
Bayes model to generalize from the Editor's Desk documents
to the collection as a whole. However, the improvement afforded
by the augmented model comes at some cost. In particular
, the augmented model is significantly inferior to the
model trained solely on Editor's Desk documents if we concern
ourselves only with documents selected by the majority
of human reviewers--i.e. only first-choice classes. Limiting
the right answers to the left column of Table 5 gives p = 0.02
in favor of the non-augmented model. For the purposes of
applying the relation browser to complex digital library content
(where documents will be classified along multiple categories
), the augmented model is preferable. But this is not
necessarily the case in general.
It must also be said that 73% accuracy under a fairly
liberal test condition leaves room for improvement in our
assignment of topics to categories. We may begin to understand
the shortcomings of the described techniques by
consulting Figure 5, which shows the distribution of categories
across documents given by humans and by the augmented
naive Bayes model. The majority of reviewers put
Figure 4: Number of Classes Assigned to Documents by Judges (frequency by number of human-assigned classes).
documents into only three categories, jobs, benefits, and occupations
. On the other hand, the naive Bayes classifier distributed
classes more evenly across the topics. This behavior
suggests areas for future improvement. Most importantly,
we observed a strong correlation among the three most frequent
classes among the human judges (for instance, there
was 68% correlation between benefits and occupations). This
suggests that improving the clustering to produce topics
that were more nearly orthogonal might improve performance.
CONCLUSIONS AND FUTURE WORK
Many developers and maintainers of digital libraries share
the basic problem pursued here. Given increasingly large,
complex bodies of data, how may we improve access to collections
without incurring extraordinary cost, and while also
keeping systems receptive to changes in content over time?
Data mining and machine learning methods hold a great deal
of promise with respect to this problem. Empirical methods
of knowledge discovery can aid in the organization and
retrieval of information. As we have argued in this paper,
these methods may also be brought to bear on the design
and implementation of advanced user interfaces.
This study explored a hybrid technique for aiding information
architects as they implement dynamic interfaces such
as the relation browser. Our approach combines unsupervised
learning techniques, applied to a focused subset of the
BLS website. The goal of this initial stage is to discover the
most basic and far-reaching topics in the collection. Based
jobs
prices
spending
costs
Human Classifications
0.00
0.15
jobs
prices
spending
costs
Machine Classifications
0.00
0.10
Figure 5: Distribution of Classes Across Documents
on a statistical model of these topics, the second phase of
our approach uses supervised learning (in particular, a naive
Bayes classifier, trained on individual words), to assign topical
relations to the remaining documents in the collection.
In the study reported here, this approach has demonstrated
promise. In its favor, our approach is highly scalable.
It also appears to give fairly good results. Comparing three
modes of document representation--full-text, title only, and
keyword--we found 98% accuracy as measured by collocation
of documents with identical subject headings. While it
is not surprising that editor-generated keywords should give
strong evidence for such learning, their superiority over full-text
and titles was dramatic, suggesting that even a small
amount of metadata can be very useful for data mining.
However, we also found evidence that learning topics from
a subset of the collection may lead to overfitted models.
After clustering 1279 Editor's Desk documents into 10 categories
, we fitted a 10-way naive Bayes classifier to categorize
the remaining 14,000 documents in the collection. While we
saw fairly good results (classification accuracy of 75% with
respect to a small sample of human judges), this experiment
forced us to reconsider the quality of the topics learned by
clustering. The high correlation among human judgments
in our sample suggests that the topics discovered by analysis
of the Editor's Desk were not independent. While we do
not desire mutually exclusive categories in our setting, we
do desire independence among the topics we model.
Overall, then, the techniques described here provide an
encouraging start to our work on acquiring subject metadata
for dynamic interfaces automatically. This work also suggests
that a more sophisticated modeling approach might yield
better results in the future. In upcoming work we will experiment
with streamlining the two-phase technique described
here.
Instead of clustering documents to find topics and
then fitting a model to the learned clusters, our goal is to
expand the unsupervised portion of our analysis beyond a
narrow subset of the collection, such as The Editor's Desk.
In current work we have defined algorithms to identify documents
likely to help the topic discovery task. Supplied with
a more comprehensive training set, we hope to experiment
with model-based clustering, which combines the clustering
and classification processes into a single modeling procedure.
Topic discovery and document classification have long been
recognized as fundamental problems in information retrieval
and other forms of text mining. What is increasingly clear,
however, as digital libraries grow in scope and complexity,
is the applicability of these techniques to problems at the
front-end of systems such as information architecture and
interface design. Finally, then, in future work we will build
on the user studies undertaken by Marchionini and Brunk
in efforts to evaluate the utility of automatically populated
dynamic interfaces for the users of digital libraries.
REFERENCES
[1] A. Agresti. An Introduction to Categorical Data
Analysis. Wiley, New York, 1996.
[2] C. Ahlberg, C. Williamson, and B. Shneiderman.
Dynamic queries for information exploration: an
implementation and evaluation. In Proceedings of the
SIGCHI conference on Human factors in computing
systems, pages 619-626, 1992.
[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern
Information Retrieval. ACM Press, 1999.
[4] A. Blum and T. Mitchell. Combining labeled and
unlabeled data with co-training. In Proceedings of the
eleventh annual conference on Computational learning
theory, pages 92-100. ACM Press, 1998.
[5] H. Chen and S. Dumais. Hierarchical classification of
web content. In Proceedings of the 23rd annual
international ACM SIGIR conference on Research and
development in information retrieval, pages 256-263,
2000.
[6] M. Efron, G. Marchionini, and J. Zhang. Implications
of the recursive representation problem for automatic
concept identification in on-line governmental
information. In Proceedings of the ASIST Special
Interest Group on Classification Research (ASIST
SIG-CR), 2003.
[7] C. Fraley and A. E. Raftery. How many clusters?
which clustering method? answers via model-based
cluster analysis. The Computer Journal,
41(8):578-588, 1998.
[8] A. K. Jain, M. N. Murty, and P. J. Flynn. Data
clustering: a review. ACM Computing Surveys,
31(3):264-323, September 1999.
[9] T. Joachims. A probabilistic analysis of the Rocchio
algorithm with TFIDF for text categorization. In
D. H. Fisher, editor, Proceedings of ICML-97, 14th
International Conference on Machine Learning, pages
143-151, Nashville, US, 1997. Morgan Kaufmann
Publishers, San Francisco, US.
[10] T. Joachims. Text categorization with support vector
machines: learning with many relevant features. In
C. Nedellec and C. Rouveirol, editors, Proceedings of ECML-98, 10th European Conference on Machine Learning, pages 137-142, Chemnitz, DE, 1998.
Springer Verlag, Heidelberg, DE.
[11] I. T. Jolliffe. Principal Component Analysis. Springer,
2nd edition, 2002.
[12] L. Kaufman and P. J. Rousseeuw. Finding Groups in
Data: an Introduction to Cluster Analysis. Wiley,
1990.
[13] G. Marchionini and B. Brunk. Toward a general
relation browser: a GUI for information architects.
Journal of Digital Information, 4(1), 2003.
http://jodi.ecs.soton.ac.uk/Articles/v04/i01/Marchionini/.
[14] A. K. McCallum. Bow: A toolkit for statistical
language modeling, text retrieval, classification and
clustering. http://www.cs.cmu.edu/~mccallum/bow,
1996.
[15] T. Mitchell. Machine Learning. McGraw Hill, 1997.
[16] E. Rasmussen. Clustering algorithms. In W. B. Frakes
and R. Baeza-Yates, editors, Information Retrieval:
Data Structures and Algorithms, pages 419-442.
Prentice Hall, 1992.
[17] R. Tibshirani, G. Walther, and T. Hastie. Estimating
the number of clusters in a dataset via the gap
statistic, 2000.
http://citeseer.nj.nec.com/tibshirani00estimating.html.
[18] V. N. Vapnik. The Nature of Statistical Learning
Theory. Springer, 2000.
| information architecture;Information Architecture;BLS;digital libraries;document classification;Machine Learning;topic discovery;document representation;Interface Design;clustering;relational browser |
133 | Machine Learning in DNA Microarray Analysis for Cancer Classification | The development of microarray technology has supplied a large volume of data to many fields. In particular, it has been applied to prediction and diagnosis of cancer, so that it expectedly helps us to exactly predict and diagnose cancer. To precisely classify cancer we have to select genes related to cancer because extracted genes from microarray have many noises. In this paper, we attempt to explore many features and classifiers using three benchmark datasets to systematically evaluate the performances of the feature selection methods and machine learning classifiers. Three benchmark datasets are Leukemia cancer dataset, Colon cancer dataset and Lymphoma cancer data set. Pearson's and Spearman's correlation coefficients, Euclidean distance, cosine coefficient, information gain, mutual information and signal to noise ratio have been used for feature selection. Multi-layer perceptron, k-nearest neighbour, support vector machine and structure adaptive selforganizing map have been used for classification. Also, we have combined the classifiers to improve the performance of classification. Experimental results show that the ensemble with several basis classifiers produces the best recognition rate on the benchmark dataset. | Introduction
The need to study the whole genome, as in the Human Genome Project (HGP), has recently been increasing because fragmentary knowledge about life phenomena with complex molecular-level control functions is limited. DNA chips were developed during that effort because understanding the functions of genome sequences is essential.
The development of DNA microarray technology has produced a large amount of gene data and has made it easy to monitor the expression patterns of thousands of
genes simultaneously under particular experimental
environments and conditions (Harrington et al. 2000).
Also, we can analyze the gene information very rapidly
and precisely by managing them at one time (Eisen et al.
1999).
Microarray technology has been applied to the accurate prediction and diagnosis of cancer and is expected to be helpful there. In particular, accurate classification of cancer is a very important issue for its treatment.
Many researchers have been studying many problems of
cancer classification using gene expression profile data
and attempting to propose the optimal classification
technique to work out these problems (Dudoit et al. 2000,
Ben-Dor et al. 2000) as shown in Table . Some
produce better results than others, but there has still been no comprehensive work comparing the possible
feature selection methods and classifiers. We need a
thorough effort to give the evaluation of the possible
methods to solve the problems of analyzing gene
expression data.
The gene expression data usually consist of a huge number of genes, and the need for tools that analyze them to extract useful information is becoming acute. There is research that systematically analyzes test results using a variety of feature selection methods and classifiers for selecting informative genes and classifying cancer (Ryu et al. 2002). However, the results were not sufficiently verified because only one benchmark dataset was used. For this reason, it is necessary to systematically analyze the performance of classifiers using a variety of benchmark datasets.
In this paper, we attempt to explore many features and
classifiers that precisely classify cancer using three
recently published benchmark datasets. We adopted seven
feature selection methods and four classifiers, which are
commonly used in the field of data mining and pattern
recognition. Feature selection methods include Pearson's
and Spearman's correlation coefficients, Euclidean
distance, cosine coefficient, information gain, mutual
information and signal to noise ratio. Also, classification
methods include multi-layer perceptron (MLP), k-nearest
neighbour (KNN), support vector machine (SVM) and
structure adaptive self-organizing map (SASOM). We also
attempt to combine some of the classifiers with majority
voting to improve the performance of classification.
Backgrounds
DNA arrays consist of a large number of DNA molecules
spotted in a systemic order on a solid substrate.
Depending on the size of each DNA spot on the array,
DNA arrays can be categorized as microarrays when the
diameter of DNA spot is less than 250 microns, and
macroarrays when the diameter is bigger than 300
microns. The arrays with the small solid substrate are also
referred to as DNA chips. It is so powerful that we can
investigate the gene information in short time, because at
least hundreds of genes can be put on the DNA
microarray to be analyzed.
[Fig. 1. General process of acquiring the gene expression data from DNA microarray: the DNA microarray is read by an image scanner and converted into gene expression data (e.g., ALL/AML samples over 7,129 genes).]
DNA microarrays are composed of thousands of
individual DNA sequences printed in a high density array
on a glass microscope slide using a robotic arrayer as
shown in Fig. 1. The relative abundance of these spotted
DNA sequences in two DNA or RNA samples may be
assessed by monitoring the differential hybridization of
the two samples to the sequences on the array. For
mRNA samples, the two samples are reverse-transcribed
into cDNA, labeled using different fluorescent dyes
mixed (red-fluorescent dye Cy5 and green-fluorescent
dye Cy3). After the hybridization of these samples with
the arrayed DNA probes, the slides are imaged using
scanner that makes fluorescence measurements for each
dye. The log ratio between the two intensities of each dye
is used as the gene expression data (Lashkari et al. 1997,
Derisi et al. 1997, Eisen et al. 1998).
$\mathrm{gene\_expression} = \log_2 \dfrac{\mathrm{Int}(\mathrm{Cy5})}{\mathrm{Int}(\mathrm{Cy3})}$    (1)
where Int(Cy5) and Int(Cy3) are the intensities of the red and green colors. Since at least hundreds of genes are put on the DNA microarray, it is very helpful in that we can investigate genome-wide information in a short time.
Table. Relevant works on cancer classification (accuracy in %)
  Furey et al. (feature: signal to noise ratio; classifier: SVM): Leukemia 94.1, Colon 90.3
  Li et al. 2000 (model selection with Akaike information criterion and Bayesian information criterion with logistic regression): Leukemia 94.1
  Li et al. 2001 (feature: genetic algorithm; classifier: KNN): Lymphoma 84.6~, Colon 94.1~
  Ben-Dor et al. (feature: all genes, TNoM score):
    nearest neighbor: Leukemia 91.6, Colon 80.6
    SVM with quadratic kernel: Leukemia 94.4, Colon 74.2
    AdaBoost: Leukemia 95.8, Colon 72.6
  Dudoit et al. (feature: the ratio of between-groups to within-groups sum of squares):
    nearest neighbor: Leukemia 95.0~, Lymphoma 95.0~
    diagonal linear discriminant analysis: Leukemia 95.0~, Lymphoma 95.0~
    BoostCART: Leukemia 95.0~, Lymphoma 90.0~
  Nguyen et al.:
    principal component analysis + logistic discriminant: Leukemia 94.2, Lymphoma 98.1, Colon 87.1
    principal component analysis + quadratic discriminant analysis: Leukemia 95.4, Lymphoma 97.6, Colon 87.1
    partial least square + logistic discriminant: Leukemia 95.9, Lymphoma 96.9, Colon 93.5
    partial least square + quadratic discriminant analysis: Leukemia 96.4, Lymphoma 97.4, Colon 91.9
2.2 Related Works
It is essential to efficiently analyze DNA microarray data
because the amount of DNA microarray data is usually
very large. The analysis of DNA microarray data is
divided into four branches: clustering, classification, gene
identification, and gene regulatory network modeling.
Many machine learning and data mining methods have
been applied to solve them.
Information theory (Fuhrman et al. 2000) has been
applied to gene identification problem. Also, boolean
network (Thieffry et al. 1998), Bayesian network
(Friedman et al. 2000), and reverse engineering method
(Arkin et al. 1997) have been applied to gene regulatory
network modeling problem.
Several machine learning techniques have been
previously used in classifying gene expression data,
including Fisher linear discriminant analysis (Dudoit et al.
2000), k nearest neighbour (Li et al. 2001), decision tree,
multi-layer perceptron (Khan et al. 2001, Xu et al. 2002),
support vector machine (Furey et al. 2000, Brown et al.
2000), boosting, and self-organizing map (Golub et al.
1999). Also, many machine learning techniques
have been used in clustering gene expression data
(Shamir 2001). They include hierarchical clustering
(Eisen et al. 1998), self-organizing map (Tamayo et al.
1999), and graph theoretic approaches (Hartuv et al. 2000,
Ben-Dor et al. 1999, Sharan et al. 2000).
The first approach, classification method, is called
supervised method while the second approach, clustering
method, is called unsupervised method. Clustering
methods do not use any tissue annotation (e.g., tumor vs.
normal) in the partitioning step. In contrast, classification
methods attempt to predict the classification of new
tissues, based on their gene expression profiles after
training on examples (training data) that have been
classified by an external "supervision" (Ben-Dor et al.
2000). Table shows relevant works on cancer
classification.
Machine Learning for DNA Microarray
We define machine learning for DNA microarrays as a procedure that selects discriminative genes related to classification from gene expression data, trains a classifier, and then classifies new data using the learned classifier. The system is
as shown in Fig. 2. After acquiring the gene expression
data calculated from the DNA microarray, our prediction
system has 2 stages: feature selection and pattern
classification stages.
The feature selection can be thought of as the gene
selection, which is to get the list of genes that might be
informative for the prediction by statistical, information
theoretical methods, etc. Since it is highly unlikely that
all the 7,129 genes have the information related to the
cancer, and using all the genes results in too high a dimensionality, it is necessary to explore an efficient way to get the best features. We have extracted 25 genes
using seven methods described in Section 3.1, and the
cancer predictor classifies the category only with these
genes.
Given the gene list, a classifier decides to which category the gene pattern belongs at the prediction stage. We
have adopted four most widely used classification
methods and an ensemble classifier as shown in Fig. 2.
[Fig. 2. Cancer classification system: microarray expression data is passed to a feature selection stage (Pearson's correlation coefficient, Spearman's correlation coefficient, Euclidean distance, cosine coefficient, information gain, mutual information, signal to noise ratio) and then to a cancer predictor (3-layered MLP with backpropagation, k-nearest neighbor, support vector machine, structure adaptive self-organizing map, ensemble classifier) that outputs tumor or normal.]
3.1 Gene Selection
Among thousands of genes whose expression levels are
measured, not all are needed for classification.
Microarray data consist of a large number of genes in small
samples. We need to select some genes highly related
with particular classes for classification, which is called
informative genes (Golub et al. 1999). This process is
referred to as gene selection. It is also called feature
selection in machine learning.
Using the statistical correlation analysis, we can see the
linear relationship and the direction of relation between
two variables. The correlation coefficient r varies from -1 to
+1, so that the data distributed near the line biased to (+)
direction will have positive coefficients, and the data near
the line biased to (-) direction will have negative
coefficients.
Suppose that we have a gene expression pattern $g_i$ (i = 1 ~ 7,129 in Leukemia data, i = 1 ~ 2,000 in Colon data, i = 1 ~ 4,026 in Lymphoma data). Each $g_i$ is a vector of gene expression levels from N samples, $g_i = (e_1, e_2, e_3, \ldots, e_N)$. The first M elements $(e_1, e_2, \ldots, e_M)$ are examples of tumor samples, and the other N-M elements $(e_{M+1}, e_{M+2}, \ldots, e_N)$ are those from normal samples. An ideal gene pattern that belongs to the tumor class is defined by $g_{ideal\_tumor} = (1, 1, \ldots, 1, 0, \ldots, 0)$, so that all the elements from tumor samples are 1 and the others are 0. In this paper, we have calculated the correlation coefficient between this $g_{ideal}$ and the expression pattern of each gene. When we have two vectors X and Y that contain N elements, $r_{Pearson}$ and $r_{Spearman}$ are calculated as follows:
$r_{Pearson} = \dfrac{\sum XY - \frac{\sum X \sum Y}{N}}{\sqrt{\left(\sum X^2 - \frac{(\sum X)^2}{N}\right)\left(\sum Y^2 - \frac{(\sum Y)^2}{N}\right)}}$    (2)

$r_{Spearman} = 1 - \dfrac{6 \sum (D_x - D_y)^2}{N(N^2 - 1)}$    (3)
where $D_x$ and $D_y$ are the rank matrices of X and Y, respectively.
The similarity between two input vectors X and Y can be thought of as a distance. Distance is a measure of how far apart the two vectors are located, and the distance between $g_{ideal\_tumor}$ and $g_i$ tells us how likely $g_i$ is to belong to the tumor class. Calculating the distance between them, if it is bigger than a certain threshold, the gene $g_i$ would belong to the tumor class; otherwise $g_i$ belongs to the normal class. In this paper, we have adopted the Euclidean distance ($r_{Euclidean}$) and cosine coefficient ($r_{Cosine}$) represented by the following equations:
$r_{Euclidean} = \sqrt{\sum (X - Y)^2}$    (4)

$r_{Cosine} = \dfrac{\sum XY}{\sqrt{\sum X^2 \sum Y^2}}$    (5)
We have utilized the information gain and mutual information that are widely used in many fields such as text categorization and data mining. If we count the number of genes excited ($P(g_i)$) or not excited ($P(\bar{g}_i)$) in category $c_j$ ($P(c_j)$), the coefficients of the information gain and mutual information become as follows:
$IG(g_i, c_j) = P(g_i, c_j)\log\dfrac{P(g_i, c_j)}{P(c_j)P(g_i)} + P(\bar{g}_i, c_j)\log\dfrac{P(\bar{g}_i, c_j)}{P(c_j)P(\bar{g}_i)}$    (6)

$MI(g_i, c_j) = \log\dfrac{P(g_i, c_j)}{P(c_j)P(g_i)}$    (7)
Mutual information tells us the dependency relationship
between two probabilistic variables of events. If two
events are completely independent, the mutual
information is 0. The more they are related, the higher the
mutual information gets. Information gain is used when
the features of samples are extracted by inducing the
relationship between gene and class by the presence
frequency of the gene in the sample. Information gain
measures the goodness of gene using the presence and
absence within the corresponding class.
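For concreteness, here is a small sketch of how the two scores could be computed from estimated probabilities, following the reconstructed Eqs. (6) and (7); the probability values at the bottom are purely illustrative assumptions.

    import math

    def information_gain(p_gc, p_gbar_c, p_g, p_gbar, p_c):
        # IG(g, c) = P(g,c) log[P(g,c)/(P(g)P(c))] + P(~g,c) log[P(~g,c)/(P(~g)P(c))]
        ig = 0.0
        if p_gc > 0:
            ig += p_gc * math.log(p_gc / (p_g * p_c))
        if p_gbar_c > 0:
            ig += p_gbar_c * math.log(p_gbar_c / (p_gbar * p_c))
        return ig

    def mutual_information(p_gc, p_g, p_c):
        # MI(g, c) = log[P(g,c)/(P(g)P(c))]
        return math.log(p_gc / (p_g * p_c))

    # Illustrative probabilities estimated from presence/absence counts (assumed values).
    p_g, p_gbar, p_c = 0.4, 0.6, 0.5
    p_gc, p_gbar_c = 0.3, 0.2
    print(information_gain(p_gc, p_gbar_c, p_g, p_gbar, p_c))
    print(mutual_information(p_gc, p_g, p_c))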
For each gene $g_i$, some expression values are from tumor samples and some are from normal samples. If we calculate the mean $\mu$ and standard deviation $\sigma$ of the distribution of gene expressions within each class, the signal to noise ratio of gene $g_i$, $SN(g_i)$, is defined by:
$SN(g_i) = \dfrac{\mu_{tumor}(g_i) - \mu_{normal}(g_i)}{\sigma_{tumor}(g_i) + \sigma_{normal}(g_i)}$    (8)
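To illustrate the gene selection step, the sketch below scores each gene by the signal to noise ratio of Eq. (8) and keeps the 25 top-ranked genes; the expression matrix, labels, and the use of absolute scores for ranking are assumptions for illustration only.

    import numpy as np

    def signal_to_noise(expr, labels):
        # expr: genes x samples matrix; labels: 1 for tumor, 0 for normal samples.
        tumor, normal = expr[:, labels == 1], expr[:, labels == 0]
        mu_t, mu_n = tumor.mean(axis=1), normal.mean(axis=1)
        sd_t, sd_n = tumor.std(axis=1), normal.std(axis=1)
        return (mu_t - mu_n) / (sd_t + sd_n + 1e-12)   # epsilon guards against zero deviation

    # Placeholder data: 7,129 genes x 38 training samples with random values.
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(7129, 38))
    labels = np.array([1] * 27 + [0] * 11)

    scores = signal_to_noise(expr, labels)
    top25 = np.argsort(np.abs(scores))[::-1][:25]      # indices of the 25 top-ranked genes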
3.2 Classification
Many algorithms designed for solving classification
problems in machine learning have been applied to recent
research of prediction and classification of cancer with
gene expression data. The general process of classification in machine learning is to train a classifier to accurately
recognize patterns from given training samples and to
classify test samples with the trained classifier.
Representative classification algorithms such as
multi-layer perceptron, k-nearest neighbour, support
vector machine, and structure-adaptive self-organizing
map are applied to the classification.
1) MLP
Error backpropagation neural network is a feed-forward
multilayer perceptron (MLP) that is applied in many
fields due to its powerful and stable learning algorithm
(Lippman et al. 1987). The neural network learns the
training examples by adjusting the synaptic weight of
neurons according to the error occurred on the output
layer. The power of the backpropagation algorithm lies in
two main aspects: it is local in updating the synaptic weights and biases, and efficient in computing all the partial
derivatives of the cost function with respect to these free
parameters (Beale 1996). The weight-update rule in
backpropagation algorithm is defined as follows:
$\Delta w_{ji}(n) = \eta\, \delta_j\, x_{ji} + \alpha\, \Delta w_{ji}(n-1)$    (9)
where $\Delta w_{ji}(n)$ is the weight update performed during the nth iteration through the main loop of the algorithm, $\eta$ is a positive constant called the learning rate, $\delta_j$ is the error term associated with unit j, $x_{ji}$ is the input from node i to unit j, and $0 \le \alpha < 1$ is a constant called the momentum.
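A minimal, self-contained sketch of one application of this weight-update rule, with arbitrary assumed values for the learning rate and momentum:

    def update_weight(w, prev_delta, delta_j, x_ji, eta=0.1, alpha=0.9):
        # Eq. (9): delta_w(n) = eta * delta_j * x_ji + alpha * delta_w(n-1)
        delta_w = eta * delta_j * x_ji + alpha * prev_delta
        return w + delta_w, delta_w

    # One illustrative update of a single synaptic weight (values are arbitrary).
    w, prev_delta = 0.5, 0.0
    w, prev_delta = update_weight(w, prev_delta, delta_j=0.2, x_ji=1.0)
    print(w)   # 0.52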
2) KNN
k-nearest neighbor (KNN) is one of the most common
methods among memory based induction. Given an input
vector, KNN extracts k closest vectors in the reference set
based on similarity measures, and makes decision for the
label of input vector using the labels of the k nearest
neighbors.
Pearson's correlation coefficient and Euclidean distance have been used as the similarity measures. When we have an input X and a reference set $D = \{d_1, d_2, \ldots, d_N\}$, the probability that X may belong to class $c_j$, $P(X, c_j)$, is defined as follows:
$P(X, c_j) = \sum_{d_i \in kNN} \mathrm{Sim}(X, d_i)\, P(d_i, c_j) - b_j$    (10)
where $\mathrm{Sim}(X, d_i)$ is the similarity between X and $d_i$ and $b_j$ is a bias term.
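The following sketch illustrates this similarity-weighted voting with Pearson correlation as Sim and the bias term omitted for simplicity; the reference data are random placeholders.

    import numpy as np

    def knn_predict(x, refs, ref_labels, k=3):
        # Score each class by summing the similarities of the k most similar references (cf. Eq. 10).
        sims = np.array([np.corrcoef(x, r)[0, 1] for r in refs])
        nearest = np.argsort(sims)[::-1][:k]
        scores = {}
        for i in nearest:
            scores[ref_labels[i]] = scores.get(ref_labels[i], 0.0) + sims[i]
        return max(scores, key=scores.get)

    # Placeholder reference set: 10 samples described by 25 selected genes, two classes.
    rng = np.random.default_rng(1)
    refs = rng.normal(size=(10, 25))
    ref_labels = ["tumor"] * 5 + ["normal"] * 5
    print(knn_predict(rng.normal(size=25), refs, ref_labels, k=3))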
3) SASOM
Self-organizing map (SOM) defines a mapping from the
input space onto an output layer by an unsupervised learning
algorithm. SOM has an output layer consisting of N nodes,
each of which represents a vector that has the same
dimension as the input pattern. For a given input vector X,
the winner node $m_c$ is chosen using the Euclidean distance between x and its neighbors, $m_i$:
$\|x - m_c\| = \min_i \|x - m_i\|$    (11)
$m_i(t+1) = m_i(t) + \alpha(t)\, n_{ci}(t)\, \{x(t) - m_i(t)\}$    (12)
Even though SOM is well known for its good
performance of topology preserving, it is difficult to
apply it to practical classification since the topology
should be fixed before training. A structure adaptive
self-organizing map (SASOM) is proposed to overcome
this shortcoming (Kim et al. 2000). SASOM starts with
a 4x4 map, and dynamically splits the output nodes of the
map, where the data from different classes are mixed,
trained with the LVQ learning algorithm. Fig. 3 illustrates
the algorithm of SASOM.
[Fig. 3. Overview of SASOM: initialize the map as 4x4; train with Kohonen's algorithm; find nodes whose hit ratio is less than 95.0%; split those nodes into 2x2 submaps and train the split nodes with the LVQ algorithm; remove nodes not participating in learning; repeat the structure adaptation until the stop condition is satisfied.]
4) SVM
Support vector machine (SVM) estimates the function
classifying the data into two classes (Vapnik 1995,
Moghaddam et al. 2000). SVM builds up a hyperplane as
the decision surface in such a way to maximize the
margin of separation between positive and negative
examples. SVM achieves this by the structural risk
minimization principle that the error rate of a learning
machine on the test data is bounded by the sum of the
training-error rate and a term that depends on the
Vapnik-Chervonenkis (VC) dimension. Given a labeled
set of M training samples $(X_i, Y_i)$, where $X_i \in R^N$ and $Y_i$ is the associated label, $Y_i \in \{-1, 1\}$, the discriminant hyperplane is defined by:
$f(X) = \sum_{i=1}^{M} Y_i\, \alpha_i\, k(X, X_i) + b$    (13)
where $k(\cdot,\cdot)$ is a kernel function and the sign of f(X) determines the membership of X. Constructing an optimal hyperplane is equivalent to finding all the nonzero $\alpha_i$ (support vectors) and a bias b. We have used the SVM_light module and SVM_RBF in this paper.
5) Ensemble classifier
Classification can be defined as the process to
approximate I/O mapping from the given observation to
the optimal solution. Generally, classification tasks
consist of two parts: feature selection and classification.
Feature selection is a transformation process of
observations to obtain the best pathway to get to the
optimal solution. Therefore, considering multiple features
encourages obtaining various candidate solutions, so that
we can estimate a solution closer to the optimum than any single local optimum would give.
When we have multiple features available, it is important to know which of them should be used. Theoretically, the more features we consider, the more effective the classifier may be at solving the problem. But features with overlapping feature spaces may introduce redundant or irrelevant information and cause counter effects such as overfitting. Therefore, it is more important to explore and utilize independent features to train classifiers than to simply increase the number of features we use. Correlation between feature sets can be induced from the distribution of feature numbers, or through mathematical analysis based on statistics.
Meanwhile, there are many classification algorithms in machine learning, but none of them is perfect, and it is always difficult to decide which one to use and how to set its parameters. Depending on the environment in which the classifier is embedded, some algorithms work well and others do not, because, depending on the algorithms, features and parameters used, the classifier searches a different solution space. These sets of classifiers produce their own outputs and enable the ensemble classifier to explore a wider solution space.
We have applied this idea to a classification framework as shown in Fig. 4. If there are k features and n classifiers, there are kn feature-classifier combinations, and there are ${}_{kn}C_m$ possible ensemble classifiers when m feature-classifier combinations are selected for the ensemble. The classifiers are trained using the selected features, and finally majority voting is applied to combine their outputs. After the classifiers, each trained independently with its own feature, produce their outputs, the final answer is judged by a combining module that adopts the majority voting method.
[Fig 4. Overview of the ensemble classifier: an input pattern is passed to a feature extractor producing features 1 to k; each feature feeds several classifiers, and the classifier outputs are combined by majority voting into the final class.]
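As a rough sketch of this combination scheme, the snippet below applies majority voting to the outputs of m chosen feature-classifier combinations and enumerates the ${}_{kn}C_m$ candidate ensembles; the hard-coded votes and the 42 combinations are illustrative assumptions.

    from collections import Counter
    from itertools import combinations

    def majority_vote(predictions):
        # Combine the outputs of the selected feature-classifier combinations.
        return Counter(predictions).most_common(1)[0][0]   # ties broken arbitrarily

    # Hypothetical outputs of three feature-classifier pairs on one test sample.
    print(majority_vote(["tumor", "normal", "tumor"]))     # -> "tumor"

    # Enumerating the candidate ensembles of size m = 3 drawn from kn = 42 combinations.
    feature_classifier_pairs = [f"combo{i}" for i in range(42)]   # 7 features x 6 classifiers
    ensembles = list(combinations(feature_classifier_pairs, 3))
    print(len(ensembles))                                  # 11480 possible ensembles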
Experimental Results
There are several microarray datasets from published
cancer gene expression studies, including leukemia
cancer dataset, colon cancer dataset, lymphoma dataset,
breast cancer dataset, NCI60 dataset, and ovarian cancer
dataset. Among them three datasets are used in this paper.
The first and third datasets involve samples from two variants of the same disease, and the second dataset involves tumor and normal samples of the same tissue.
Because the benchmark data have been studied in many
papers, we can compare the results of this paper with
others.
1) Leukemia cancer dataset
Leukemia dataset consists of 72 samples: 25 samples of
acute myeloid leukemia (AML) and 47 samples of acute
lymphoblastic leukemia (ALL). The source of the gene
expression measurements was taken form 63 bone
marrow samples and 9 peripheral blood samples. Gene
expression levels in these 72 samples were measured
using high density oligonucleotide microarrays (Ben-Dor
et al. 2000).
38 out of 72 samples were used as training data and the
remaining were used as test data in this paper. Each
sample contains 7129 gene expression levels.
2) Colon cancer dataset
Colon dataset consists of 62 samples of colon epithelial
cells taken from colon-cancer patients. Each sample
contains 2000 gene expression levels. Although original
data consists of 6000 gene expression levels, 4000 out of
6000 were removed based on the confidence in the
measured expression levels. 40 of 62 samples are colon
cancer samples and the remaining are normal samples.
Each sample was taken from tumors and normal healthy
parts of the colons of the same patients and measured
using high density oligonucleotide arrays (Ben-Dor et al.
2000).
31 out of 62 samples were used as training data and the
remaining were used as test data in this paper.
3) Lymphoma cancer dataset
B cell diffuse large cell lymphoma (B-DLCL) is a
heterogeneous group of tumors, based on significant
variations in morphology, clinical presentation, and
response to treatment. Gene expression profiling has
revealed two distinct tumor subtypes of B-DLCL:
germinal center B cell-like DLCL and activated B
cell-like DLCL (Lossos et al. 2000). Lymphoma dataset
consists of 24 samples of GC B-like and 23 samples of
activated B-like.
22 out of 47 samples were used as training data and the
remaining were used as test data in this paper.
4.2 Environments
For feature selection, each gene is scored based on the
feature selection methods described in Section 3.1, and
the 25 top-ranked genes are chosen as the feature of the
input pattern.
For classification, we have used 3-layered MLP with
5~15 hidden nodes, 2 output nodes, 0.01~0.50 of learning
rate and 0.9 of momentum. KNN has been used with
k=1~8. Similarity measures used in KNN are Pearson's
correlation coefficient and Euclidean distance. SASOM
has been used with a 4x4 map with rectangular topology,
0.05 of initial learning rate, 1000 of initial learning length,
10 of initial radius, 0.02 of final learning rate, 10000 of
final learning length and 3 of final radius. We have used
SVM with linear function and RBF function as kernel
function. In the RBF case, we have varied the gamma parameter over 0.1~0.5.
4.3 Analysis of results
Table shows the IDs of genes overlapped by
Pearson's correlation coefficient, cosine coefficient,
Euclidean distance in each dataset. Among these genes
there are some genes overlapped by other feature
selection methods. For example, gene 2288 of leukemia
has been third-ranked in information gain. The number of
overlapped genes of leukemia dataset is 17. The number
of overlapped genes of colon dataset is 9. The number of
overlapped genes of lymphoma dataset is 19. These
overlapped genes are very informative. In particular,
Zyxin, gene 4847 of leukemia, has been reported as
informative (Golub et al. 1999), but no genes appear commonly in every method.
Table. The IDs of genes overlapped by Pearson's correlation coefficient, cosine coefficient, and Euclidean distance
Leukemia: 461, 1249, 1745, 1834, 2020, 2043, 2242, 2288, 3258, 3320, 4196, 4847, 5039, 6200, 6201, 6373, 6803
Colon: 187, 619, 704, 767, 1060, 1208, 1546, 1771, 1772
Lymphoma: 36, 75, 76, 77, 86, 86, 678, 680, 1636, 1637, 2225, 2243, 2263, 2412, 2417, 2467, 3890, 3893, 3934
Fig. 5 shows the expression level of genes chosen by
Pearson's correlation coefficient method in Leukemia
dataset. 1~27 samples are ALL and 28~38 samples are
AML. The differences in brightness between AML and ALL indicate that the genes chosen by the Pearson's correlation coefficient method separate the samples into AML and ALL.
The recognition rates on the test data are shown in the following tables. The first column lists the feature selection methods: Pearson's correlation coefficient (PC), Spearman's correlation coefficient (SC), Euclidean distance (ED), cosine coefficient (CC), information gain (IG), mutual information (MI), and signal to noise ratio (SN). KNN_Pearson and MLP seem to produce the best recognition rates among the classifiers on average. KNN_Pearson is better than KNN_cosine. SVM is poorer than any other classifier.
[Fig. 5. Expression level of genes chosen by r_Pearson in the Leukemia dataset (25 genes across 38 samples; samples 1-27 are ALL, 28-38 are AML).]
Table. Recognition rate with features and classifiers (%) in Leukemia dataset
      MLP   SASOM  SVM(Linear)  SVM(RBF)  KNN(Cosine)  KNN(Pearson)
PC    97.1  76.5   79.4         79.4      97.1         94.1
SC    82.4  61.8   58.8         58.8      76.5         82.4
ED    91.2  73.5   70.6         70.6      85.3         82.4
CC    94.1  88.2   85.3         85.3      91.2         94.1
IG    97.1  91.2   97.1         97.1      94.1         97.1
MI    58.8  58.8   58.8         58.8      73.5         73.5
SN    76.5  67.7   58.8         58.8      73.5         73.5
Mean  85.3  74.0   72.7         72.7      84.5         85.3
Table. Recognition rate with features and classifiers (%) in Colon dataset
      MLP   SASOM  SVM(Linear)  SVM(RBF)  KNN(Cosine)  KNN(Pearson)
PC    74.2  74.2   64.5         64.5      71.0         77.4
SC    58.1  45.2   64.5         64.5      61.3         67.7
ED    67.8  67.6   64.5         64.5      83.9         83.9
CC    83.9  64.5   64.5         64.5      80.7         80.7
IG    71.0  71.0   71.0         71.0      74.2         80.7
MI    71.0  71.0   71.0         71.0      74.2         80.7
SN    64.5  45.2   64.5         64.5      64.5         71.0
Mean  70.1  62.7   66.4         66.4      72.7         77.4
Table. Recognition rate with features and classifiers (%) in Lymphoma dataset
      MLP   SASOM  SVM(Linear)  SVM(RBF)  KNN(Cosine)  KNN(Pearson)
PC    64.0  48.0   56.0         60.0      60.0         76.0
SC    60.0  68.0   44.0         44.0      60.0         60.0
ED    56.0  52.0   56.0         56.0      56.0         68.0
CC    68.0  52.0   56.0         56.0      60.0         72.0
IG    92.0  84.0   92.0         92.0      92.0         92.0
MI    72.0  64.0   64.0         64.0      80.0         64.0
SN    76.0  76.0   72.0         76.0      76.0         80.0
Mean  69.7  63.4   62.9         63.4      69.1         73.1
Fig. 6 shows the comparison of the average performance
of features. Although the results are different between
datasets, information gain is the best, and Pearson's
correlation coefficient is the second. Mutual information
and Spearman's correlation coefficient are poor. The
difference in performance across datasets might be caused by
the characteristics of data.
[Fig. 6. Average performance of feature selection methods: recognition rate (%) of PC, SC, ED, CC, IG, MI and SN on the Leukemia, Colon and Lymphoma datasets.]
Recognition rates by ensemble classifiers are shown in
Table . Majority voting-3 means the ensemble
classifier using majority voting with 3 classifiers, and
majority voting-all means the ensemble classifier using
majority voting with all 42 feature-classifier
combinations. Fig. 7 compares the performance of the best single classifier, the best of all possible ${}_{42}C_3$ ensemble classifiers (ensemble classifier-3), and ensemble classifier-all. The best result on Leukemia is obtained by all classifiers except SASOM, and the result of the best classifier is the same as that of the best ensemble classifier using majority voting with 3 classifiers. In the other datasets, the performance of the ensemble classifier surpasses the best single classifier. In all datasets, the ensemble classifier using majority voting with all classifiers is the worst.
Table. Recognition rate by ensemble classifier (%)
           Majority voting-3  Majority voting-all
Leukemia   97.1               91.2
Colon      93.6               71.0
Lymphoma   96.0               80.0
[Fig. 7. Comparison of the performance of the best classifier, the best ensemble classifier-3, and ensemble classifier-all: recognition rate (%) on the Leukemia, Colon and Lymphoma datasets.]
Table. Classifiers of the best ensemble classifiers among all possible ${}_{42}C_3$ ensemble classifiers in the Colon dataset (each row lists the three classifiers with their feature selection methods)
1. MLP (cosine coefficient); KNN_cosine (Euclidean distance); KNN_cosine (Pearson's correlation coefficient)
2. MLP (cosine coefficient); KNN_cosine (Euclidean distance); KNN_Pearson (Pearson's correlation coefficient)
3. MLP (cosine coefficient); KNN_cosine (Euclidean distance); SASOM (Pearson's correlation coefficient)
4. MLP (mutual information); KNN_cosine (Euclidean distance); KNN_Pearson (Pearson's correlation coefficient)
5. MLP (information gain); KNN_cosine (Euclidean distance); KNN_Pearson (Pearson's correlation coefficient)
6. MLP (cosine coefficient); MLP (Pearson's correlation coefficient); KNN_Pearson (Euclidean distance)
7. KNN_Pearson (Euclidean distance); KNN_Pearson (mutual information); SASOM (Pearson's correlation coefficient)
8. KNN_Pearson (Euclidean distance); KNN_Pearson (information gain); SASOM (Pearson's correlation coefficient)
The table above shows the classifiers of the best ensemble classifiers among all possible ${}_{42}C_3$ ensemble classifiers in the Colon dataset, whose recognition rate is 93.6%. If we observe the classifiers of the best ensemble classifier in Fig. 10, we find that the features affect the result more than the classifiers do. In other words, each ensemble must contain classifiers using Euclidean distance and Pearson's correlation coefficient; the other classifier is one using cosine coefficient, mutual information or information gain.
This fact is also prominent in Lymphoma dataset. Most of
the classifiers of the best ensemble classifiers are
classifiers with information gain, signal to noise ratio and
Euclidean distance, or the classifiers with information
gain, signal to noise ratio and Pearson's correlation
coefficient.
As shown in Fig. 8~11, Euclidean distance, Pearson's
correlation coefficient and cosine coefficient are highly
correlated in Colon dataset. Table shows genes ranked
by Euclidean distance, Pearson's correlation coefficient
and cosine coefficient and the value of genes by each
method. The bold faced figures mean the overlapped
genes of those features. There are some overlapped genes
among them, as shown in the table. This indicates that the overlapped genes of highly correlated features can discriminate classes, while the genes that are not overlapped among the combined features can supplement the search of the solution space. For example, gene 1659 and gene 550 are high-ranked in both Pearson's correlation coefficient and cosine coefficient, and gene 440 is high-ranked in both Euclidean distance and cosine coefficient. Such subsets of features might play an important role in classification.
This paper shows that the ensemble classifier works and
we can improve the classification performance by
combining complementary common sets of classifiers
learned from three independent features, even when we
use a simple combination method like majority voting.
Fig. 8. Correlation of Euclidean distance and Pearson's
correlation coefficient in Colon dataset
Fig. 9. Correlation of Euclidean distance and cosine
coefficient in Colon dataset
Fig. 10. Correlation of Pearson's correlation coefficient
and cosine coefficient in Colon dataset
Fig. 11. Correlation of Euclidean distance, Pearson's
correlation coefficient and cosine coefficient in Colon
dataset
Table. Genes ranked by Euclidean distance, Pearson's correlation coefficient and cosine coefficient (gene ID with score in parentheses)
Rank  Euclidean        Pearson          Cosine
1     619 (2.262385)   619 (0.681038)   619 (0.895971)
2     767 (2.335303)   1771 (0.664378)  1772 (0.875472)
3     704 (2.374358)   1659 (0.634084)  767 (0.874914)
4     187 (2.388404)   550 (0.631655)   1771 (0.873892)
5     207 (2.410640)   187 (0.626262)   1659 (0.870115)
6     887 (2.473033)   1772 (0.621581)  187 (0.867285)
7     635 (2.474971)   1730 (0.615566)  704 (0.866679)
8     1915 (2.498611)  1648 (0.614949)  1208 (0.866029)
9     1046 (2.506833)  365 (0.614591)   550 (0.864547)
10    1208 (2.512257)  1208 (0.603313)  1546 (0.856904)
11    482 (2.520699)   1042 (0.602160)  251 (0.855841)
12    1771 (2.525080)  1060 (0.601712)  1915 (0.855784)
13    1993 (2.529032)  513 (0.596444)   440 (0.855453)
14    62 (2.546894)    767 (0.594119)   1263 (0.854854)
15    1772 (2.547455)  1263 (0.591725)  1060 (0.854829)
16    1194 (2.549244)  138 (0.587851)   965 (0.854137)
17    1594 (2.551892)  1826 (0.584774)  1648 (0.854119)
18    199 (2.557360)   1546 (0.582293)  1942 (0.853586)
19    1867 (2.587469)  141 (0.579073)   513 (0.852270)
20    959 (2.589989)   1227 (0.574537)  1042 (0.851993)
21    440 (2.593881)   704 (0.569022)   1993 (0.851753)
22    480 (2.594514)   1549 (0.562828)  365 (0.851205)
23    1546 (2.604907)  1489 (0.561003)  1400 (0.849531)
24    399 (2.613609)   1724 (0.559919)  207 (0.849084)
25    1060 (2.614100)  1209 (0.559778)  271 (0.848481)
Concluding Remarks
We have conducted a thorough quantitative comparison
among the 42 combinations of features and classifiers for
three benchmark datasets. Information gain and Pearson's
correlation coefficient are the top feature selection
methods, and MLP and KNN are the best classifiers. The
experimental results also imply some correlations
between features and classifiers, which might guide the
researchers to choose or devise the best classification
method for their problems in bioinformatics. Based on the
results, we have developed the optimal feature-classifier
combination to produce the best performance on the
classification.
We have combined 3 classifiers among 42 classifiers
using majority voting. We could confirm that an ensemble classifier of highly correlated features works better than an ensemble of uncorrelated features. In particular, we
analyzed the improvement of the classification
performance for Colon dataset.
Moreover, our method of combining classifiers is very
simple, and there are many methods of combining
classifiers in machine learning and data mining fields. We
will have to apply more sophisticated methods of
combining classifiers to the same dataset to confirm the
results obtained and get better results.
Acknowledgements
This paper was supported by Brain Science and
Engineering Research Program sponsored by Korean
Ministry of Science and Technology.
References
Alon, U., Barkai, N., et al. (1999): Broad patterns of gene
expression revealed by clustering analysis of tumor and
normal colon tissues probed by oligonucleotide arrays.
Proc. of the Natl. Acad. of Sci. USA, 96:6745-6750.
Arkin, A., Shen, P. and Ross, J. (1997): A test case of
correlation metric construction of a reaction pathway
from measurements. Science, 277:1275-1279.
Beale, H. D. (1996): Neural Network Design. 1-47, PWS
Publish Company.
Ben-Dor, A., Shamir, R. and Yakhini, Z. (1999):
Clustering gene expression patterns. Journal of
Computational Biology, 6:281-297.
Ben-Dor, A., Bruhn, L., Friedman, N., Nachman, I.,
Schummer, M. and Yakhini, N. (2000): Tissue
classification with gene expression profiles. Journal of
Computational Biology, 7:559-584.
Brown, M. P. S., Grundy, W. N., Lin, D., Cristianini, N.,
Sugnet, C. W., Furey, T. S., Ares, M. Jr. and Haussler,
D. (2000): Knowledge-based analysis of microarray
gene expression data by using support vector machines.
Proc. of the Natl. Acad. of Sci. USA, 97:262-267, 2000.
Derisi, J., Iyer, V. and Brosn, P. (1997): Exploring the
metabolic and genetic control of gene expression on a
genomic scale. Science, 278:680-686.
Dudoit, S., Fridlyand, J. and Speed, T. P. (2000):
Comparison of discrimination methods for the
classification of tumors using gene expression data.
Technical Report 576, Department of Statistics,
University of California, Berkeley.
Eisen, M. B., Spellman, P. T., Brown, P. O. and Bostein,
D. (1998): Cluster analysis and display of
genome-wide expression patterns. Proc. of the Natl.
Acad. of Sci. USA, 95:14863-14868.
Eisen, M. B. and Brown, P. O. (1999): DNA arrays for
analysis of gene expression. Methods Enzymbol, 303:
179-205.
Friedman, N., Linial, M., Nachman, I. and Pe'er, D.
(2000): Using Bayesian networks to analyze expression
data. Journal of Computational Biology, 7:601-620.
Fuhrman, S., Cunningham, M. J., Wen, X., Zweiger, G.,
Seilhamer, J. and Somogyi, R. (2000): The application
of Shannon entropy in the identification of putative
drug targets. Biosystems, 55:5-14.
Furey, T. S., Cristianini, N., Duffy, N., Bednarski, D. W.,
Schummer, M. and Haussler, D. (2000): Support vector
machine classification and validation of cancer tissue
samples using microarray expression data.
Bioinformatics, 16(10):906-914.
Golub, T. R., Slonim, D. K., Tamayo, P., Huard, C.,
GaasenBeek, M., Mesirov, J. P., Coller, H., Loh, M. L.,
Downing, J. R., Caligiuri, M. A., Blomfield, C. D., and
Lander, E. S. (1999): Molecular classification of
cancer: Class discovery and class prediction by
gene-expression monitoring. Science, 286:531-537.
Harrington, C. A., Rosenow, C., and Retief, J. (2000):
Monitoring gene expression using DNA microarrays.
Curr. Opin. Microbiol., 3:285-291.
Hartuv, E., Schmitt, A., Lange, J., Meier-Ewert, S.,
Lehrach, H. and Shamir, R. (2000): An algorithm for
clustering cDNA fingerprints. Genomics,
66(3):249-256.
Khan, J., Wei, J. S., Ringner, M., Saal, L. H., Ladanyi, M.,
Westermann, F., Berthold, F., Schwab, M., Antonescu,
C. R., Peterson, C. And Meltzer, P. S. (2001):
Classification and diagnostic prediction of cancers
using gene expression profiling and artificial neural
networks. Nature Medicine, 7(6):673-679.
Kim, H. D. and Cho, S.-B. (2000): Genetic optimization
of structure-adaptive self-organizing map for efficient
classification. Proc. of International Conference on
Soft Computing, 34-39, World-Scientific Publishing.
Lashkari, D., Derisi, J., McCusker, J., Namath, A.,
Gentile, C., Hwang, S., Brown, P., and Davis, R.
(1997): Yeast microarrays for genome wide parallel
genetic and gene expression analysis. Proc. of the Natl.
Acad. of Sci. USA, 94:13057-13062.
Lippman, R. P. (1987): An introduction to computing
with neural nets. IEEE ASSP Magazine, 4-22.
Li, L., Weinberg, C. R., Darden, T. A. and Pedersen, L. G.
(2001): Gene selection for sample classification based
on gene expression data: study of sensitivity to choice
of parameters of the GA/KNN method. Bioinformatics,
17(12):1131-1142.
Li, W. and Yang, Y. (2000): How many genes are needed
for a discriminant microarray data analysis. Critical
Assessment of Techniques for Microarray Data Mining
Workshop.
Lossos, I. S., Alizadeh, A. A., Eisen, M. B., Chan, W. C.,
Brown, P. O., Bostein, D., Staudt, L. M., and Levy, R.
(2000): Ongoing immunoglobulin somatic mutation in
germinal center B cell-like but not in activated B
cell-like diffuse large cell lymphomas. Proc. of the Natl.
Acad. of Sci. USA, 97(18):10209-10213.
Moghaddam, B. and Yang, M.-H. (2000): Gender
classification with support vector machines. Proc. of
4th IEEE Intl. Conf. on Automatic Face and Gesture
Recognition, 306-311.
Nguyen, D. V. and Rocke, D. M. (2002): Tumor
classification by partial least squares using microarray
gene expression data. Bioinformatics, 18(1):39-50.
Quinlan, J. R. (1986): The effect of noise on concept
learning. Machine Learning: An Artificial Intelligence
Approach, Michalski, R. S., Carbonell, J. G. and
Mitchell, T. M. (eds). San Mateo, CA: Morgan
Kauffmann, 2:149-166.
Ryu, J. and Cho, S. B. (2002): Towards optimal feature
and classifier for gene expression classification of
cancer. Lecture Note in Artificial Intelligence,
2275:310-317.
Shamir, R. and Sharan, R. (2001): Algorithmic
approaches to clustering gene expression data. Current
Topics in Computational Biology. In Jiang, T., Smith,
T., Xu, Y. and Zhang, M. Q. (eds), MIT press.
Sharan, R. and Shamir, R. (2000): CLICK: A clustering algorithm with applications to gene expression analysis. Proc. of
the Eighth International Conference in Computational
Molecular Biology (ISBM), 307-316.
Tamayo, P. (1999): Interpreting patterns of gene
expression with self-organizing map: Methods and
application to hematopoietic differentiation. Proc. of
the National Academy of Sciences of the United States
of America, 96: 2907-2912.
Thieffry, D. and Thomas, R. (1998): Qualitative analysis
of gene networks. Pacific Symposium on Biocomputing,
3:66-76.
Vapnik, V. N. (1995): The Nature of Statistical Learning
Theory, New York: Springer.
Xu, Y., Selaru, M., Yin, J., Zou, T. T., Shustova, V., Mori,
Y., Sato, F., Liu, T. C., Olaru, A., Wang, S., Kimos, M.
C., Perry, K., Desai, K., Greenwood, B. D., Krasna, M.
J., Shibata, D., Abraham, J. M. and Meltzer, S. J.
(2002): Artificial neural networks and gene filtering
distinguish between global gene expression profiles of
Barrett's esophagus and esophageal cancer. Cancer
Research, 62:3493-3497.
| classification;MLP;SASOM;gene expression profile;SVM;KNN;Biological data mining;ensemble classifier;feature selection |
134 | Machine Learning in Low-level Microarray Analysis | Machine learning and data mining have found a multitude of successful applications in microarray analysis, with gene clustering and classification of tissue samples being widely cited examples. Low-level microarray analysis often associated with the pre-processing stage within the microarray life-cycle has increasingly become an area of active research, traditionally involving techniques from classical statistics. This paper explores opportunities for the application of machine learning and data mining methods to several important low-level microarray analysis problems: monitoring gene expression, transcript discovery, genotyping and resequencing . Relevant methods and ideas from the machine learning community include semi-supervised learning, learning from heterogeneous data, and incremental learning. | INTRODUCTION
DNA microarrays have revolutionized biological research over
the short time since their inception [2; 27; 28; 29]. Although
most widely used for parallel measurement of gene
expression [27; 28], microarrays are starting to find common
application in other areas of genomics and transcriptomics,
including genomic re-sequencing [30; 31], genotyping [32;
33], and transcript discovery [34].
Research labs armed with microarrays have been able to
partake in a range of studies, including finding gene function
[35; 36; 37]; correcting mistaken database annotations
[36; 7]; performing linkage analyses; determining specific
genes involved in biological pathways; identifying genes that
are important at certain times of development (or that are
turned on/off over a course of treatment); elucidating gene
regulatory networks [13]; diagnosing disease in tissue sam-Figure
1: The relationship between low-level and high-level
microarray analysis.
ples [38; 39; 40; 41];
tioners' misdiagnoses [38]. The common thread among these
high-level microarray analysis problems is that they answer
sophisticated questions of direct biological interest to medical
researchers (such as "which genes are being co-expressed
under treatment X?"), where the raw data used are estimates
of biologically meaningful parameters (such as the
expression level estimates for thousands of genes).
In contrast to these so-called high-level problems, low-level
microarray analysis [19] is concerned with the preceding step
in the microarray assay cycle (Figure 1) given raw data
straight from a scanner which has no direct biological interpretation
, clean and summarize this data to produce the
biologically meaningful parameter estimates (such as expression
level estimates) that are later used in high-level analyses
.
In low-level analysis, more consideration is generally given to
the behavior of the underlying molecular biology, microarray
technology, and experimental design than in high-level analysis
. This makes generative methods readily applicable in
low-level problems, facilitating the formulation of confidence
statements such as p-values in gene expression calls. Hence,
while high-level problems have been tackled with discriminative
approaches, such as those found in machine learning and
data mining, in addition to classical statistical methods, the
low-level analysis community has traditionally called upon
only the latter.
In this paper we argue that low-level microarray analysis
poses a number of interesting problems for the data mining
and machine learning community, distinct to the traditional
high-level microarray problems. These problems are relevant
to the long-term success of DNA microarrays and are
already topics of active research in the low-level microarray
analysis community. It is our hope that this position paper
motivates and enables further machine learning research in
the area. Although we will focus on high density oligonucleotide
microarrays, particularly those of the Affymetrix
GeneChip variety, the underlying concepts and opportunities
remain the same for related technologies. Throughout
the paper, we distinguish machine learning from statistics.
While these disciplines are closely related and serve as foundations
for inference in microarray analysis, the distinction
does have content. In our view, classical statistics is generative
, dealing with relatively low-dimensional data and parameter
spaces, while machine learning is often discriminative
in nature and explicitly addresses computational issues
in high-dimensional data analysis.
Section 2 reviews relevant background ideas from machine
learning. For an overview of the background molecular biology
and microarray technology, see the guest editorial elsewhere
in this issue. The low-level problems of absolute and
differential expression level summarization, expression detection
, and transcript discovery are reviewed in Section 3,
along with suggested applications of machine learning approaches
to these problems. Sections 4 and 5 similarly cover
microarray-based genotyping and re-sequencing.
Finally,
Section 6 concludes the paper.
BACKGROUND MACHINE LEARNING
We assume familiarity with the notions of unsupervised learning
(clustering) and supervised learning (classification and
regression). As many of the low-level analysis problems discussed
below are amenable to learning from partially labeled
data, learning from heterogeneous data, and incremental
learning, we briefly review these paradigms here.
2.1 Learning from Partially Labeled Data
Given an i.i.d. labeled sample $\{(x_i, y_i)\}_{i=1}^{n}$ drawn from the unknown and fixed joint distribution $F(x, y)$, and an i.i.d. unlabeled sample $\{x_i\}_{i=n+1}^{m}$ drawn from the marginal distribution $F(x)$, the problem of learning from partially labeled data [22; 20] is to use the data in choosing a function $\hat{g}_m(X)$ approximating $E(Y|X)$, where $(X, Y) \sim F$. This problem
has been motivated by a number of applications where only
limited labeled data is present, say due to expense, while unlabeled
data is plentiful [16]. This is particularly the case in
the areas of text classification, medical research, and computer
vision [42], within which much of the research into
learning from partially labeled data has occurred.
This problem, also called the labeled-unlabeled data problem
[42], has been explored under a number of closely-related
guises. Some of the earliest approaches used so-called hybrid
learners [6], where an unsupervised learning algorithm assigns
labels to the unlabeled data, thereby expanding the labeled
dataset for subsequent supervised learning. The term
multimodal learning is sometimes used to refer to partially
labeled learning in the computer vision literature [17]. Co-training
is a form of partially labeled learning where the two
datasets may be of different types and one proceeds by using
the unlabeled data to bootstrap weak learners trained
on the labeled data [16].
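As a concrete, if simplistic, illustration of this self-labeling idea (not the exact procedure of any cited method), the sketch below trains a base classifier on the labeled sample and repeatedly adds its most confident predictions on the unlabeled pool as pseudo-labels; the data, base model, and confidence threshold are all assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
        # Iteratively pseudo-label confident unlabeled points and retrain.
        X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            clf.fit(X, y)
            if len(pool) == 0:
                break
            proba = clf.predict_proba(pool)
            confident = proba.max(axis=1) >= threshold
            if not confident.any():
                break
            X = np.vstack([X, pool[confident]])
            y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
            pool = pool[~confident]
        return clf

    # Toy data: a handful of labeled points and a larger unlabeled pool.
    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(10, 5))
    y_lab = np.array([0, 1] * 5)
    X_unlab = rng.normal(size=(200, 5))
    model = self_train(X_lab, y_lab, X_unlab)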
More recently, semi-supervised learning [25] and transductive
learning [26] have gained popularity. Equivalent to partially
labeled learning, semi-supervised learning includes a
number of successful algorithms, such as those based on the
support vector machine (SVM) [25; 8]. Transductive learners
, on the other hand, aim to predict labels for just the
unlabeled data at hand, without producing the inductive approximation $\hat{g}_m$. This approach can be used to generalize
the aforementioned hybrid learners, whose unsupervised
step typically ignores the labeled data. In particular
, it is shown in [26] that direct transduction is more effective
than the traditional two-step approach of induction
followed by deduction. A number of transductive schemes
have been proposed, such as those based on the SVM [4; 25],
a graph-based transductive learner [9], and a leave-one-out
error ridge regression method [26]. Joachims [25] describes
an approximate solver for the semi-supervised SVM which
utilizes a fast SVM optimizer as an inner loop.
The story is not all good. [10] tells us that while unlabeled
data may be useful, labeled examples are exponentially more
valuable in a suitable sense. [43] tells us that unlabeled data
may lead the transductive SVM to maximize the wrong margin
, and in [42] it is shown that unlabeled data may in fact
degrade classifier performance under certain conditions relating
the risk and empirical risk. Nonetheless, learning from
partially labeled data has enjoyed great success in many theoretical
and empirical studies [16; 42; 44; 43].
We are especially interested in partially labeled learning as
an approach to the low-level microarray analysis problems
discussed in Sections 3-5, where we have relatively few labeled
examples but an abundant source of unlabeled data.
[45] is a recent example of partially labeled learning applied
to high-level microarray analysis.
There, the problem of
predicting gene function is tackled using a semi-supervised
scheme trained on a two-component dataset of DNA microarray
expression profiles and phylogenetic profiles from
whole-genome sequence comparisons. This leads us to the
next relevant idea from machine learning.
2.2 Learning from Heterogeneous Data
Learning from heterogeneous data is the process of learning
from training data, labeled or not, that can be partitioned
into subsets, each of which contains a different type of data
structure or originates from a different source. This notion
is equivalent to the methods of data fusion [5].
Research into learning from heterogeneous data tends to
be quite domain-specific and has enjoyed increasing interest
from the bioinformatics community in particular (e.g., [18]).
[46] presents a kernel-based framework for learning from heterogeneous
descriptions of a collection of genes, proteins or
other entities. The authors demonstrate the method's superiority
to the homogeneous case on the problem of predicting
yeast protein function using knowledge of amino acid
sequence, protein complex data, gene expression data, and
known protein-protein interactions.
[37] proposes an SVM method for classifying gene function
from microarray expression estimates and phylogenetic profiles
. This is achieved through the construction of an explicitly
heterogeneous kernel: first separate kernels are constructed
for each data type, taking into account high-order
within-type correlations, then these kernels are combined,
ignoring high-order across-type correlations.
Our interest in learning from heterogeneous data arises because
several sources of knowledge relevant to low-level microarray
analysis are available, and incorporating such problem
domain knowledge has been shown to improve the performance
of learning algorithms in the past.
2.3
Incremental Learning
Incremental learning is focused on learning from data presented
sequentially, where the model may be required to
make predictions on unseen data during training. This is in
contrast to cases where all training occurs before any predictions
are made (batch learning ), and is similar to online
learning [24].
A number of incremental learning algorithms have been proposed
and applied in the literature. For example, several
incremental support vector machines have been studied [24;
21; 47]. In [48], incremental learning is applied to distributed
video surveillance. SVM algorithm parameter selection is
investigated in [47]. [21] applies an incremental SVM to detecting
concept drift (the problem of varying distributions
over long periods of data gathering) and to adaptive classification
of documents with respect to user interest. An exact
incremental SVM is proposed in [24], where decremental unlearning
of incremental training data is possible. This can
be used to efficiently evaluate the computationally-expensive
leave-one-out error measure.
Due to the relatively small sizes of datasets typically available
in low-level microarray analysis, there is great potential
for learners that can incrementally incorporate new data
gathered in the lab, thereby improving estimator performance
specific to that lab's patterns of microarray assay.
EXPRESSION ANALYSIS
The most successful application of DNA microarray technology
to date has been to gene expression analysis. Traditionally,
this has involved estimating gene expression levels
(Section 3.1), an area that is being addressed through
successful statistical methods and active statistics research.
However, the task of determining transcription activity over
entire chromosomes (Section 3.2) is less well developed and
offers serious opportunities for machine learning.
3.1
Gene Expression Monitoring
3.1.1
The Problem
Traditional microarrays measure mRNA target abundance
using the scanned intensities of fluorescence from tagged
molecules hybridized to substrate-attached probes [29]. The
brighter the intensity within a cell of identical probes, the
more hybridization there has been to those probes (Figure
2a). The scanned intensity, then, roughly corresponds
to target abundance.
Since probes are limited in length while targets may be thousands
of bases long, the GeneChip uses a set of probes to
detect each target nucleic acid.
Figure 2: Probe-level features for expression level summarization:
(a) a cell of probes; (b) target transcript, perfect
match probe and mismatch probe sequences; and (c)
scanned and image-analyzed probe-level intensities.
The probes are spread out
along a 600 base pair region close to the 3' end of the transcript.
To measure the effects of cross-hybridization, or unintended
hybridization of target A to the probes intended for
target B, a system of probe pairs is used. In each pair, a perfect
match (PM) probe contains the target's exact complementary
sequence, while a mismatch (MM) probe replaces
the middle base of the perfect match probe with its Watson-Crick
complement. In this way, a target is probed by a probe
set of 11-20 PM-MM probe pairs. The aim is roughly for
the PMs to measure signal plus noise and for the MMs to
measure just noise, so that the signal is revealed using some
function of (PM - MM). Figure 2b depicts the probe set arrangement
, while Figure 2c gives an example of the scanned
intensities. We may now define the expression level summarization
problem.
Low-level Problem 1. Given a probe set's intensities (possibly
after background correction and normalization), the
expression level summarization problem is to estimate the
amount of target transcript present in the sample.
While the expression level summary aims to estimate gene
expression level from the features of Figure 2, expression
detection is concerned with determining the presence of any
gene expression at all.
Low-level Problem 2. Given a probe set's intensities, possibly
normalized, the expression detection problem is to predict
whether the target transcript is present (P) or absent
(A) in the sample, or otherwise call marginal (M) if it is too
difficult to tell. In addition to the P/M/A detection call, we
wish to state a confidence level in the call, such as a p-value.
Detection calls are not as widely utilized as expression level
estimates. They are often used, for example, to filter out
genes with negligible expression before performing computationally
-expensive high-level analyses, such as clustering on
gene expression profiles.
The previous two problems dealt with estimates based on a
single probe-set read from a single array. Comparative studies
, on the other hand, involve assaying two arrays, one the
baseline and the other the experiment, followed by computation
of a single comparative estimate.
Low-level Problem 3. Given two sets of intensities, possibly
normalized, for the same probe set on two arrays:
a. The differential expression level summarization problem
is to estimate the relative abundance of target transcript
on each array.
b. The comparison call problem is to predict whether the
expression of the target has increased, not changed, or
decreased from one chip to the other. As in Low-level
Problem 2, a statement of confidence in the call should
be supplied.
The log-ratio of expression levels for a target is sometimes
known as the relative expression level [3] and is closely related
to the notion of fold change (which is sign(log-ratio) · 2^|log-ratio|).
Comparison calls are sometimes referred to as
change calls. An advantage of working with these comparative
estimates is that probe-specific affinities (one cause
of undesired variation) are approximately cancelled out by
taking ratios [3].
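As a tiny worked example of these comparative quantities (made-up intensity values; Python is used here purely as notation):

```python
import math

# Hypothetical summarized expression levels for one target on two arrays.
baseline, experiment = 150.0, 600.0

log_ratio = math.log2(experiment / baseline)                  # relative expression level
fold_change = math.copysign(2 ** abs(log_ratio), log_ratio)   # sign(log-ratio) * 2^|log-ratio|

print(log_ratio, fold_change)   # 2.0 4.0; swapping the two arrays would give -2.0 and -4.0
```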
All of these problems are complicated by exogenous sources
of variation which cloud the quantities we are interested in.
[49] proposes a breakdown of the sources of variation in microarray
experiments into intrinsic noise (variation inherent
in the experiment's subjects), intermediate noise (arising
for example from laboratory procedures), and measurement
error (variation due to the instrumentation, such as array
manufacture, scanning, or in silico processing).
3.1.2
Current Approaches
At the level of microarray design, sophisticated probe modeling
and combinatorial techniques are used to reduce probe-specific
effects and cross-hybridization. However, much of
the unwanted variation identified above must still be tackled
during low-level analysis. This means that care must
be taken with the relevant statistical issues. For example,
in experimental design, we must trade off between biological
replicates (across samples) and technical replicates (one
sample across chips). Background correction and normalization
, for reducing systematic variation within and across
replicate arrays, also surface as major considerations [19;
11].
Three popular approaches to Low-level Problem 1 [11] are
the Affymetrix microarray suite (MAS) 5.0 signal measure
[14; 3; 1], the robust multi-array average (RMA) [50; 11]
and the model-based expression index (MBEI) [51].
MAS5 first performs background correction by subtracting a
background estimate for each cell, computed by partitioning
the array into rectangular zones and setting the background
of each zone to that zone's second-percentile intensity. Next
MAS5 subtracts an "ideal mismatch value" from each PM
intensity and log-transforms the adjusted PMs to stabilize
the variance. A robust mean is computed for the resulting
values using a biweight estimator, and finally this value is
scaled using a trimmed mean to produce the signal estimate.
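A rough sketch of that pipeline for a single probe set is given below; it is only illustrative (a single background value instead of MAS5's zone-based estimate, a plain median instead of the one-step Tukey biweight, and a simplified ideal-mismatch rule), so the helper name and constants are ours rather than Affymetrix's.

```python
import numpy as np

def mas5_like_signal(pm, mm, background):
    """Very rough MAS5-style signal for one probe set (a sketch, not the vendor algorithm).

    pm, mm     : perfect-match / mismatch intensities for the probe pairs of one set
    background : background estimate (MAS5 uses the 2nd percentile of a zone's intensities)
    """
    pm = np.maximum(np.asarray(pm, float) - background, 0.5)   # background-correct, floor at 0.5
    mm = np.maximum(np.asarray(mm, float) - background, 0.5)

    # "Ideal mismatch": keep MM where it is informative (below PM), otherwise shrink it
    # so that the adjusted PM stays positive; the real rule is more elaborate.
    im = np.where(mm < pm, mm, 0.5 * pm)
    values = np.log2(pm - im)          # log transform to stabilize the variance

    return np.median(values)           # stand-in for MAS5's one-step Tukey biweight mean

# Across the whole array, MAS5 then anti-logs each probe set's signal and rescales all
# signals with a trimmed mean so that arrays can be compared.
```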
RMA proceeds by first performing quantile normalization
[52], which puts the probe intensity distributions across replicate
arrays on the same scale.
Figure 3: An ROC curve: (0, 0) and (1, 1) correspond to
the "always negative" and "always positive" classifiers respectively.
The closer to the ideal point (0, 1) the better.
Neither of the two families A or B dominates the other. Instead,
one or the other is better according to the desired
trade-off between FP and TP.
RMA then models the PMs
as background plus signal, where the signal is exponentially
and the background normally distributed; MM intensities
are not used in RMA. A robust additive model is used to
model the PM signal (in log-space) as the sum of the log
scale expression level, a probe affinity effect, and an i.i.d.
error term. Finally, median polish estimates the model parameters
and produces the log-scale expression level summary.
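For concreteness, minimal sketches of the two RMA building blocks named above, quantile normalization and median polish, are shown below; RMA's exponential-plus-normal background correction is omitted and the function names are our own.

```python
import numpy as np

def quantile_normalize(X):
    """Give every array (column of X) the same intensity distribution [52]."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)     # rank of each value within its column
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)      # average of the k-th smallest values
    return mean_quantiles[ranks]

def median_polish(Y, n_iter=10):
    """Fit Y[i, j] ~ overall + row[i] + col[j] by alternately sweeping out medians."""
    resid = np.asarray(Y, float).copy()
    overall, row, col = 0.0, np.zeros(Y.shape[0]), np.zeros(Y.shape[1])
    for _ in range(n_iter):
        r = np.median(resid, axis=1); resid -= r[:, None]; row += r
        m = np.median(col); col -= m; overall += m
        c = np.median(resid, axis=0); resid -= c[None, :]; col += c
        m = np.median(row); row -= m; overall += m
    return overall, row, col, resid
```

In RMA the matrix would hold background-corrected, quantile-normalized log2 PM values for one probe set (probes by arrays); the per-array effects plus the overall term then serve as the log-scale expression summaries.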
MBEI fits PM_ij − MM_ij = θ_i φ_j + ε_ij, using maximum likelihood
to estimate the per-gene expression levels θ_i. Here the φ_j are
probe-specific affinities and the ε_ij are i.i.d. normal errors.
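A minimal alternating-least-squares sketch of fitting this multiplicative model is shown below; the published MBEI fit uses maximum likelihood with outlier handling, and the identifiability constraint chosen here (sum of squared affinities equal to the number of probes) is one common convention rather than necessarily theirs.

```python
import numpy as np

def fit_mbei_like(pm, mm, n_iter=50):
    """Fit PM[i, j] - MM[i, j] ~ theta[i] * phi[j] for arrays i and probes j (a sketch)."""
    y = np.asarray(pm, float) - np.asarray(mm, float)
    n_arrays, n_probes = y.shape
    theta = y.mean(axis=1)                               # crude start: mean difference per array
    phi = np.ones(n_probes)
    for _ in range(n_iter):
        phi = y.T @ theta / (theta @ theta)              # least-squares update of probe affinities
        phi *= np.sqrt(n_probes) / np.linalg.norm(phi)   # constraint: sum(phi**2) == n_probes
        theta = y @ phi / (phi @ phi)                    # least-squares update of expression levels
    return theta, phi
```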
Although it may seem that expression detection is just a
matter of thresholding expression level estimates, this has
proven not to be the case [53]. It is known that expression
level estimators often have difficulty at low levels of expression
, while detection algorithms are designed with this
setting in mind.
The most widely used detection algorithm for the GeneChip
is a method based on a Wilcoxon signed-rank test [54; 3;
55]. This algorithm corresponds to a hypothesis test of
H0: median((PM_i − MM_i)/(PM_i + MM_i)) = τ versus
H1: median((PM_i − MM_i)/(PM_i + MM_i)) > τ,
where τ is a small positive constant. These hypotheses
correspond to absence and presence of expression, respectively.
The test is conducted using a p-value for a sum of
signed ranks R_i = (PM_i − MM_i)/(PM_i + MM_i) − τ. The p-value is thresholded
so that values in [0, α1), [α1, α2), and [α2, 1] result
in present, marginal, and absent calls, respectively. Here
0 < α1 < α2 < 0.5 control the trade-off between false positives
(FP) and true positives (TP).
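A hedged sketch of such a detection call, using SciPy's signed-rank test, is given below; constants of this order are used in practice, but the exact defaults and p-value conventions of the vendor implementation may differ.

```python
import numpy as np
from scipy.stats import wilcoxon

def detection_call(pm, mm, tau=0.015, alpha1=0.04, alpha2=0.06):
    """P/M/A call for one probe set from its PM/MM intensities (illustrative constants)."""
    pm, mm = np.asarray(pm, float), np.asarray(mm, float)
    scores = (pm - mm) / (pm + mm) - tau                   # discrimination scores R_i minus tau
    p = wilcoxon(scores, alternative='greater').pvalue     # one-sided signed-rank p-value
    call = 'P' if p < alpha1 else ('M' if p < alpha2 else 'A')
    return call, p
```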
Recently, a number of alternate rank sum-based algorithms
have been proposed [53]. One in particular, a variant on
the MAS5 method where scores are set to R_i = log(PM_i / MM_i),
has been shown to outperform MAS5 detection in a range
of real-world situations.
One aspect of the study in [53]
of particular interest is the use of the Receiver Operating
Characteristic (ROC) Convex Hull method [56] for comparing
competing classifiers on a spike-in test set.
ROC curves (see Figure 3) characterize the classification
performance of a family of classifiers parameterized by a tunable
parameter that controls the FP-TP trade-off. For example
, as the level of a hypothesis test is decreased, the rate
of false positive rejections decreases (by definition), while
the rate of false negative acceptances will typically go up.
An ROC curve encodes this trade-off, extending the notion
of contingency table to an entire curve. It is a more expressive
object than accuracy, which boils performance down to
one number [56; 57].
Comparing ROC curves has traditionally been achieved by
either choosing the "clear winner" (in the rare case of domination
[57]), or choosing the maximizer of the Area Under
Curve (AUC). Although AUC works in some cases, it gives
equal credit to performance over all misclassification cost
and class size settings usually an undesirable strategy if
any domain knowledge is available. The ROC Convex Hull
method, on the other hand, relates expected-cost optimality
to conditions on relative misclassification cost and class size,
so that the typical case of semi-dominance (as in Figure 3)
can be handled in a principled way rather than selecting
p-value thresholds by hand, end-users are provided with the
right classifier and thresholds by the method. This use of
the ROCCH method demonstrates a surprising application
of machine learning to low-level microarray analysis.
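The sketch below illustrates the ROCCH idea on a set of (FP, TP) operating points: keep only the upper convex hull and then pick the hull point minimizing expected cost under given class priors and misclassification costs. The function names and the simple hull scan are ours.

```python
def roc_convex_hull(points):
    """Upper convex hull of ROC points, always anchored at (0, 0) and (1, 1)."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})   # (FP, TP) tuples, by increasing FP
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()       # hull[-1] lies on or below the chord hull[-2] -> p: drop it
            else:
                break
        hull.append(p)
    return hull

def best_operating_point(points, p_pos, cost_fn, cost_fp):
    """Hull point minimizing expected cost = P(+)*c_fn*(1-TP) + P(-)*c_fp*FP."""
    cost = lambda fp, tp: p_pos * cost_fn * (1 - tp) + (1 - p_pos) * cost_fp * fp
    return min(roc_convex_hull(points), key=lambda q: cost(*q))
```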
Many of these absolute expression algorithms have their
comparative analogues. For example, MAS5 produces the
signal log ratio with an associated confidence interval, using
a biweight algorithm [14; 3]. MAS5 also implements a
comparison call based on the Wilcoxon signed-rank sum test,
just as in the absolute MAS5 detection algorithm above [55].
While the Affymetrix microarray suite is the software package
bundled with the GeneChip, the Bioconductor project
[15], an open-source set of R [12] packages for bioinformatics
data analysis, has been gaining popularity and implements
most of the methods discussed here.
3.1.3
Open Problems
While Low-level Problem 1 involves prediction of continuous
expression levels (non-negative real values) given a vector of
(non-negative real) perfect match and mismatch intensities,
with total length between 22 and 40, Low-level Problem 2 is
a 3-class classification problem with call confidence levels.
Open Problem 1. In the respective settings of Low-level
Problems 1–3:
a. What machine learning techniques are competitive with
algorithms based on classical statistical methods for expression
level estimation?
b. Which machine learning classifiers are competitive for expression
detection?
c. What machine learning methods achieve high performance
on the comparative analogues of the previous two problems
, posed on the appropriate product space of microarray
measurements?
Comparisons for expression level estimators might be made
based on bias and variance, computational efficiency, and biological
relevance of learned models. The ROCCH method
is ideal for detector comparison. Issues of background correction
and normalization across multiple arrays must likely
also be addressed to enable competitiveness with the state
of the art.
Research into applying semi-supervised, heterogeneous data
and incremental learners to gene expression monitoring is directly
motivated by the proportion of labeled to unlabeled
data available, the existence of GeneChip domain knowledge
, and the endemic nature of microarray assays that are
continually performed in individual research labs. Biologists
could augment the limited labeled probe-level data available
with relatively abundant unlabeled data. Labeled data can
be procured, for example, from bacterial control experiments
with known concentrations, called spike-in assays, and bacterial
control probe sets that are present in some GeneChips
for calibration purposes. The former source of labeled data
is the more useful for this problem, as it provides examples
with a range of labels. Unfortunately, spike-in studies are
rare because they are not of independent scientific interest:
they are only performed for low-level microarray research.
For the few spike-in assays that are available, only a small
number of targets are spiked in at an equally small number of
concentrations (typically 10). Unlabeled data, in contrast,
could be taken from the large collection of available biologically
relevant assays; each one providing tens of thousands
of data points. Beyond probe intensities, other data sources
could include probe sequences and probe-affinity information
derived from probe models. Such information is closely
related to the hybridization process and might be of use
in expression level estimation: both target and non-specific
hybridization are known to be probe-dependent. Although
labeled data from spike-in studies are of greatest utility for
learning [10], the quantity of unlabeled data produced by
a series of biologically interesting microarray assays in any
given lab suggests a semi-supervised incremental approach.
Since the ROCCH involves taking a pointwise maximum
over the individual noisy ROC curves, it incorporates a possibly
large degree of uncertainty. It should be possible to
extend the results of [53] to quantify this property.
Open Problem 2. Can the ROC Convex Hull method
of [56] be extended to provide confidence intervals for its
conditions on expected-cost optimality?
3.2
Transcript Discovery
3.2.1
The Problem
The applications to expression monitoring described above
are all related to addressing questions about pre-defined
transcripts.
More precisely, the vast majority of expression
analysis is performed using probes interrogating only
a small sub-sequence of each transcript. This has clearly
been a useful approach, but there are at least two potential
drawbacks. One is that we can only monitor the expression
of genes known to exist at the time of the array's design.
Even in a genome as well-studied as that of the human, new
transcripts are routinely discovered. Another is that in directly
monitoring only a sub-sequence of the transcript, it
will often be impossible to distinguish between alternatively
spliced forms of the same gene (which may have very different
functional roles).
An alternative approach is to use arrays with probes tiled
uniformly across genomic sequence, without regard to current
knowledge of transcription.
Such genome tiling arrays
have been used to monitor expression in all the non-repetitive
sequence of human chromosomes 21 and 22 [34],
and more widespread use is underway.
The problems arising in the analysis of data from genome
tiling arrays are essentially the same as those for the expression
monitoring arrays described above: estimation of
expression level, detection of presence, and detection of differential
expression. There is, however, the additional challenge
of determining the number of distinct transcripts and
their location within the tiled genomic region.
Low-level Problem 4. The problem of transcript discovery
can be viewed in two steps:
a. Determining the exon structure of genes within a tiled
region; and
b. Determining which exons should be classified together as
part of a single gene's transcript.
3.2.2
Current Approaches
A simple heuristic approach is taken in [34], in which PM-MM
probe pairs are classified as positive or negative based
on thresholds applied to the difference and ratio of the PM
and MM values. Positions classified as positive and located
close to other positive positions are grouped together to form
predicted exons.
A more effective approach [58] is based on the application of
a Wilcoxon signed-rank test in a sliding window along the
genomic sequence, using the associated Hodges-Lehmann estimator
for estimation of expression level. Grouping into
exons is achieved by thresholding on present call p-values or
estimated expression level, then defining groups of probes
exceeding the threshold to be exons.
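A schematic version of this windowed approach (a signed-rank presence call in a sliding window, then gap-based grouping into exons) is sketched below; the window size, τ-style offset and thresholds are placeholders rather than the values used in [58].

```python
import numpy as np
from scipy.stats import wilcoxon

def windowed_presence(pm, mm, positions, half_width=50, tau=0.015, alpha=0.01):
    """One-sided signed-rank presence call at each tiled position, pooling all
    probe pairs whose genomic position lies within +/- half_width bases."""
    pm, mm, positions = map(np.asarray, (pm, mm, positions))
    scores = (pm - mm) / (pm + mm) - tau
    calls = []
    for x in positions:
        win = scores[np.abs(positions - x) <= half_width]
        p = wilcoxon(win, alternative='greater').pvalue if win.size >= 3 else 1.0
        calls.append(p < alpha)
    return np.array(calls)

def group_into_exons(positions, calls, max_gap=100):
    """Merge nearby 'present' positions into predicted exons (start, end)."""
    exons, start, prev = [], None, None
    for x, present in zip(positions, calls):
        if present:
            if start is None or x - prev > max_gap:
                if start is not None:
                    exons.append((start, prev))
                start = x
            prev = x
    if start is not None:
        exons.append((start, prev))
    return exons
```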
3.2.3
Open Problems
The problem of detecting exons based on probe intensities
(Low-level Problem 4a) is very similar to the problem of
absolute expression detection (Low-level Problem 2). For
example, the exon detection method of [58] and the MAS5
expression detection algorithm [55] are both built around
the Wilcoxon signed-rank test. The problem of finding exons
has been addressed as described, but the methods are
heuristic and there is plenty of room for improvement. Associating
exons to form transcripts (Low-level Problem 4b) has
been addressed in a large experiment across almost 70 experimental
pairs using a heuristic correlation-based method;
again, this presents an opportunity for research into more
effective methods.
Open Problem 3. Are there machine learning methods
that are able to out-perform current classical statistical methods
in transcript discovery as defined in Low-Level Problem
4?
One possibility which appears well suited to the problem is
the use of hidden Markov models where the underlying un-observed
Markov chain is over states representing expressed
versus non-expressed sequence. The distribution of the observed
probe intensities would depend on the underlying hidden
state. Another possible approach, considering the success
which has been demonstrated in predicting genes from
sequence data alone, would also be to integrate array-derived
data with sequence information in prediction of transcripts.
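As a sketch of the first suggestion, here is a two-state Viterbi decoder with Gaussian emissions over log2 probe intensities; every parameter value is a placeholder that would need to be estimated (e.g. by EM) in practice.

```python
import numpy as np

def viterbi_expressed(obs, mu=(6.0, 9.0), sigma=(1.0, 1.0), stay=0.999):
    """Most likely background(0)/expressed(1) state path for log2 probe intensities."""
    obs = np.asarray(obs, float)
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    log_trans = np.log(np.array([[stay, 1 - stay], [1 - stay, stay]]))
    # log N(obs_t | mu_k, sigma_k) for every position t and state k
    ll = -0.5 * ((obs[:, None] - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
    score = np.log([0.5, 0.5]) + ll[0]
    back = np.zeros((len(obs), 2), dtype=int)
    for t in range(1, len(obs)):
        cand = score[:, None] + log_trans       # cand[i, j]: best score ending in j via i
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + ll[t]
    path = np.zeros(len(obs), dtype=int)
    path[-1] = score.argmax()
    for t in range(len(obs) - 1, 0, -1):        # backtrace
        path[t - 1] = back[t, path[t]]
    return path   # 1 marks positions assigned to the "expressed" state
```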
GENOTYPING
Descriptions of genome sequencing efforts such as the human
genome project often lend the impression that there
is a unique genomic sequence associated with each species.
This is a useful and approximately correct abstraction. But
in fact, any two individuals picked at random from a species
population will have differing nucleotides at a small fraction
of the corresponding positions in their genomes. Such single-nucleotide
polymorphisms, or SNPs, help form the basis of
genetically-determined variation across individuals. Biologists
estimate that about one position in 1,000 in the human
genome is a SNP. With over 3 billion bases of genomic DNA,
we see that SNPs number in the several millions. Although
there are other kinds of individual genomic variation, such
as insertions, deletions, and duplications of DNA segments,
our focus here is SNPs.
Further complicating the picture is the fact that humans are
diploid organisms--each person possesses two complete but
different copies of the human genome, one inherited from the
mother and one from the father. Now consider a polymorphic
position, or locus, at which two different bases occur
in the population, say G and T. These variants are called
the alleles at the locus, so in this case we are describing a
biallelic SNP. A given individual will have inherited either a
G or T in the paternal genome, and the same is true of the
maternal genome. Thus there are three possible genotypes,
or individual genetic signatures, at this SNP: they are de-noted
GG, TT, and GT. We do not distinguish the last case
from TG, since there is no inherent ordering of the paternal
and maternal genomes at a given polymorphic position.
We refer generically to the alleles of a biallelic SNP as A and
B. Biological evidence suggests that essentially all SNPs are
biallelic in humans. The genotyping problem, then, is to establish
an individual's genotype as AA, BB, or AB for as
many SNPs as possible in the human genome. The completion
of the human genome project means that one has
recourse to the full genomic sequence surrounding a SNP
to help solve the genotyping problem. Furthermore, various
large-scale public projects to locate SNPs and identify their
alleles exist, notably The SNP Consortium (TSC); the data
they generate may also be utilized for genotyping.
The major drawback to traditional genotyping protocols is
their lack of parallelism, with consequent expense in terms
of material and labor. In contrast, Kennedy et al. [33] describe
whole-genome sampling analysis (WGSA), which enables
massively parallel genotyping via genotyping microarrays
.
For the Affymetrix Mapping 10k Array, which genotypes
approximately 10,000 SNPs across the human genome, each
SNP actually has 56 corresponding probes, collectively termed
a miniblock. The miniblock has 7 probe quartets for the
SNP's flanking region on the forward strand and another 7
probe quartets for the reverse complement strand, so 4 × 7 × 2
yields 56 probes. Each probe quartet in turn corresponds
to a 25-mer in which the SNP is at one of 7 offsets from the
central position. The four probes within a probe quartet
differ in the base they put at the SNP: a perfect match to
the A allele, a perfect match to the B allele, and mismatches
for each.
Low-level Problem 5. Given a SNP's 56-vector of miniblock
probe intensities, the genotype calling problem is to predict
the individual's corresponding alleles as AA, BB or AB.
Write PM(A), PM(B), MM(A), and MM(B) for the probe
intensities within a quartet. We would then hope that an
AA individual has PM(A) > MM(A) but PM(B) ≈ MM(B),
for all probe quartets on both strands. For a BB individual,
we hope to find just the opposite effect, and an AB individual
should have both PM(A) > MM(A) and PM(B) >
MM(B). The mismatch probes in each quartet act as controls
, establishing the level of nonspecific hybridization for
their corresponding perfect match probes. The presence of
multiple probe quartets allows for the determination of genotype
even when one strand and/or some offsets do not yield
reliable hybridization, say for biochemical reasons.
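A deliberately naive sketch of turning that intuition into a call is shown below; production genotypers such as MPAM instead cluster relative allele signals across many samples, and the margin and voting constants here are arbitrary.

```python
def naive_genotype_call(quartets, margin=1.5, min_fraction=0.7):
    """Call AA/BB/AB (or None for a no-call) from miniblock probe quartets.

    quartets: iterable of (PM_A, MM_A, PM_B, MM_B) intensities, one tuple per quartet.
    """
    votes = []
    for pm_a, mm_a, pm_b, mm_b in quartets:
        a_present = pm_a > margin * mm_a          # does the A-allele probe light up?
        b_present = pm_b > margin * mm_b
        if a_present and b_present:
            votes.append('AB')
        elif a_present:
            votes.append('AA')
        elif b_present:
            votes.append('BB')
    if not votes:
        return None                               # no informative quartet: no-call
    call, count = max(((g, votes.count(g)) for g in set(votes)), key=lambda kv: kv[1])
    # require a clear majority among informative quartets, otherwise no-call
    return call if count / len(votes) >= min_fraction else None
```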
4.2
Current Approaches
Low-level Problem 5 is a three-class classification problem.
In many machine learning applications, the metric of interest
for competing classifiers is predictive accuracy, in this case
the probability of correctly genotyping a new individual's
SNP based on the miniblock vector. However, in the kinds
of genetic studies which take large numbers of genotypes as
input, there is usually an explicit requirement that genotype
predictions have a prespecified accuracy, often 99%.
To attain such accuracy, it is usually permissible for some
fraction of genotypable SNPs to be no-calls; that is, the classifier
can refuse to predict a genotype for some miniblocks.
When comparing genotypers, our interest therefore lies in
the trade-off between the rate of no-calls and the accuracy
attained on those SNPs which are called. For example, some
studies consider the punt rate, or lowest no-call rate which
yields a prespecified accuracy level on the called SNPs.
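Under our own naming, a small helper for this trade-off: rank calls by confidence and find the smallest no-call fraction at which the retained calls reach a target accuracy.

```python
import numpy as np

def punt_rate(confidences, correct, target=0.99):
    """Smallest no-call rate achieving `target` accuracy on the calls that are kept.

    confidences : per-SNP confidence scores (higher = more confident)
    correct     : boolean array, whether each call matches the reference genotype
    """
    order = np.argsort(-np.asarray(confidences))          # most confident calls first
    correct = np.asarray(correct, bool)[order]
    kept_accuracy = np.cumsum(correct) / np.arange(1, len(correct) + 1)
    good = np.nonzero(kept_accuracy >= target)[0]         # prefixes meeting the target
    n_keep = good[-1] + 1 if good.size else 0             # keep the largest such prefix
    return 1 - n_keep / len(correct)
```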
A simple unsupervised approach to training a genotyper
is to ignore available labels during training, instead using
these labels to subsequently assess the trade-off between accuracy
and no-call rate for the trained model. This is the
strategy pursued by MPAM (modified partitioning around
medoids) [59], the discriminative clustering genotyper used
for the Affymetrix 10k array. An alternative approach, using
a parametric generative model for the clustering, will
be described elsewhere.
It resembles ABACUS, a model
studied in the context of re-sequencing microarrays [31] (see
Section 5).
4.3
Open Problems
Open Problem 4. Are there machine learning methods
that are able to meet typical accuracy and punt-rate specifications
on the genotype calling problem?
In order to choose a genotyper using supervised learning,
we need labels (true genotypes) along with corresponding
miniblock reads from genotyping arrays. Unfortunately, there
is no large-scale set of publicly available genotypes. Instead
, one makes do with modestly-sized sets of genotypes
available commercially from companies using smaller-scale
techniques. Of course, no genotyping method is error-free,
so in practice one measures concordance with reference genotypes
.
If the concordance is high enough, the remaining
cases of disagreement between a candidate genotyper and
the reference genotypes can be resolved via the older labor-intensive
methods. The incomplete nature of reference genotype
data leads naturally to the setting of semi-supervised
learning. Rather than falling back to unsupervised methods
such as those described above, we may consider employing
more general semi-supervised learners as described in Section
2.1. Additionally, the methods of [23] could be used
to incorporate low-level physical parametric models of hybridization
into a kernel-based classifier.
RE-SEQUENCING
As explained in Section 4, within a single species genomic
sequence will vary slightly from one individual to the next.
While Low-level Problem 5 focuses on the determination of
genotype at a position known in advance to be polymorphic,
the problem described in this section concerns locating such
polymorphic sites in the first place.
The usual starting point is a newly-sequenced genome, such
as the recently-finished human genome. It is often the case
that, based on previous research, an investigator will be interested
in detailed study of variation in a particular genomic
region (say on the order of tens or hundreds of kilo-bases
) and wants to re-sequence this region in a large number
of individuals. Such re-sequencing allows for identification
of the small subset of polymorphic locations. Here
we consider the more recent challenges of microarray-based
re-sequencing of diploid genomic DNA.
A typical re-sequencing array uses eight probes to interrogate
each base of the monitored sequence. These eight
probes comprise two quartets, one for the forward strand
and one for the reverse. Each quartet is formed of 25-mer
probes perfectly complementary to the 25 bases of the reference
sequence centered on the interrogated base, but with
all four possible bases used at the central position.
Low-level Problem 6. The goal of the re-sequencing problem
is to start with a set of probe intensities and classify
each position as being one of A, C, G, T, AC, AG, AT, CG,
CT, GT, or N, where N represents a `no call' (due to sample
failure or ambiguous data).
The intuition is that for a homozygous position, one of the
four probes should be much brighter relative to the others
on each strand, and for a heterozygous position, two probes
corresponding to the two bases of a SNP should be brighter
on each strand. Of particular interest are positions in which
the called base is heterozygous, or homozygous and different
to the reference sequence, as such positions exhibit polymorphism
and are candidate positions for explaining phenotypic
differences between individuals.
At face value, this classification problem is much harder than
the genotyping problem. There are fewer probes to start
with (a miniblock of 8 rather than 40 or more) and more
categories (11 as opposed to 3 or 4) into which to classify.
5.2
Current Approaches
The most recent analysis of the kind of re-sequencing array
discussed here [31] is based on modeling pixel intensities
within each probe as independent random variables with
a common mean and variance. The model for a homozygous
base is that, on each strand, the probe corresponding to the base
has one mean and variance, and the other three probes have another.
The means and variance are estimated by maximum likelihood, and
the likelihood of the model is evaluated. The model for each of the six heterozygous
possibilities is similar, except two probes correspond to
each heterozygote model and the other two are background.
The likelihoods (overall and for each strand) are converted
to scores and, provided the maximum score exceeds some
threshold, the best-scoring model is chosen as the base call.
A number of other filters that deal with the signal absence,
signal saturation, sample failure, and so on are applied, as is
an iterative procedure to account for bias in the background
probes. This method, called ABACUS, was found to make
base calls at over 80% of all bases, with an estimated accuracy
in excess of 99% at the bases which were called.
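A toy version of this likelihood comparison, for a single strand's probe quartet with a shared, known variance, is sketched below; the real ABACUS model estimates means and variances by maximum likelihood per model, combines both strands, and applies the additional filters described above.

```python
import numpy as np
from itertools import combinations

BASES = "ACGT"

def call_base(quartet, sigma=1.0, threshold=3.0):
    """Pick the best homozygous/heterozygous model for one probe quartet (A, C, G, T order)."""
    y = np.asarray(quartet, float)

    def score(fg):                       # fg: indices of 'foreground' probes under the model
        bg = [i for i in range(4) if i not in fg]
        mu_fg, mu_bg = y[list(fg)].mean(), y[bg].mean()
        resid = np.concatenate([y[list(fg)] - mu_fg, y[bg] - mu_bg])
        return -0.5 * np.sum(resid ** 2) / sigma ** 2    # Gaussian log-likelihood up to a constant

    models = {b: score((i,)) for i, b in enumerate(BASES)}                 # 4 homozygous models
    models.update({BASES[i] + BASES[j]: score((i, j))
                   for i, j in combinations(range(4), 2)})                 # 6 heterozygous models
    best = max(models, key=models.get)
    second = max(v for k, v in models.items() if k != best)
    return best if models[best] - second >= threshold else "N"            # arbitrary no-call margin
```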
5.3
Open Problems
A good base-calling method for re-sequencing arrays already
exists in ABACUS, but there remains room for improvement
. A recent and improved implementation [60] of the
ABACUS method on a new genomic region found the overall
sequencing accuracy to be on the order of 99.998%, but
the accuracy on heterozygote calls to be about 96.7%. Biologists
would value highly an improvement in heterozygote
call accuracy.
Open Problem 5. Can a supervised learning method be
used to call bases in re-sequencing arrays with accuracy, in
particular heterozygote accuracy, in excess of the accuracies
achieved by the more classic statistical approaches used to
date?
Considering the ongoing efforts of SNP detection projects,
there is an abundance of labeled data available, so the problem
seems quite amenable to machine learning approaches.
As with the genotyping problem, it would be desirable to
have a measure of confidence associated with base calls. It
may also be useful to take into account the sequences of the
25-mer probes, as there are known sequence-specific effects
on the probe intensities.
CONCLUSIONS
We have described a variety of low-level problems in microarray
data analysis and suggested the applicability of
methods from several areas of machine learning. Some properties
of these problems which should be familiar to machine
learning researchers include high-dimensional observations
with complicated joint dependencies (probe intensities
), partially labeled data sets (expression levels, genotypes
), data from disparate domains (microarray assays,
probe sequences, phylogenetic information), and sequential
observations (ongoing experimental work at individual labs).
We pointed out the suitability of semi-supervised, heterogeneous
, and incremental learning in these settings. It is worth
remarking that analogous problems arise with other high-throughput
technologies, such as cDNA and long oligonucleotide
microarrays, mass spectrometry, and fluorescence-activated
cell sorting.
There are other issues in low-level analysis we did not cover.
Here we mention two of these. Image analysis is the problem
of going from raw pixel values in the scanned image of a
microarray to a set of pixel intensities for each feature placed
on the probe, and then to single-number probe intensities.
The surface of the GeneChip contains detectable grid points
which facilitate rotation and translation of the image to a
canonical alignment; subsequent mapping of each pixel to a
feature is semi- or fully automated and has not previously
raised major analysis issues. However, work is being done
on aggressive reduction of feature sizes to a scale where this
mapping procedure could become a central concern.
On the more theoretical side, probe models based on the
physics of polymer hybridization have recently been the focus
of considerable interest. These models reflect a significant
increase in the use of biological knowledge for estimating
target abundance and present an opportunity for application
of machine learning techniques which can exploit
parametric distributions in high-dimensional data analysis,
such as graphical models.
We close by observing that a fuller awareness of low-level
microarray analysis issues will also benefit machine learning
researchers involved with high-level problems: the inevitable
information reduction from earlier stage to later could well
conceal too much of what the unfiltered array data reveal
about the biological issue at hand. Familiarity with initial
normalization and analysis methods will allow the high-level
analyst to account for such a possibility when drawing scientific
conclusions.
ACKNOWLEDGMENTS
We thank Rafael Irizarry, Ben Bolstad, Francois Collin and
Ken Simpson for many useful discussions and collaboration
on low-level microarray analysis.
REFERENCES
[1] Affymetrix. Affymetrix Microarray Suite Guide. Affymetrix Inc., Santa Clara, CA, 2001. Version 5.0.
[2] M. Schena. DNA Microarrays: A Practical Approach.
Oxford University Press, 1999.
[3] Affymetrix. Statistical algorithms description document
. Whitepaper, Affymetrix Inc., Santa Clara, CA,
2002.
[4] A. Gammerman, V. Vovk, and V. Vapnik. Learning by
transduction. In Fourteenth Conference on Uncertainty
in Artificial Intelligence, pages 148155. Morgan Kaufmann
Publishers, 1998.
[5] P. K. Varshney. Scanning the issue: Special issue on
data fusion. Proceedings of the IEEE, 85:35, 1997.
[6] R. O. Duda and P. E. Hart. Pattern Classification and
Scene Analysis. John Wiley and Sons, New York, 1973.
[7] T. Gaasterland and S. Bekiranov. Making the most of
microarray data. Nature Genetics, 24:204206, 2000.
[8] K. P. Bennett and A. Demiriz. Semi-supervised support
vector machines. In Advances in Neural Information
Processing Systems 11, pages 368374, Cambridge,
MA, 1999. MIT Press.
[9] T. Joachims. Transductive learning via spectral graph
partitioning. In Proceedings of the International Conference
on Machine Learning (ICML), 2003.
[10] V. Castelli and T. Cover. On the exponential value of
labeled samples. Pattern Recognition Letters, 16:105
111, 1995.
[11] R. A. Irizarry. Science and Statistics: A Festschrift for Terry Speed, volume 40 of Lecture Notes–Monograph Series, chapter Measures of gene expression for Affymetrix high density oligonucleotide arrays, pages 391–402. Institute of Mathematical Statistics, 2003.
[12] R. Ihaka and R. Gentleman. R: A language for data
analysis and graphics. Journal of Computational and
Graphical Statistics, 5(3):299314, 1996.
[13] N. Friedman. Probabilistic models for identifying regulation
networks. Bioinformatics, 19:II57, October 2003.
[14] E. Hubbell, W. M. Liu, and R. Mei. Robust estimators
for expression analysis. Bioinformatics, 18:15851592,
2002.
[15] Bioconductor Core. An overview of projects in computing
for genomic analysis. Technical report, The Bioconductor
Project, 2002.
[16] A. Blum and T. Mitchell. Combining labeled and unlabeled
data with co-training. In Proceedings of the Workshop
on Computational Learning Theory. Morgan Kaufmann
Publishers, 1998.
[17] L. Wu, S. L. Oviatt, and P. R. Cohen. Multimodal integration - a statistical view. IEEE Transactions on Multimedia, 1:334–341, 1999.
[18] A. J. Hartemink and E. Segal. Joint learning from multiple
types of genomic data. In Proceedings of the Pacific
Symposium on Biocomputing 2004, 2004.
[19] G. K. Smyth, Y. H. Yang, and T. P. Speed. Functional Genomics: Methods and Protocols, volume 224 of Methods in Molecular Biology, chapter Statistical issues in cDNA microarray data analysis, pages 111–136. Humana Press, Totowa, NJ, 2003.
[20] A. Blum and S. Chawla. Learning from labeled and
unlabeled data using graph mincuts. In International
Conference on Machine Learning (ICML), 2001.
[21] R. Klinkenberg and T. Joachims. Detecting concept
drift with support vector machines. In P. Langley, editor
, Proceedings of ICML-00, 17th International Conference
on Machine Learning, pages 487494, Stanford,
CA, 2000. Morgan Kaufmann Publishers.
[22] M. Szummer and T. Jaakkola. Partially labeled classification
with Markov random walks. In Neural Information
Processing Systems (NIPS), 2001.
[23] T. S. Jaakkola and D. Haussler. Exploiting generative
models in discriminative classifiers. In Advances in Neural
Information Processing Systems 11: Proceedings of
the 1998 Conference, pages 487493. MIT Press, 1998.
[24] G. Cauwenberghs and T. Poggio. Incremental and
decremental support vector machine learning. In NIPS,
pages 409415, 2000.
[25] T. Joachims. Transductive inference for text classification
using support vector machines. In I. Bratko and
S. Dzeroski, editors, Proceedings of the 16th Annual
Conference on Machine Learning, pages 200209. Morgan
Kaufmann, 1999.
[26] O. Chapelle, V. Vapnik, and J. Weston. Advances
in Neural Information Processing Systems 12, chapter
Transductive inference for estimating values of functions
. MIT Press, 2000.
[27] M. Schena, D. Shalon, R. W. Davis, and P. O.
Brown. Quantitative monitoring of gene expression patterns
with a complementary DNA microarray. Science,
270:467470, 1995.
[28] D. J. Lockhart, H. Dong, M. C. Byrne, M. T. Follettie, M. V. Gallo, M. S. Chee, M. Mittmann, C. Wang, M. Kobayashi, H. Horton, and E. L. Brown. Expression monitoring by hybridization to high-density oligonucleotide arrays. Nature Biotechnology, 14:1675–1680, 1996.
[29] R. J. Lipshutz, S. P. A. Fodor, T. R. Gingeras, and
D. H. Lockhart. High density synthetic oligonucleotide
arrays. Nature Genetics, 21:2024, 1999. Supplement.
[30] J. B. Fan, D. Gehl, L. Hsie, K. Lindblad-Toh, J. P.
Laviolette, E. Robinson, R. Lipshutz, D. Wang, T. J.
Hudson, and D. Labuda. Assessing DNA sequence variations
in human ests in a phylogenetic context using
high-density oligonucleotide arrays. Genomics, 80:351
360, September 2002.
[31] D. J. Cutler, M. E. Zwick, M. M. Carrasquillo, C. T.
Yohn, K. P. Tobin, C. Kashuk, D. J. Mathews,
N. A. Shah, E. E. Eichler, J. A. Warrington, and
A. Chakravarti. High-throughput variation detection
and genotyping using microarrays. Genome Research,
11:19131925, November 2001.
[32] J. B. Fan, X. Chen, M. K. Halushka, A. Berno,
X. Huang, T. Ryder, R. J. Lipshutz, D. J. Lockhart
, and A. Chakravarti. Parallel genotyping of human
SNPs using generic high-density oligonucleotide tag arrays
. Genome Research, 10:853860, June 2000.
[33] G. C. Kennedy, H. Matsuzaki, D. Dong, W. Liu,
J. Huang, G. Liu, X. Su, M. Cao, W. Chen, J. Zhang,
W. Liu, G. Yang, X. Di, T. Ryder, Z. He, U. Surti,
M. S. Phillips, M. T. Boyce-Jacino, S. P. A. Fodor, and
K. W. Jones. Large-scale genotyping of complex DNA.
Nature Biotechnology, October 2003.
[34] P. Kapranov, S. E. Cawley, J. Drenkow, S. Bekiranov,
R. L. Strausberg, S. P. A. Fodor, and T. R. Gingeras.
Large-scale transcriptional activity in chromosomes 21
and 22. Science, 296:916919, 2002.
[35] M. Brown, W. N. Grundy, D. Lin, N. Cristianini,
C. Sugnet, M. Ares Jr, and D. Haussler. Support vector
machine classification of microarray gene expression
data. Technical Report UCSC-CRL-99-09, Department
of Computer Science, University of California at Santa
Cruz, 1999.
[36] M. P. S. Brown, W. N. Grundy, D. Lin, N. Cristianini,
C. W. Sugnet, T. S. Furey, M. Ares Jr, and D. Haussler.
Knowledge-based analysis of microarray gene expression
data by using support vector machines. Proceedings
of the National Academy of Sciences, 97:262267,
1997.
[37] P. Pavlidis, J. Weston, J. Cai, and W. N. Grundy.
Gene functional classification from heterogeneous data.
In Proceedings of the Fifth International Conference on
Computational Molecular Biology, pages 242248, 2001.
[38] T. S. Furey, N. Cristianini, N. Duffy, D. W. Bednarski,
M. Schummer, and D. Haussler. Support vector machine
classification and validation of cancer tissue samples
using microarray expression data. Bioinformatics,
16:906914, 2000.
[39] S. Mukherjee,
P. Tamayo,
D. Slonim,
A. Verri,
T. Golub, J. P. Mesirov, and T. Poggio. Support vector
machine classification of microarray data. Technical
Report 182, Center for Biological and Computational
Learning Massachusetts Institute of Technology, 1998.
[40] S. Ramaswamy, P. Tamayo, R. Rifkin, S. Mukherjee,
C. Yeang, M. Angelo, C. Ladd, M. Reich, E. Latulippe,
J. P. Mesirov, T. Poggio, W. Gerald, M. Loda, E. S.
Lander, and T. R. Golub. Multiclass cancer diagnosis
using tumor gene expression signatures. Proceedings of
the National Academy of Sciences, 98, 2001.
[41] C. Yeang, S. Ramaswamy, P. Tamayo, S. Mukherjee
, R. M. Rifkin, M. Angelo, M. Reich, E. Lander,
J. Mesirov, and T. Golub. Molecular classification of
multiple tumor types. Bioinformatics, 1:17, 2001.
[42] F. G. Cozman and I. Cohen. Unlabeled data can degrade
classification performance of generative classifiers
. In Fifteenth International Florida Artificial Intelligence
Society Conference, pages 327331, 2002.
[43] T. Zhang and F. J. Oles. A probability analysis on
the value of unlabeled data for classification problems.
In Proceedings of the International Conference on Machine
Learning, pages 11911198, 2000.
[44] K. Nigam, A. McCallum, S. Thrun, and T. Mitchell.
Text classification from labeled and unlabeled documents
using EM. Machine Learning, 39:103134, 2000.
[45] T. Li, S. Zhu, Q. Li, and M. Ogihara. Gene functional
classification by semi-supervised learning from heterogeneous
data. In Proceedings of the ACM Symposium
on Applied Computing, 2003.
[46] G. R. G. Lanckriet, M. Deng, N. Cristianini, M. I. Jordan
, and W. S. Noble. Kernel-based data fusion and its
application to protein function prediction in yeast. In
Proceedings of the Pacific Symposium on Biocomputing
2004, 2004.
[47] A. Shilton, M. Palaniswami, D. Ralph, and A. C. Tsoi.
Incremental training in support vector machines. In
Proceedings of the International Joint Conference on
Neural Networks, 2001.
[48] C. P. Diehl. Toward Efficient Collaborative Classification
for Distributed Video Surveillance. PhD thesis,
Department of Electrical and Computer Engineering,
Carnegie Mellon University, 2000.
[49] J. H. Maindonald, Y. E. Pittelkow, and S. R. Wilson. Science and Statistics: A Festschrift for Terry Speed, volume 40 of IMS Lecture Notes–Monograph Series, chapter Some Considerations for the Design of Microarray Experiments, pages 367–390. Institute of Mathematical Statistics, 2003.
[50] R. A. Irizarry, B. Hobbs, F. Collin, Y. D. Beazer-Barclay, K. J. Antonellis, U. Scherf, and T. P. Speed. Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics, 4:249–264, 2003.
[51] C. Li and W. H. Wong. Model-based analysis of oligonucleotide
arrays: Expression index computation and outlier
detection. Proceedings of the National Academy of
Science, 98:3136, 2001.
[52] B. M. Bolstad, R. A. Irizarry, M. Astrand, and T. P.
Speed. A comparison of normalization methods for high
density oligonucleotide array data based on bias and
variance. Bioinformatics, 19:185193, 2003.
[53] B. I. P. Rubinstein and T. P. Speed. Detecting
gene expression with oligonucleotide microarrays, 2003.
manuscript in preparation.
[54] W. Liu, R. Mei, D. M. Bartell, X. Di, T. A. Webster,
and T. Ryder. Rank-based algorithms for analysis of
microarrays. Proceedings of SPIE, Microarrays: Optical
Technologies and Informatics, 4266, 2001.
[55] W. M. Liu, R. Mei, X. Di, T. B. Ryder, E. Hubbell,
S. Dee, T. A. Webster, C. A. Harrington, M. H. Ho,
J. Baid, and S. P. Smeekens. Analysis of high density
expression microarrays with signed-rank call algorithms
. Bioinformatics, 18:15931599, 2002.
[56] F. Provost and T. Fawcett. Analysis and visualization
of classifier performance:
Comparison under imprecise
class and cost distributions. In Third International
Conference on Knowledge Discovery and Data Mining,
Menlo Park, CA, 1997.
[57] T. Fawcett, F. Provost, and R. Kohavi. The case
against accuracy estimation for comparing induction algorithms
. In Fifteenth International Conference on Machine
Learning, 1998.
[58] D. Kampa et al. Novel RNAs identified from a comprehensive
analysis of the transcriptome of human chromosomes
21 and 22. Manuscript in preparation.
[59] W.-M. Liu, X. Di, G. Yang, H. Matsuzaki, J. Huang,
R. Mei, T. B. Ryder, T. A. Webster, S. Dong, G. Liu,
K. W. Jones, G. C. Kennedy, and D. Kulp. Algorithms
for large scale genotyping microarrays. Bioinformatics,
2003. In press.
[60] Affymetrix. GeneChip CustomSeq resequencing array: Performance data for base calling algorithm in GeneChip DNA analysis software. Technical note, Affymetrix Inc., Santa Clara, CA, 2003. http://www.affymetrix.com/support/technical/technotes/customseq technote.pdf.
| low-level analysis;data mining;transductive learning;learning from heterogeneous data;heterogeneous data;semi-supervised learning;incremental learning;transcript discovery;microarray;gene expression estimation;statistics;genotyping;Low-level microarray analysis;re-sequencing |
135 | Measuring Cohesion of Packages in Ada95 | Ada95 is an object-oriented programming language. Packages are basic program units in Ada 95 to support OO programming, which allow the specification of groups of logically related entities. Thus, the cohesion of a package is mainly about how tightly the entities are encapsulated in the package. This paper discusses the relationships among these entities based on dependence analysis and presents the properties to obtain these dependencies. Based on these, the paper proposes an approach to measure the package cohesion, which satisfies the properties that a good measure should have. | INTRODUCTION
Cohesion is one of the most important software features during its
development. It tells us the tightness among the components of a
software module. The higher the cohesion of a module, the more
understandable, modifiable and maintainable the module is. A
software system should have high cohesion and low coupling.
Researchers have developed several guidelines to measure
cohesion of a module [1, 3, 4]. Since more and more applications
are object-oriented, the approaches to measure cohesion of object-oriented
(OO) programs have become an important research field.
Generally, each object-oriented programming language provides
facilities to support OO features, such as data abstraction,
encapsulation and inheritance. Each object consists of a set of
attributes to represent the states of objects and a set of operations
on attributes. Thus, in OO environment, the cohesion is mainly
about how tightly the attributes and operations are encapsulated.
There are several approaches proposed in literature to measure
OO program cohesion [2, 5, 6, 7, 11, 12]. Most approaches are
based on the interaction between operations and attributes. The
cohesion is measured as the number of the interactions. Generally
only the references from operations to attributes are considered.
And few care about the interactions of attributes to attributes and
operations to operations at the same time. This might lead to bias
when measuring the cohesion of a class. For example, when
designing the trigonometric function lib class, we might set a
global variable to record the temporal result. The variable is
referred in all the operations of the class. According to methods
based on the interaction between operations and attributes [6, 7],
the cohesion is the maximum 1. In fact, there are no relations
among the operations if the calls are not taken into account. In
this view, its cohesion is 0. The difference is caused by
considering only the references from operations to attributes,
while not considering the inter-operation relations.
In our previous work, we have done some research in measuring
OO program cohesion [10, 13, 14]. Our approach overcomes the
limitations of previous class cohesion measures, which consider
only one or two of the three facets. Since the OO mechanisms in
different programming languages are different from each other,
this paper applies our measure to Ada packages.
The remaining sections are organized as follows. Section 2
introduces the package in Ada 95. Section 3 discusses the basic
definitions and properties for our measure. Based on the
definitions and properties, Section 4 proposes approaches to
measure package cohesion. Conclusion remarks are given in the
last section.
PACKAGES IN ADA 95
In Ada 95[ISO95], packages and tagged types are basic program
units to support OO programming. A package allows the
specification of groups of logically related entities. Typically, a
package contains the declaration of a type along with the
declarations of primitive subprograms of the type, which can be
called from outside the package, while its inner workings remain
hidden from outside users. In this paper, we distinguish packages
into four groups.
PG1: Packages that contain any kind of entities except
tagged types.
PG2: Packages that only contain the declaration of one
tagged type along with those primitive subprograms
of the type. There are two subgroups in PG2:
- PG2-1: The type is an original tagged type.
- PG2-2: The type is a derived type.
PG3: Combination of PG1 and PG2.
PG4: Generic packages.
After a generic package is instantiated, it belongs to one of the
former three groups. Thus, only cohesion measure of PG1, PG2
and PG3 is discussed in the paper.
DEFINITIONS
In this section, we will present our definitions in the form of PG1.
The cohesion of a package from PG1 is mainly about how tightly
the objects and subprograms are encapsulated in the package. In
this paper, the relationships among objects and subprograms are
defined as three dependencies: inter-object, inter-subprogram and
subprogram-object dependence.
Definition 1 In the package body or a subprogram of the package,
if the definition (modification) of object A uses (refer, but not
modify) object B directly or indirectly, or whether A can been
defined is determined by the state of B, then A depends on B,
denoted by A → B.
Generally, if B is used in the condition part of a control statement
(such as if and while), and the definition of A is in the inner
statement of the control statement, the definition of A depends on
B's state.
Definition 2 If object A is referred in subprogram P, P depends
on A, denoted by P → A.
Definition 3 There are two types of dependencies between
subprograms: call dependence and potential dependence. If P is
called in M, then M call depends on P, denoted by M → P. If the
object A used in M is defined in P, the A used in M depends on
the A defined in P, denoted by M →(A,A) P, where (A, A) is
named as a tag. For each call edge, add a tag (*, *) for unification,
i.e. if P → Q, then P →(*,*) Q.
To obtain these dependencies, we introduce four sets for each
subprogram M:
IN(M) is an object set, each element of which is an object
referred before modifying its value in M;
OUT(M) is an object set, each element of which is an
object modified in M.
DEP_A (M) is a dependence set which represents the
dependencies from the objects referred in M to the objects
defined outside M. Each element has the form <A, B>,
where A and B are objects of the package.
DEP_A_OUT(M) is a dependence set which records the
dependencies from the objects referred in M to the objects
defined outside M when exiting M.
In general, the intermediate results are invisible outside, and an
object might be modified many times in a subprogram. We
introduce DEP_A_OUT to improve the precision. Obviously,
DEP_A_OUT(M) ⊆ DEP_A(M).
Property 1 A ∈ IN(M), A ∈ OUT(P) ⟹ M →(A,A) P.
Property 2 <A, B> ∈ DEP_A(M), B ∈ OUT(P) ⟹ M →(A,B) P.
Property 3 M →(A,B) P, ∃<B, C> (<B, C> ∈ DEP_A_OUT(P), C ∈ OUT(Q)) ⟹ M →(A,C) Q.
In our previous work [8, 9], we have proposed methods to analyze
dependencies among statements for Ada programs. And these
dependencies can be easily transformed to the dependencies
proposed in this paper. Due to the space limitation, we do not
discuss them in detail here.
To present our cohesion measure in a unified model, we introduce
package dependence graph to describe all types of dependencies.
Definition 4 The package dependence graph (PGDG) of a
package PG is a directed graph, PGDG = <N, E, T>, where N is
the node set, E is the edge set and T is the tag set. N = N_O ∪ N_P,
where N_O is the object node set, each of which represents a unique
object, and N_P is the subprogram node set, each of which represents
a unique subprogram. PGDG consists of three sub-graphs:
Inter-Object Dependence Graph (OOG), OOG = <N_O, E_O>,
where N_O is the object node set (the name of a node is the
name of the object it represents) and E_O is the edge set: if
A → B, then edge <A, B> ∈ E_O.
Inter-Subprogram Dependence Graph (PPG), PPG = <N_P,
E_P, T>, where N_P is the subprogram node set; E_P is the
edge set which represents the dependencies between
subprograms; T ⊆ (V × V) is the tag set, where V is the
union of the object set and {*}.
Subprogram-Object Dependence Graph (POG), POG = <N,
E_PO>, where N is the node set which represents objects
and subprograms, and E_PO is the edge set representing
dependencies between subprograms and objects. If P → A,
then <P, A> ∈ E_PO.
Example1 shows the package Tri, which contains three objects:
temp, temp1 and temp2, and four subprograms: sin, cos, tg and
ctg. Figure 1 shows the PGDG of the package Tri in Example1
(all the Tags on PPG are (*, *), because there are only call
dependencies in this example. We omit the Tags for
convenience).
Example1: package Tri.
package Tri is
temp, temp1, temp2: real;
function sin (x: real) return real;
function cos (x: real) return real;
function tg (x: real) return real;
function ctg (x: real) return real;
end Tri;
package body Tri is
function sin (x: real) return real is
begin temp:=...; return temp; end sin;
...
function tg (x: real) return real is
begin
temp1:=sin(x);temp2:=cos(x);
temp:=temp1/temp2; return temp;
end tg;
...
end Tri;
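As an illustration (not part of the paper), the PGDG of Tri from Figure 1 can be written down as three edge sets in Python; since the bodies of cos and ctg are elided in Example1, their dependencies below are assumptions made only to complete the picture.

# Sketch of the PGDG of package Tri as three edge sets (cf. Figure 1).
# The bodies of cos and ctg are elided above, so their entries are assumed.
objects     = {"temp", "temp1", "temp2"}
subprograms = {"sin", "cos", "tg", "ctg"}

# OOG: inter-object edges <A, B>, i.e. A depends on B
OOG = {("temp", "temp1"), ("temp", "temp2")}        # temp := temp1 / temp2 in tg

# POG: subprogram-object edges <P, A>, i.e. P refers to A
POG = {("sin", "temp"), ("cos", "temp"),            # cos assumed to mirror sin
       ("tg", "temp"), ("tg", "temp1"), ("tg", "temp2"),
       ("ctg", "temp"), ("ctg", "temp1"), ("ctg", "temp2")}   # ctg assumed

# PPG: tagged inter-subprogram edges; only call edges here, so tag (*, *)
PPG = {("tg", ("*", "*"), "sin"), ("tg", ("*", "*"), "cos"),
       ("ctg", ("*", "*"), "sin"), ("ctg", ("*", "*"), "cos")}  # ctg assumed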
3.2 Extended Definitions
Since there is no object in a package of PG2, the definitions of Section 3.1 cannot be applied to these packages directly. Therefore, this section extends the definitions of Section 3.1 to a more general model in the following steps:
For PG1, if there is an embedded package, the package is
taken as an object.
For PG2, take the components of the type as objects of the
package.
Let A, B be objects of a type T, let M, P be primitive subprograms, and let Com1 and Com2 be components of T. Then
∀A, B: (A.Com1 → B.Com2) ⟹ Com1 → Com2.
∀A, P: (P → A.Com) ⟹ P → Com.
∀A, B, M, P: (M →_{(A.Com1, B.Com2)} P) ⟹ M →_{(Com1, Com2)} P.
For PG3, take the types as objects of the package.
To present our measure in a unified model, we assign a power (weight) PW to each object.
PW(O) =
    Cohesion(O),        if O is a type object
    Cohesion(PG(O)),    if O is a package object
    1,                  otherwise
where Cohesion(O) is the cohesion of O and PG(O) returns the package containing O.
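A small Python sketch of the power function (an illustration, not part of the original definition); the cohesion tables passed in are placeholders supplied by a caller.

# Sketch of PW(O) following the piecewise definition above. The two lookup
# tables are placeholders: they map type objects and package objects to the
# cohesion of the corresponding type or embedded package.
def PW(obj, type_cohesion=None, package_cohesion=None):
    if type_cohesion and obj in type_cohesion:        # O is a type object
        return type_cohesion[obj]
    if package_cohesion and obj in package_cohesion:  # O is a package object
        return package_cohesion[obj]
    return 1.0                                        # ordinary object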
MEASURING PACKAGE COHESION
According to the PGDG, this section will propose our method to
measure the package cohesion. In the following discussions, we
assume package PG contains n objects and m subprograms, where
m, n ≥ 0.
4.1 Measuring Inter-Object Cohesion
Inter-object cohesion is about the tightness among objects in a
package. To measure this cohesion, for each object A we introduce a set O_DEP(A) to record the objects on which A depends, i.e. O_DEP(A) = {B | A → B, A ≠ B}.
Let PW_DEP(A) = Σ_{B ∈ O_DEP(A)} PW(B).
Then, we define the inter-object cohesion as:

Cohesion(O_O, PG) =
    0,                                            if n = 0
    PW(A),                                        if n = 1
    (1/n) · Σ_{i=1..n} PW_DEP(A_i) / (n − 1),     if n > 1

where PW_DEP(A_i) / (n − 1) represents the degree to which A_i depends on the other objects. If n = 0, there is no object in the package, so we set the cohesion to 0. If n = 1, there is one and only one object in the package, and the cohesion is its power.
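As an illustration (not from the paper), the following Python sketch applies this definition to the OOG of package Tri, where every object has power 1; it yields the value 1/3 given for Tri in Section 4.4.

# Sketch of Cohesion(O_O, PG), applied to the OOG of package Tri
# (objects temp, temp1, temp2, each with power 1).
def cohesion_O_O(objects, O_DEP, PW=lambda o: 1.0):
    n = len(objects)
    if n == 0:
        return 0.0
    if n == 1:
        return PW(next(iter(objects)))
    total = 0.0
    for A in objects:
        pw_dep = sum(PW(B) for B in O_DEP.get(A, set()))   # PW_DEP(A)
        total += pw_dep / (n - 1)
    return total / n

tri_objects = {"temp", "temp1", "temp2"}
tri_O_DEP   = {"temp": {"temp1", "temp2"}}   # temp := temp1 / temp2 in tg
print(cohesion_O_O(tri_objects, tri_O_DEP))  # 0.333..., i.e. 1/3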
4.2 Measuring Subprogram-Object Cohesion
Subprogram-object cohesion is the most important facet in measuring cohesion. Several approaches have been proposed in the literature, such as Chae's methods [6, 7], and most of them are based on the POG. As we have mentioned above, all these methods describe object references in a simple way: subprograms are connected by the objects they refer to, but whether these subprograms are actually related to each other is not described exactly. Thus, these approaches should be improved to describe such relations. For completeness, we use Co(Prev) to represent a previous cohesion measure that satisfies Briand's four properties.
For each subprogram P, we introduce another two sets, P_O and P_O_OUT, where:
P_O(P) records all the objects referred in P.
[Figure 1. PGDG of the package Tri: (a) the OOG over the objects temp, temp1 and temp2; (b) the POG connecting the subprograms sin, cos, tg and ctg to these objects; (c) the PPG among the subprograms.]
P_O_OUT(P) records the objects referred in P that are related to objects referred in other subprograms, i.e.
P_O_OUT(P) = {A | ∃B, M: (P →_{(A,B)} M ∨ M →_{(B,A)} P), A ≠ '*', B ≠ '*'}.
Let

λ(P) = ( Σ_{A ∈ P_O_OUT(P)} PW(A) ) / ( Σ_{A ∈ P_O(P)} PW(A) ).
Then, we define the subprogram-object cohesion as:

Cohesion(P_O, PG) =
    0,                                       if n = 0 or m = 0
    Co(Prev) · (1/m) · Σ_{i=1..m} λ(P_i),    otherwise
If P_O(P) = ∅, i.e. no objects are referred in P, we set λ(P) = 0.
If the objects referred in P are not related to any other subprogram, these objects could just as well be local variables; treating what is in effect a local variable of one subprogram as an object shared by all subprograms decreases the cohesion. If there is no object or no subprogram in the package, no subprogram will depend on others; thus Cohesion(P_O, PG) = 0.
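As an illustration (one possible realization, not a verbatim transcription of the definition above), the following Python sketch computes λ(P) and the subprogram-object cohesion; in Tri only call dependencies exist, so every λ(P) is 0 and the result is 0, in line with the example in Section 4.4.

# Sketch of lambda(P) and Cohesion(P_O, PG) as used above: the average
# lambda over all subprograms scales a previous POG-based measure Co(Prev).
def lam(P, P_O, P_O_OUT, PW=lambda o: 1.0):
    referred = P_O.get(P, set())
    if not referred:                       # no objects referred in P
        return 0.0
    shared = P_O_OUT.get(P, set())
    return sum(PW(A) for A in shared) / sum(PW(A) for A in referred)

def cohesion_P_O(subprograms, objects, P_O, P_O_OUT, co_prev):
    n, m = len(objects), len(subprograms)
    if n == 0 or m == 0:
        return 0.0
    return co_prev * sum(lam(P, P_O, P_O_OUT) for P in subprograms) / m

# Package Tri: only call dependencies, hence P_O_OUT(P) is empty for every P.
tri_P_O = {"sin": {"temp"}, "cos": {"temp"},
           "tg": {"temp", "temp1", "temp2"}, "ctg": {"temp", "temp1", "temp2"}}
print(cohesion_P_O({"sin", "cos", "tg", "ctg"}, {"temp", "temp1", "temp2"},
                   tri_P_O, {}, co_prev=1.0))   # 0.0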
4.3 Measuring Inter-Subprogram Cohesion
In the PGDG, although subprograms can be connected by objects, this does not necessarily mean that these subprograms are related. To measure the inter-subprogram cohesion, we introduce another set P_DEP(P) = {M | P → M} for each P. The inter-subprogram cohesion Cohesion(P_P, PG) is defined as follows:
Cohesion(P_P, PG) =
    0,                                              if m = 0
    1,                                              if m = 1
    (1/m) · Σ_{i=1..m} |P_DEP(P_i)| / (m − 1),      if m > 1
where |P_DEP(P)| / (m − 1) represents the tightness between P and the other subprograms in the package.
If each subprogram depends on all other subprograms,
Cohesion(P_P, PG) = 1.
If all subprograms have no relations with any other subprogram,
Cohesion(P_P, PG) = 0.
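As an illustrative check (not from the paper), the Python sketch below evaluates this definition on package Tri; the body of ctg is elided in Example1, so it is assumed here to call sin and cos like tg does, which reproduces the value 1/3 given in Section 4.4.

# Sketch of Cohesion(P_P, PG), checked on package Tri.
def cohesion_P_P(subprograms, P_DEP):
    m = len(subprograms)
    if m == 0:
        return 0.0
    if m == 1:
        return 1.0
    return sum(len(P_DEP.get(P, set())) / (m - 1) for P in subprograms) / m

tri_P_DEP = {"tg": {"sin", "cos"}, "ctg": {"sin", "cos"}}    # ctg assumed
print(cohesion_P_P({"sin", "cos", "tg", "ctg"}, tri_P_DEP))  # 0.333..., i.e. 1/3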
4.4 Measuring Package Cohesion
After measuring the three facets independently, we have a
discrete view of the cohesion of a package. We have two ways to
measure the package cohesion:
1) Each measurement works as a separate field and the package cohesion is a 3-tuple:
Cohesion(PG) = < Cohesion(O_O, PG),
Cohesion(P_O, PG),
Cohesion(P_P, PG)>.
2) Integrate the three facets as a whole
Cohesion(PG) =
    0,                                  if n = 0 and m = 0
    k · Cohesion(P_P, PG),              if n = 0 and m > 0
    Σ_{i=1..3} k_i · Cohesion_i(PG),    otherwise
where k ∈ (0, 1]; k_1, k_2, k_3 > 0, and k_1 + k_2 + k_3 = 1.
Cohesion_1(PG) = Cohesion(O_O, PG)
Cohesion_2(PG) = Cohesion(P_P, PG)
Cohesion_3(PG) = Cohesion(P_O, PG)
If n = 0 and m ≠ 0, the package cohesion describes only the tightness of
the call relations, thus we introduce a parameter k to constrain it.
For the example shown in Figure 1, the cohesion of Tri is as follows:
Cohesion(O_O, Tri)= 1/3
Cohesion(P_O, Tri)=0
Cohesion(P_P, Tri)=1/3
Let k_1 = k_2 = k_3 = 1/3 and Co(Prev) = 1; then
Cohesion(Tri)= 2/9.
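The integrated value can be checked with a small Python sketch (illustrative only; the parameter k is only needed when n = 0 and is given a placeholder value here):

# Sketch of the integrated package cohesion, reproducing the example:
# with k1 = k2 = k3 = 1/3 and the facet values of Tri, the result is 2/9.
def cohesion_package(c_oo, c_pp, c_po, n, m, k=0.5, weights=(1/3, 1/3, 1/3)):
    if n == 0 and m == 0:
        return 0.0
    if n == 0:                       # only call relations can be measured
        return k * c_pp              # k is a placeholder value in (0, 1]
    k1, k2, k3 = weights
    return k1 * c_oo + k2 * c_pp + k3 * c_po

print(cohesion_package(c_oo=1/3, c_pp=1/3, c_po=0.0, n=3, m=4))   # 0.222..., i.e. 2/9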
Briand et al. [3, 4] have stated that a good cohesion measure should satisfy:
(1) Non-negativity and normalization.
(2) Minimum and maximum.
(3) Monotonicity.
(4) Cohesion does not increase when combining two modules.
These properties give a guideline for developing a good cohesion measure. According to the definitions, it is easy to prove that our measure satisfies these properties.
4.5 Cohesion for PG2-2
In a hierarchy of types, a derived type inherits the components and primitive subprograms of its super types. Generally, inheritance will increase the coupling and decrease the cohesion. For a package from PG2-2, we will discuss its cohesion in four cases:
Case 1: Take the package independently.
Case 2: Take all the primitive subprograms and components
(contains those from super type) into consideration.
Case 3: If the primitive subprograms of the derived type might
access the components (or subprogram) of the super type, take
these components (or subprogram) as those of the derived type.
Case 4: Take the super type as an object of the derived type.
The shortcoming of Case 1 is that it only measures the cohesion of the additional components and primitive subprograms of the derived type, not of the complete type. The primitive subprograms of the super type cannot access the components of the derived type, except for dispatching subprograms. Consequently, in Case 2 or 3, the deeper the type hierarchy is, the smaller the cohesion becomes, and it is hard to design a package whose cohesion is high enough.
Although we present four cases in this section, none is good enough to describe the cohesion of a package from PG2-2. To measure the cohesion of a derived type, many more aspects should be considered.
RELATED WORKS
Several methods have been proposed in the literature to measure class cohesion. This section gives a brief review of these methods.
(1) Chidamber's LCOM1 ∈ [0, m(m − 1)/2] measures the cohesion in terms of similar and non-similar method pairs. It is an inverse cohesion measure: the bigger the measure, the lower the cohesion.
(2) In Hitz's LCOM2 the PPG is represented by an undirected graph, and LCOM2 is the number of connected sub-graphs. When there is one and only one connected sub-graph, connectivity is introduced to distinguish such classes.
(3) Briand's RCI is the ratio of the number of edges in the POG to the maximum possible number of interactions between subprograms and objects.
(4) Henderson's LCOM3 can be described as follows (a small sketch is given after this list):

LCOM3(C) = ( (1/n) · Σ_{j=1..n} |μ(A_j)| − m ) / (1 − m)

where μ(A) = {M | A ∈ P_O(M)}, A is an attribute and M is a method.
(5) Chae's CO [6] introduces glue methods, and Xu-Zhou's CO [13] introduces a cut set (glue edges), to analyze the interaction patterns. These two measures are more rational than the other measures.
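As announced above, a small Python sketch of Henderson's LCOM3 on a hypothetical class with two attributes and three methods (the class is invented for illustration):

# Sketch of Henderson's LCOM3: ((1/n) * sum_j |mu(A_j)| - m) / (1 - m),
# where mu(A) is the set of methods referring to attribute A.
def lcom3(attributes, methods, P_O):
    n, m = len(attributes), len(methods)
    mu = {A: {M for M in methods if A in P_O.get(M, set())} for A in attributes}
    return (sum(len(mu[A]) for A in attributes) / n - m) / (1 - m)

# Hypothetical class: methods m1, m2, m3 and attributes a1, a2.
toy_P_O = {"m1": {"a1"}, "m2": {"a1", "a2"}, "m3": set()}
print(lcom3({"a1", "a2"}, {"m1", "m2", "m3"}, toy_P_O))   # ((2+1)/2 - 3) / (1-3) = 0.75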
From the introductions above, we can see that:
All these methods consider attribute references in a simple way; whether the methods are related to each other or not is not described exactly.
LCOM1, LCOM2 and LCOM3 are not normalized, because their upper bounds depend on the number of methods in the class. LCOM1 is non-monotonic, and its results might be inconsistent with intuition in some cases.
RCI has the four basic properties proposed by Briand, but it does not consider the patterns of the interactions among the class members; neither do LCOM1, LCOM2 or LCOM3.
Chae's CO overcomes most limitations of the previous measures, but it is non-monotonic [13]. Xu-Zhou's CO improves Chae's cohesion measure and makes its results more consistent with intuition. The chief disadvantage of both measures is that they can only be applied to a connected POG; otherwise the result will always be 0.
LCOM1 and LCOM2 measure the cohesion among the methods in a class; we can improve their similarity function using the dependencies among methods proposed in this paper.
LCOM3 and Chae's and Xu-Zhou's CO measure the cohesion among the methods and attributes in a class; in this paper we improve them by introducing λ(M) for each method M.
CONCLUSION
This paper proposes an approach to measure the cohesion of a
package based on dependence analysis. In this method, we
discussed the tightness of a package from the three facets: inter-object
, subprogram-object and inter-subprogram. These three
facets can be used to measure the package cohesion independently
and can also be integrated as a whole. Our approach overcomes
the limitations of previous class cohesion measures, which
consider only one or two of the three facets. Thus, our measure is
more consistent with intuition. In future work, we will verify and improve our measure through experimental analysis.
When measuring package cohesion, attention should be paid to the following issues.
(1) In a hierarchy of types, the primitive subprograms of the super type might access the objects of the derived type through dispatching. Therefore, when measuring the cohesion of PG2, it is hard to determine whether such accesses to the derived type's objects should be considered or not.
(2) Polymorphic calls can be resolved in a complete application system; however, this is impossible for a package on its own, since it can be reused in many systems.
(3) How to deal with special subprograms, such as access subprograms, since such subprograms can access special objects in the package.
(4) How to apply domain knowledge to cohesion measurement.
In all, if a package can be used in many applications, its cohesion mainly concerns the package itself, without considering the application environments; otherwise, it is the cohesion within the specific environment.
ACKNOWLEDGMENTS
This work was supported in part by the National Natural Science
Foundation of China (NSFC) (60073012), National Grand
Fundamental Research 973 Program of China (G1999032701),
and National Research Foundation for the Doctoral Program of
Higher Education of China (20020286004).
REFERENCES
[1] Allen, E.B., Khoshgoftaar, T.M. Measuring Coupling and Cohesion: An Information-Theory Approach. In Proceedings of the Sixth International Software Metrics Symposium, Florida, USA, IEEE CS Press, 1999, 119-127.
[2] Bansiya, J.L., et al. A Class Cohesion Metric for Object-Oriented Designs. Journal of Object-Oriented Programming, 1999, 11(8): 47-52.
[3] Briand, L.C., Morasca, S., Basili, V.R. Property-Based Software Engineering Measurement. IEEE Trans. Software Engineering, Jan. 1996, 22(1): 68-85.
[4] Briand, L.C., Daly, J., Wuest, J. A Unified Framework for Cohesion Measurement in Object-Oriented Systems. Empirical Software Engineering, 1998, 3(1): 65-117.
[5] Briand, L.C., Morasca, S., Basili, V.R. Defining and Validating Measures for Object-Based High-Level Design. IEEE Trans. Software Engineering, 1999, 25(5): 722-743.
[6] Chae, H.S., Kwon, Y.R., Bae, D.H. A Cohesion Measure for Object-Oriented Classes. Software Practice & Experience, 2000, 30(12): 1405-1431.
[7] Chae, H.S., Kwon, Y.R. A Cohesion Measure for Classes in Object-Oriented Systems. In Proceedings of the Fifth International Software Metrics Symposium, Bethesda, MD, USA, IEEE CS Press, 1998, 158-166.
[8] Chen, Z., Xu, B., Yang, H. Slicing Tagged Objects in Ada 95. In Proceedings of AdaEurope 2001, LNCS 2043: 100-112.
[9] Chen, Z., Xu, B., Yang, H., Zhao, J. Static Dependency Analysis for Concurrent Ada 95 Programs. In Proceedings of AdaEurope 2002, LNCS 2361: 219-230.
[10] Chen, Z., Xu, B., Zhou, Y., Zhao, J., Yang, H. A Novel Approach to Measuring Class Cohesion Based on Dependence Analysis. In Proceedings of ICSM 2002, IEEE CS Press, 377-383.
[11] Chidamber, S.R., Kemerer, C.F. A Metrics Suite for Object-Oriented Design. IEEE Trans. Software Engineering, 1994, 20(6): 476-493.
[12] Hitz, M., Montazeri, B. Measuring Coupling and Cohesion in Object-Oriented Systems. In Proceedings of the International Symposium on Applied Corporate Computing, Monterrey, Mexico, October 1995: 25-27.
[13] Xu, B., Zhou, Y. Comments on "A cohesion measure for object-oriented classes". Software Practice & Experience, 2001, 31(14): 1381-1388.
[14] Zhou, Y., Guan, Y., Xu, B. On Revising Chae's Cohesion Measure for Classes. J. Software, 2001, 12(Suppl.): 295-300 (in Chinese).
| Object-Oriented;Ada95;cohesion;dependence;Measurement;OO programming;measure;Cohesion
136 | Measurement of e-Government Impact: Existing practices and shortcomings | Public administrations of all over the world invest an enormous amount of resources in e-government. How the success of e-government can be measured is often not clear. E-government involves many aspects of public administration ranging from introducing new technology to business process (re-)engineering. The measurement of the effectiveness of e-government is a complicated endeavor. In this paper current practices of e-government measurement are evaluated. A number of limitations of current measurement instruments are identified. Measurement focuses predominantly on the front (primarily counting the number of services offered) and not on the back-office processes. Interpretation of measures is difficult as all existing measurement instruments lack a framework depicting the relationships between the indicators and the use of resources. The different measures may fit the aim of the owners of the e-governmental services, however, due to conflicting aims and priorities little agreement exists on a uniform set of measures, needed for comparison of e-government development. Traditional methods of measuring e-government impact and resource usage fall short of the richness of data required for the effective evaluation of e-government strategies. | INTRODUCTION
Public institutions as well as business organizations use the
Internet to deliver a wide range of information and services at an
increasing level of sophistication [24]. However, Web sites and
related business processes and information systems are so
complex that it is difficult for governments to determine adequate
measures for evaluating the efficiency and effectiveness of the
spending of their public money. Moreover only measuring the
front of public websites is a too narrow view on e-government. E-government
involves the collaboration and communication
between stakeholders and integration of cross-agency business
processes.
An important aim of having a well-funded theory on measuring e-government
is to allow comparison or benchmarking. By
examining the results of such benchmarks we might be able to
distinct good from bad practices and to give directives to
designers of e-governmental services. Moreover is should help to
identify how effective public money is spend. Thus evaluating the
relationship between results and resources used.
Comparison can have different goals. Selecting between options
is just one of them. Principally we can distinct three types of
comparison: 1) comparison of alternatives (e.g. to make a choice
between alternative solutions), 2) vertical comparison over time
(e.g. in order to measure improvement of versions) and 3)
horizontal comparison (e.g. benchmarking different solutions).
Whatever type of comparison we choose, we can only compare if
we have a set of preferred outcomes. Many measurement
instruments currently applied are not described in such way that
the preferences underlying the instrument have been made
explicitly. In this research we describe different measurement
instruments used and we will discuss their strengths and
weaknesses.
The goal of this paper is to evaluate how current measurement instruments assess the impact of e-government. A combined research methodology of literature research and case study was chosen to meet this goal. Case study research can be characterized as qualitative and observatory, using predefined research questions [36]. Case studies were used to examine existing measurement instruments in the Netherlands. The e-government monitor of Accenture and a European Union measurement instrument were also evaluated, as these instruments are used by Dutch agencies to benchmark their situation.
THE NATURE OF E-GOVERNMENT
Before analyzing instruments developed to measure e-government
development, it is necessary to understand the nature of e-government
. The organization of public administration involves
many, heterogeneous types of business processes. When law or
regulations are changed these processes and their supportive
systems have to be adapted. Within one policy layer the process
of adaptation starts with legislation drafting (adapting the law or
regulations) followed by a chain of processes varying from
translating these law texts into specifications, design of processes
and supporting systems, development of these processes and
systems and finally implementation and use (for a recent `legal
engineering' approach see [34]). A complicating factor is that
more than one governmental layer exists and often interaction
between these layers occurs. Especially the need to adapt legislation from the European Union is a more dominant factor than ever, which complicates this process even further.
In Figure 1 the fragmented nature of the government is shown.
Legislation and service provisioning efforts are distributed over
the European, State, Region and local level. The situation is even
more complicated as within each level many agencies of various types exist. At the local level, municipalities, water boards, chambers of commerce, local tax offices and many other public and public-private agencies exist. As such, many agencies influence
the impact of e-government and measurement should include
them all.
[Figure 1: Fragmented nature of public administration. Legislation and service provisioning are distributed over the European, state, regional and local levels, which must interoperate; concerns range from policy and standardization at the European level to navigation, service provisioning, helpdesk and appeal at the local level, while businesses direct their demand at all levels.]
Two main types of interaction influence this fragmented landscape: the policy-making, implementation, execution and enforcement of new legislation, and businesses and citizens searching for information and services.
We will denote the creation, implementation, execution,
enforcement and maintenance of laws as production cycle in this
paper. Governments are looking for ways to increase their
efficiency, effectiveness, decrease the administrative burden and
reduce the lead times for adopting new legislations. The
consequences of new laws at production phase (drafting) are only
roughly known. Only after implementing the new regulations in
the processes and supporting systems the full meaning of applying
these regulations becomes clear. Certainly when the
interpretations and translation into practical applications take place at the local government level, it often will not be possible to inform the businesses or citizens affected by the new law pro-actively and in time, as no information about the concrete effects on their cases is available. A complicating factor is that an administrative process is often supported by heterogeneous information
systems and many different parties can be involved in adapting
those systems.
Most of the companies' ERP software components need to be updated quite frequently to come into, and remain in, accordance with small changes in administrative legislation. The same holds
for Human Resources software, Time reporting software, Tax
reporting software and, more indirectly, Logistics software. All
have to be updated due to changes in legislation. It does not
require extensive explanation to stress the need for smart public-private
networks from a production chain perspective.
Nowadays businesses expect that the governments reduce the
administrative burden for businesses. Governments can achieve
this goal by creating a smart, service oriented, public
administration. To be able to provide these integrated services, access to the different legal sources, or better to the formal models that can be derived from those sources, is needed (see e.g. [7]).
Standardization initiatives like the Metalex standard for describing legal sources (see www.metalex.nl) and the Power method for modeling normative systems (see www.belastingdienst.nl/epower) are essential first steps towards this. They provide a basis for interoperable, contextually better understandable and accessible legal sources that can more easily be connected to the context of business activities.
From the demand perspective, citizens and businesses find it very
hard to access relevant legislation, local procedures and rules,
policy documents etc. Governmental bodies are engaged in a
flurry of policy and law making activities. Not only is this a
complex myriad of legal issues, but the information is produced at
different levels of public administration, including local, regional,
national and European Union levels. A commonly accepted requirement, however, is that online state legislative information should be equally accessible to all [15], and of course it should be accessible in the first place. Many governments currently are
searching for ways to make their information accessible and
retrievable. This involves issues regarding terminology,
explaining the type of legislative document, understandable and
easy-to-use search interfaces and accessing the official status of
online documents.
A central question for researchers working in the field of e-government
is how to measure the progress made in complying with the requirements mentioned before. In this paper we examine
some examples of measurement instruments that were developed
to measure progress in e-Government. But before describing these
instruments we will discuss some literature on measuring eGovernment
first.
LITERATURE REVIEW
There is a diversity of literature focusing on measurement. Stage models are often used to position and evaluate the current status of e-government development. Services literature focuses on the measurement of perceived quality and satisfaction in complex, multi-service organizations. Finally, there is research focusing on developing suitable `yardsticks': performance indicators acting as surrogates for measuring performance.
3.1 Stage models
Many researchers on e-business address the stages of Web site
development. Green [17] suggests three stages: attracting,
transforming, and utilization of media technology. Fink et al.
(2001) focuses on attracting, enhancing and retaining client
relationships using the Web site applications. Moon [27] proposes
a five stage model. Layne and Lee [23] propose four stages of a
growth model towards e-government. In stage one, governments
create a `state website' by establishing a separate Internet
department. There is no integration between the processes in the
front and back offices. The web sites are focused on creating a
web-presence and providing information. Transaction possibilities
for customers are restricted to printing online forms and sending
them by traditional mail. At stage two, transaction, there is two-way
communication. Customers transact with government on-line
by filling out forms and government responds by providing
confirmations, receipts, etc. The number of transactions is
relatively small and the organization becomes confronted with
translating information from and back to the front office and the
other way around. Often a working database is established in the
front office for supporting immediate transactions. The data in
this database is periodically derived from and exported to the
various databases in the back office. At stage three, vertical
integration, the focus is moving towards transformation of
government services, rather than automating and digitizing
existing processes. Most information systems are localized and
fragmented. Government organizations often maintain separate
databases that are not connected to other governmental agencies
at the same level or with similar agencies at the local or federal
level. A natural progression according to Layne and Lee is the
integration of scattered information systems at different levels
(vertical) of government services. Layne and Lee [23] expect that vertical integration within similar functional walls but
across different levels of government will happen first, because
the gap between levels of government is much less than the
difference between different functions. Information systems are
connected to each other or, at least, can communicate with each
other.
The problem with this description of governmental systems is that it does not make a distinction between the legal and administrative `knowledge' contained in those systems and the data (of citizens) to which that knowledge is applied (e.g. to
derive if someone is entitled to receive a subsidy). Especially the
sharing of data is limited and for very good reasons too! Both the
desire to guarantee a certain level of privacy and the vulnerability
for misuse of data have been reasons for the legislator to limit
storage, reusing and sharing data between different agencies (not
to speak of passing data of citizens from the government to
private institutions).
The challenge consequently is how to realize the full potential of
information technology, from the customer's perspective, since
this can only be achieved by horizontally integrating government
services across different functional walls (`silos') in Layne and
Lee's stage four, horizontal integration. The question is how to
achieve this without the need for having databases across different
functional areas to communicate with each other. The situation that
information obtained by one agency can be used for CRM
purposes by other agencies by sharing information is for reasons
mentioned earlier, undesirable. The knowledge contained in these
information systems however can be shared and reused thus
allowing better integrated government services.
3.2 Service literature
The concepts of perceived quality and satisfaction are two of the
fundamental pillars of recent evaluation studies. Bigné et al. [5]
conclude that in most cases the fundamental unit of analysis has
been an isolated service, and the fact that several services may be
offered by an individual organization has not been taken into
account. Indeed multi-service organizations, where the customer
has access to several services, have not been so intensively dealt
with. The problems facing these organizations in terms of
measurement and management of perceived quality and of
satisfaction are more complex than in those organizations where
only one service is offered, but less complex then situations where
a service has to be assembled from many other service suppliers.
When measuring the quality of such integrated service it is
necessary to take into consideration not only the perceived quality
of each of the elementary services, but also the perceived overall
quality of the constituting ones.
Bigné et al. [5] found that the scale used to determine the
perceived quality of the core services of hospitals, and
universities, is composed of five dimensions, as proposed by
Parasuraman et al. [28]: tangibility, reliability, responsiveness,
confidence and empathy.
3.3 Performance indicators
Performance indicators serve as surrogates to deduce the quality
of the e-government performance [20]. Lee [24] provides
measurements based on development states and a modification of
Simeon's [32] components.
(1) The following five items determine the affect (Attracting) of a
homepage on the Web site:
1. Design of logo and tagline (quick summary of what a
Web site is all about).
2. Graphics (e.g. layout, color and figures of a homepage).
3. Institution's
self-advertising (e.g. banner, button, and
interstitials).
4. Services for attracting (e.g. quiz, lottery, e-card, maps,
weather, channels, download service).
5. Contents for attracting (e.g. entertainments, culture,
tourism, game, kids, health, gallery).
(2) Informing consists of nine items developed by modifying
Simeon's (1999) components: local links, contents for publicity,
contents for learning, reports, descriptions on the institution,
descriptions on online administrative services, projects, contact
information and counseling.
(3) Community consists of ten items: online forum, events, partner
links (or ads), e-Magazine (or newsletter or Webcast), message
boards, users' participation (e.g. articles, photos, personal links),
focus of news, vision (or values), domain identity and community
services (or online support for community meeting or
networking). A good example of the latter is the ``Citizen
discussion room'' of Ulsan City (eulsan.go.kr).
(4) Delivering as a variable is determined by the presence or
absence of features like: search engine, mailing list, framework,
multimedia, password system, FAQ, chat, downloadable
publications and update indication.
(5) Innovation. Public institutions have to utilize the Internet for
actual service innovation. Hence, two variables indicating
innovation results are selected: transformation level of existing
services and frequency of new innovative services. These are each
rated on a five-point scale: ``(1) never; (2) only descriptions; (3)
online request; (4) partial; (5) full processing'' for the first item and ``(1) never to (5) many new systems'' for the second item.
Such quantification is possible because the introduction of new
innovative systems on public sector Web sites is growing, for
example, Docket Access, View Property Assessments, and
Request for Proposals of Philadelphia (phila.gov) and Citizen
Assessment Systems, Citizen Satisfaction Monitor, Online
Procedures Enhancement (OPEN) System of Seoul
(metro.seoul.kr).
These measures focus mainly on components visible to users and
do not take into account back-office components like integration.
Van der Merwe and Bekker [26] classify website evaluation
criteria in 5 groups, as shown in Table 1. Many of their criteria seem to be inspired by their e-Commerce orientation, but many of the criteria are applicable to e-Government as well.
Table 1: Web site evaluation criteria groups according to Van der Merwe and Bekker [26]

Interface
  Graphic design principles: the effective use of color, text, backgrounds, and other general graphic design principles
  Graphics and multimedia: the effectiveness of the graphics and multimedia used on the site
  Style and text: whether or not the text is concise and relevant, and the style good
  Flexibility and compatibility: the degree to which the interface is designed to handle exceptions, for example, text-only versions of pages

Navigation
  Logical structure: the organization and menu system of the site
  Ease of use: the ease of navigation to find the pages that the user is looking for
  Search engine: the search engine's ability to find the correct pages easily and provide clear descriptions of the search results
  Navigational necessities: other important aspects of navigation, like the absence of broken links and ``under-construction'' pages

Content
  Product/service-related information: whether or not the products/services are described precisely and thoroughly
  Agency and contact information: whether or not it is easy to find information on the company, its employees and its principals
  Information quality: the currency and relevance of the content on the site
  Interactivity: how much input the user has on the content displayed on the site

Reliability
  Stored customer profile: the registering process and how the company uses the stored customer profile
  Order process: the effectiveness and ease of use of the online order process
  After-order to order receipt: the company's actions from order placement until the order is delivered
  Customer service: how the company communicates with and helps its online customers

Technical
  Speed: different aspects of the loading speed of the site
  Security: security systems and the ways used by the company to protect customers' privacy on the site
  Software and database: flexibility in terms of the different software used; also looks at the database and data communication systems used on the site
  System design: the correct functioning of the site and how well it integrates with internal and external systems
MEASUREMENT INSTRUMENTS FOUND IN PRACTICE
Hazlett and Hill [19] discuss the current level of government measurement. Huang and Chao (2001) note that
while the development and management of web sites are
becoming essential elements of public sector management, little is
known about their effectiveness. Indeed, Kaylor et al. (2001) note
that research into the effectiveness of e-Government efforts tends
to concentrate on content analysis or measures of usage. These
may not be wholly appropriate metrics with which to gauge
success. Aspects of service relevant in this context may,
according to Voss (2000) include: consumer perceptions of
security and levels of trust; response times (bearing in mind that
Internet consumers may well be accustomed to quick responses);
navigability of the Web site; download time; fulfillment of service
promised; timely updating of information; site effectiveness and
functionality. Reinforcing a point made above, Voss (2000) takes
the view that e-service channels should be regarded as providing
support for front-line service delivery and not replacements for
front-line service. However, such channels do enable change in
the nature of front-line service delivery and of human
involvement in service.
4.1 OVERHEID.NL
The Dutch Government has recently published "Overheid.nl
Monitor 2003", its fifth annual e-government progress report.
While highlighting a number of encouraging developments, the
report concludes that much remains to be done in areas such as
user-friendliness, transactional services and e-democracy.
Overheid.nl focused on all government agencies, and mentions
the following agencies explicitly.
Municipalities
Ministries
Provinces
Water boards
A screenshot of the online monitor of the measurement of
municipalities is shown in the figure below.
Figure 2: Screenshot of Overheid.nl
"Overheid.nl Monitor 2003: developments in electronic
government" is based on a periodical large-scale survey of
government websites, which was carried out in October 2003 by
Advies Overheid.nl on behalf of the Dutch Ministry of the Interior
and Kingdom Relations. The survey assessed 1,124 government
websites according to five criteria: user-friendliness, general
information, government information, government services, and
scope for participation (interactive policy making). The measurement of website user-friendliness is very thorough in this survey. The e-service measurement is less well defined: the services investigated for the survey are listed clearly for several layers of government, but they seem to be limited to the so-called `Dutch service product catalogue', a set of typical municipal products and services.
Figure 3: Sample listing of services measured and ways
of accessing them investigated
Additionally, researchers measured the e-mail response time of
government websites and assessed user satisfaction via a survey
of 3,000 users. The report states that, although e-government
services are developing on schedule and are becoming more
sophisticated, there is still much room for improvement.
On the positive side, the report finds that:
E-government is developing on schedule. The 2003 target of
providing 35% of government services electronically was
widely achieved, with 39% of services for citizens and 44% of
services for businesses e-enabled by October 2003.
However, the report also identifies a number of shortcomings and
areas where improvement is needed:
Practically no full electronic transactions are available. In this
respect, the report considers that development of such services
will depend both on future solutions for user identification and
authentication and on back-office re-engineering.
Although the use of e-services is growing, the development of e-government
is still mainly supply-driven and the penetration of
government websites remains unknown. "Only if we assess the
penetration of government websites and the level of their use can
we take a truly demand-driven approach", the report says.
The items related to municipalities are connected to the functionalities
within an implemented product and services catalogue.
D1
Is a product catalogue or another form of systematically offered services being used?
D2
-if so, does it contain at least 150 products?
D3
-if so, does it contain at least 50 forms?
i.e.: can one request or comment on at least 50 products by using a form contained in the product catalogue which can be filled in, printed and sent in by the users ...?
D4
-if so, can these be accessed per life event?
D5
-if so, can these be accessed per theme?
D6
-if so, can these be accessed by using a lexicon (a-z)?
D7
-if so, does it contain a specific search function? (fill in the
search term)
Besides this, four common municipal products are mentioned that can be supplied in a more or less digital form.
(choices: no info; info; down loadable form; up loadable
form; transaction)
D8a
Request for building permission
D8b
Request for Cutting trees permission
D8c
Request for extract from GBA
D8d
Report of change of address / removal (no transaction
possible)
Users' satisfaction with e-government services is still
significantly lower than with services delivered through
traditional channels.
E-democracy tools and citizen engagement through electronic
means remain embryonic. According to the report, this is due not
only to a lack of demand but also to a poorly organized supply
side, with inadequate moderation, unappealing consultation
subjects and missing follow-up.
In addition to identifying progress accomplished and remaining
issues, the report makes a number of recommendations that
should help reach the objectives of the Andere Overheid ("Other
Government") action programme, presented in December 2003.
Such recommendations include the following:
E-government services must become more user-friendly and
easier to find. Metadata standards should be defined to make
them easier to find through search engines. FAQs and lists of
the most searched terms and documents should also be made
more widely available.
E-government services must be permanently improved: even once
all government websites are fully functional, government should
still constantly aim to improve e-government and consult target
groups about new services they might require, says the report.
E-government must be further developed through service
integration across government bodies, which is currently still in
its infancy. According to the report, the Dutch supply-driven
approach has so far sought solutions within the limits and
administrative competencies of single bodies.
Emphasis must be shifted from the breadth of services to their
depth. Rather than aiming to run every electronic product and
service conceivable, government bodies should aim to integrate
services as deeply as possible, especially those in frequent and
popular demand, the report says. This implies developing
seamless links from the front to the back office and fostering a
more customer-minded culture.
From the perspective of the policymakers we may conclude that
the benchmark takes into account individual agencies and their
websites, number of services and to a certain degree also service
levels, but the aim is to integrate horizontally, something which is
not measured by
www.overheid.nl
.
4.2 WEBDAM.NL
In the year 2000 the Ministry of the Interior decided that all municipalities should be online by 2002 and that 25% of service provisioning should be supported by websites. In order to help municipalities achieve this, the Webdam project was started in March 2000, aiming at stimulating municipalities to develop websites. These websites should deliver better and improved service provisioning over the Internet, as citizens expect.
One of the activities in the Webdam project has been the
development of a website that allowed municipalities to share
knowledge and make better use of the Internet. To further
stimulate municipalities Webdam started a Top 50 for
municipalities' websites, using criteria such as design, content,
service level and communication. Assessment and ranking of each
municipality is performed by representatives coming from three
groups; civil servants, citizens and experts.
Webdam uses a panel of experts to judge the public agencies' web
pages. The stakeholders include the following groups:
1. Webdam employees (experts)
2. Public servants of the municipality under study
3. Citizens
These stakeholders judge the web pages based on five main groups:
1. Layout
2. Content
3. Communication
4. Services
5. Plus/minus remarks
Each group has a minimum and maximum score, and the totals are aggregated to determine a ranking; a sketch of this kind of aggregation is given below.
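A hypothetical Python sketch of the kind of aggregation such a panel ranking implies; the score ranges and values below are placeholders, since the report does not publish Webdam's actual scales:

# Hypothetical sketch of aggregating per-group panel scores into a total
# used for ranking. Ranges and scores are placeholders, not Webdam data.
score_range = {"layout": (0, 20), "content": (0, 30), "communication": (0, 20),
               "services": (0, 25), "plus_minus": (-5, 5)}
scores = {"layout": 14, "content": 22, "communication": 15,
          "services": 18, "plus_minus": 2}

total   = sum(scores[g] for g in score_range)
maximum = sum(hi for (_lo, hi) in score_range.values())
print(f"total = {total} out of {maximum}")    # municipalities are ranked by total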
Figure 4: Screenshot of webdam
Webdam focuses exclusively on the front office, i.e. the aspects directly visible to the citizens using the web pages. No connection is made to the size of the municipality, the number of citizens or other aspects influencing the available resources of the municipality.
4.3 Accenture e-gov monitor
The yearly research conducted by Accenture [1][2][3] has a profound influence on governments. An increase or decrease in the ranking of this report results in discussions about the future of e-government.
Accenture researchers in each of the 23 selected countries
described the typical services that national government should
provide using the Internet in order to fulfill the obvious needs of
citizens and businesses. They accessed and assessed the websites
of national government agencies to determine the quality and
maturity of services, and the level at which business can be
conducted electronically with government.
In total, 169 national government services across nine major
service sectors were investigated by Accenture during a study lasting only two weeks (!) in 2002, using the web, in 23 countries. The
nine service sectors researched were Human Services, Justice &
Public Safety, Revenue, Defence, Education, Transport & Motor
Vehicles, Regulation & Democracy, Procurement and Postal. The
main "indicator" of the eGovernment level chosen by Accenture
is what they call: service maturity. Service maturity is an
indication for the level to which a government has developed an
online presence. It takes into account the numbers of services for
which national governments are responsible that are available
online (Service Maturity Breadth), and the level of completeness
with which each service is offered (Service Maturity Depth).
Service Maturity Overall is the product of Service Maturity
Breadth and Service Maturity Depth.
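As a simple illustration of how such an overall score behaves (the figures below are invented, not Accenture data):

# Sketch: Service Maturity Overall = Breadth * Depth. The numbers are
# hypothetical and only illustrate the shape of the indicator.
services_online, services_total = 120, 169          # invented breadth inputs
depth_scores = [0.4, 0.7, 1.0]                      # invented per-service completeness

breadth = services_online / services_total          # share of services online
depth   = sum(depth_scores) / len(depth_scores)     # average completeness
overall = breadth * depth
print(f"breadth={breadth:.2f}, depth={depth:.2f}, overall={overall:.2f}")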
Service maturity is decomposed into the following aspects:
Publish - Passive/Passive Relationship. The user does not
communicate electronically with the government agency and the
agency does not communicate (other than through what is
published on the website) with the user.
Interact - Active/Passive Interaction. The user must be able to
communicate electronically with the government agency, but the
agency does not necessarily communicate with the user.
Transact - Active/Active Interaction. The user must be able to
communicate electronically with the government agency, and the
agency must be able to respond electronically to the user. A further aspect considered is the degree to which the services are organized around the citizen, as
In 2004 Accenture again investigated 12 service sectors and 206
services in yet again two weeks. They were: agriculture; defence;
e-Democracy; education; human services; immigration, justice
and security; postal; procurement; regulation; participation;
revenue and customs; and transport.
Little is said by Accenture about the metrics involved. They have
performed the survey for five years now and the perspective
chosen is that of a citizen accessing a government service using online means. For this article it is interesting to note the final
remarks in the 2004 report: governments are at the end of what
can be achieved with traditional methods; they are developing
strategies to cope with horizontal integration between agencies.
4.4 Regional innovation scorecard (ris)
One of the ambitions of the EU is to become the most competitive
and dynamic knowledge-based economy of the world (Lisbon agenda). To measure this, the European Regional Innovation
Scorecard (RIS), a scorecard used for monitoring and comparing
the innovation in regions, has been developed [13]. The scorecard
is seen as an acknowledged instrument to compare regions in their ability to foster economic growth. The largest province of the Netherlands, the Province of South Holland, explicitly states on
their website: "The province of Noord-Brabant ranks third on the
European regional innovation scoreboard. Zuid-Holland will have
to make a considerable effort in the coming years if it is to reach
the top-20". They also state that "the scoreboard is regarded as
extremely relevant because it is generally accepted as a
leading European benchmark for innovation dynamics" [30].
Surprisingly, the same regional authority does not pay any attention to the contribution of its own eGovernment services to that level of innovation and economic growth. There is no mention of eGovernment, government services or anything close to it in any of the policy documents related to innovation at this province. This becomes less surprising when the indicators of
the RIS and the EIS are viewed more closely. The RIS uses the
following indicators.
(1) population with tertiary education
(2) lifelong learning,
(3) employment in medium/high-tech manufacturing,
(4) employment in high-tech services,
(5) public R&D expenditures,
(6) business R&D expenditures,
(7) EPO high-tech patent applications,
(8) all EPO patent applications, and
(9)-(13) five indicators using unpublished CIS-2 data: the share of innovative enterprises in manufacturing and in services, innovation expenditures as a percentage of turnover in both manufacturing and services, and the share of sales of new-to-the-firm products in manufacturing.
These indicators are based on Eurostat exercises [13]. From this
analysis it becomes obvious that this Province considers
innovation as a development process "outside" the government
and its own performance. The basic assumption made by the
Dutch Provinces is that governments can stimulate innovation in
the economy without being part of the regional economy. The
main political driver for efficient eGovernment is economic
growth and jobs and the main driver for economic growth is
considered to be innovation. The metrics of the two benchmarks, however, do not coincide.
EVALUATION
The evaluation instruments described before are just examples of
the overwhelming number of measurement instruments currently in use. Table 2 summarizes the described instruments. Although we focused on a limited number of instruments, they are very likely representative of the other measurement instruments.
The following observations can be made about the measurement
instruments:
Most instruments focus on the performance of a single
agency;
Measurement focuses on the front, which is directly visible, and not on the business processes and information systems in the back. This is surprising, as these aspects influence the performance to a large extent;
Short-term focus: not many indicators are aimed at measuring long-term performance;
Interpretation of measures is difficult as all existing
measurement instruments lack a framework depicting
the relationships between the indicators and the use of
resources. Different stakeholders may come to different
interpretations of the status of e-government.
From a theoretical point of view we conclude after examining
many other existing instruments that these instruments lack a clear connection to any theoretical framework on e-Government and a well-described set of preferences that can be used for comparison. Even if we consider that these measurement instruments were developed independently of each other, it is astonishing that they show so little overlap, both in features and in measurement method.
Table 2: Summary of measurement instruments studied
Governmental performance is dependent on a complex of
interlinked processes and dependencies between these processes,
the stakeholders involved including civil servants and their
departments. The legal and political context which is very
dominant in a governmental setting furthermore increases
complexity. Sometimes obvious improvements in a service
provision chain may be blocked because data may not be shared
due to data protection regulations. The system of checks and
balances that is fundamental to governments' functioning and
essential for maintaining citizens' trust in the government can be
troublesome if we want to redesign inefficient processes. A
combination of factors such as the volume of regulations and the
lack of understanding of their original aims, the lack of formal
process models that could help to get insights in the dependencies
between processes and explain how the legal requirements are
translated into process requirements and the lack of formally
described legal models, don't really help if we want to explicitly
formulate the criteria that determine e-Government success. The criteria that determine e-Government success (or failure) are exactly the ones that should be captured in our measurement instruments.
But even if we had a better theory on the performance of e-Government processes and well-founded measurement instruments, the interpretation of the outcomes of applying those instruments would be problematic, especially within the political context in which these instruments are generally used. Bruin [10] showed that when the distance between the
interpreters and providers of information is bigger, it is more
difficult to interpret information.
Politicians do not always steer on rational grounds, but suppose they would; their control system (or policy-making process, see Figure 5) would then include a comparison and control function.
We stated before that comparison is based upon a set of
preferences. Public services can consequently be evaluated using
competing norms and values [18]. A court for example might be
asked to deal with cases as efficiently as possible, to maximize
the number of cases dealt with, within a certain time period on the
one hand, while on the other hand the sentence should be carefully made, founded on the right arguments and understandable. Performance measurement instruments that lack
an explicit set of preferences (or norms for comparison) might
give a wrong view on reality if looked at with other preferences in
mind.
CONCLUSIONS AND FURTHER RESEARCH
We investigated the current e-government measurement practice in the Netherlands and some theoretical work in this field. Our analysis shows a messy picture of the measurement of
e-government. Many measurement instruments take a too
simplistic view and focus on measuring what is easy to measure.
Many of the instruments focus on measuring the visible front of e-government
and ignore the performance of the cross-agency
business processes. None of the instruments focus on measuring
multi-service organizations. The instruments focus on one (type
of) agency and do not provide an overall picture.
Interpretation of measures is difficult as all existing measurement
instruments lack a framework depicting the relationships between
the indicators and the use of resources. The different measures
may fit the aim of the owners of the e-governmental services,
however, due to conflicting aims and priorities little agreement
exists on a uniform set of measures, needed for comparison of e-government
development. Different stakeholders may come to
different interpretations of the status of e-government. As such, the existing instruments provide a picture of the status of e-government that may not be useful as a surrogate for deducing e-government performance.
Traditional methods of measuring e-government impact and
resource usage fall short of the richness of data required for the
effective evaluation of e-government strategies. A good
theoretical framework for measuring the impact of e-government
and the use of resources is still lacking. Due to this fact and the
many reports that are produced on e-Government developments,
based on different measurement instruments that used different
criteria, we can hardly learn from existing practices. It would be
Measurement instrument | Focus | Update frequency | Source data | Characteristics of the method
Overheid.nl | All public agency websites | Yearly | Experts | Ranking based on web site features
Webdam | Municipality websites | Monthly (continuous) | Expert panel consisting of 3 types of representatives: 1) civil servants, 2) citizens and 3) experts | Ranking based on web site features
Accenture | Comparison of countries | Yearly | Accenture researchers, based on judgment of selected services | Ranking based on an inventory of services
Regional innovation scorecard | European regions | — | Eurostat | Ranking based on quantitative economic indicators
beneficial for both citizens and governments if such a theoretical framework were developed and a more or less standardized measurement instrument became available. This would allow governments and designers to compare different e-government approaches and to learn from them; learning from our experiences certainly fits within the ambition of the European Union to become the most competitive and dynamic knowledge-based economy of the world.
REFERENCES
[1] Accenture (2001). Governments Closing Gap Between
Political Rhetoric and eGovernment Reality,
http://www.accenture.com/xdoc/en/industries/government/20
01FullReport.pdf
.
[2] Accenture (2002). eGovernment Leadership -Realizing the
Vision,
http://www.accenture.com/xd/xd.asp?it=enWeb&xd=industri
es/government/gove_welcome.xml
[3] Accenture (2003). eGovernment Leadership: Engaging the
Customer,
http://www.accenture.com/xd/xd.asp?it=enweb&xd=industri
es/government/gove_capa_egov.xml
[4] Armour, F.J. Kaisler, S.H. and Liu, S.Y. (1999). A big-picture
look at Enterprise Architectures, IEEE IT
Professional, 1(1): 35-42.
[5] Bign, E., Moliner, M.A., and Snchez, J. (2003) Perceived
Quality and satisfaction in multiservice organizations. The
case of Spanish public services. Journal of Services
Marketing, 17(4), pp. 420-442.
[6] Boer, A. Engers, T. van and R. Winkels (2003). Using
Ontologies for Comparing and Harmonizing Legislation. In
Proceedings of the International Conference on Artificial
Intelligence and Law (ICAIL), Edinburgh (UK), ACM Press.
[7] Alexander Boer, Radboud Winkels, Rinke Hoekstra, and
Tom M. van Engers. Knowledge Management for
Legislative Drafting in an International Setting. In D.
Bourcier, editor, Legal Knowledge and Information Systems.
Jurix 2003: The Sixteenth Annual Conference., pages 91-100
, Amsterdam, 2003. IOS Press.
[8] Bons R., Ronald M.Lee and Tan, Yua-Hua, (1999). A
Formal Specification of Automated Auditing of Trustworthy
Trade Procedures for Open Electronic Commerce. Hawaii
International Conference on System Sciences (HICCS).
[9] Buckland and F. Gey (1994). The relationship between recall
and precision. Journal of the American Society for
Information Science, 45(1):12-19.
[10] Bruin, H. de (2002). Performance measurement in the public
sector: strategies to cope with the risks of performance
measurement. The International Journal of Public Sector
Management, vol. 15, no. 7, pp. 578-594.
[11] Checkland, P. (1981). Systems Thinking, Systems Practice.
Wiley, Chichester.
[12] Coase, R. (1937). The Nature of the Firm. Economia, 4: 386-405
.
[13] European Commissions (2002). 2003 European Innovation
Scoreboard: European Trend Chart on Innovation.
Innovation/SMEs Programme.
[14] European Commission (2004). Green paper on Public
private partnerships and community law on public contracts
and concessions, European Commission, no. 327.
[15] Fagan, J.C. & Fagan, B. (2004). An accessibility study of
state legislative web sites. Government Information
Quarterly, 21: 65-85.
[16] Galliers, R.D. (1992). Information Systems Research. Issues,
methods and practical guidelines. Alfred Waller, Fawley,
England.
[17] Green, S.H.B. (1998), Cyberspace winners: how they did it,
Business Week, 22 June, pp. 154-60.
[18] Groot, H., de and R. Goudriaan (1991). De productiviteit van
de overheid: over prestaties, personeel en uitgaven in de
publieke sector. Academic Service, Schoonhoven, The
Netherlands.
[19] Hazlett, S.A. and Hill, F. (2003). E-government: the realities
of using IT to transform the public sector. Managing Service
Quality, Vol. 13, No. 6, pp. 445-452.
[20] Janssen, M.F.W.H.A. (2001). Designing Electronic
Intermediaries. Doctoral Dissertation, Delft University of
Technology.
[21] Janssen, Marijn & Davidse, Anouk (2004). Evaluation of a
Performance-based Accountability System. The 4th
European Conference on E-government (ECEG), Dublin
Castle, Dublin, Ireland, 17-18 June 2004
[22] Jensen, M. and Meckling, W. (1976). Theory of the Firm:
Managerial behavior, agency costs, and capital structure,
Journal of Financial Economics, 5: 305-360.
[23] Layne, KJL & Lee, J. (2001) "Developing fully functional E-government
: A four stage model", Government Information
Quarterly, Vol 18, No. 2, pp 122-136.
[24] Lee, J.K. (2003). A model for monitoring public sector web
site strategy. Internet Research. Electronic networking
application and policy. Vol. 13, no. 4, pp259-266.
[25] Malone, T.W. & Crowston, K. (1994). The Interdisciplinary
Study of Coordination. ACM Computing Surveys, vol. 26,
no. 2, pp. 87-119.
[26] Merwe, R. van der, and Bekker,J. (2003). A framework and
methodology for evaluating e-commerce web sites. Internet
Research: electronic Networking Applications and Policy.
Vol. 13, No.5, pp. 330-341.
[27] Moon, M.J. (2002). The Evolution of E-Government Among
Municipalities; Rhetoric or reality? Public Administration
Review. Vol. 62, no. 4, pp. 424-433.
[28] Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988).
SERVQUAL: a multiple item scale for measuring consumer
perceptions of service quality, Journal of Retailing, Vol. 64,
pp. 12-40.
[29] Peters, Rob and Wilson, Frank (2003). Natural Language
access to regional information sources: the PortofRotterdam
case: 4th International Workshop on Image Analysis for
Multimedia Interactive Services, WIAMIS 2003.
[30] Provincial council (2003) Innovatiebrief kenniseconomie
Zuid-Holland "Kennismaken met Kenniszaken,
http://www.zuid-holland.nl/images/126_107822.pdf
, page
14.
[31] Rohleder, S.J. et al. (2004). eGovernment Leadership: High
Performance, Maximum Value. Fifth Annual Accenture
eGovernment Study. Accenture Government Executive
Studies,
http://www.accenture.com/xdoc/en/industries/government/go
ve_egov_value.pdf
[32] Simeon, R. (1999), ``Evaluating domestic and international
Web site strategies'',
Internet Research, Vol. 9 No. 4, pp.
297-308.
[33] Quinn, R.E. and Rohrbaugh, J.W. (1983). A Spatial Model of
Effectiveness criteria: Towards a competing values approach
to organizational effectiveness. Management Science 29:
363-377.
[34] Van Engers, T.M. (2004). Legal Engineering: A Knowledge Engineering Approach to Improving Legal Quality. In Padget, J., Neira, R., and De León, J.L. (eds.), eGovernment and eDemocracy: Progress and Challenges. Instituto Politécnico Nacional Centro de Investigación en Computación, ISBN 970-36-0152-9, pp. 189-206.
[35] Williamson, O.E. (1975). Market and Hierarchies, Analysis
and Antitrust Implications. A study in the economics of
internal organization. Macmillan, New York.
[36] Yin, R.K. (1989). Case Study Research: Design and
methods. Sage publications, Newbury Park, California. | E-government;law;benchmark;interoperability;evaluation;measurement;architectures;e-government;business process;public administration |
137 | Minimizing Average Flow Time on Related Machines | We give the first on-line poly-logarithmic competitve algorithm for minimizing average flow time with preemption on related machines, i.e., when machines can have different speeds. This also yields the first poly-logarithmic polynomial time approximation algorithm for this problem. More specifically, we give an O(log P log S)-competitive algorithm, where P is the ratio of the biggest and the smallest processing time of a job, and S is the ratio of the highest and the smallest speed of a machine. Our algorithm also has the nice property that it is non-migratory. The scheduling algorithm is based on the concept of making jobs wait for a long enough time before scheduling them on slow machines. | INTRODUCTION
We consider the problem of scheduling jobs that arrive
over time in multiprocessor environments. This is a fundamental
scheduling problem and has many applications, e.g.,
servicing requests in web servers. (This work was done as part of the "Approximation Algorithms" partner group of MPI-Informatik, Germany, and was supported by an IBM faculty development award and a travel grant from the Max Planck Society.) The goal of a scheduling
algorithm is to process jobs on the machines so that some
measure of performance is optimized. Perhaps the most natural
measure is the average flow time of the jobs. Flow time
of a job is defined as the difference of its completion time
and its release time, i.e., the time it spends in the system.
This problem has received considerable attention in the
recent past [1, 2, 10]. All of these works make the assumption
that the machines are identical, i.e., they have the same
speed. But it is very natural to expect that in a heterogeneous
processing environment different machines will have different
processing power, and hence different speeds. In this paper
, we consider the problem of scheduling jobs on machines
with different speeds, which is also referred to as related machines
in the scheduling literature. We allow for jobs to be
preempted. Indeed, the problem turns out to be intractable
if we do not allow preemption. Kellerer et al. [9] showed that the problem of minimizing average flow time without preemption has no online algorithm with o(n) competitive ratio even on a single machine. They also showed that it is hard to get a polynomial time O(n^{1/2})-approximation
algorithm for this problem. So preemption is a standard,
and much needed, assumption when minimizing flow time.
In the standard notation of Graham et al. [7], we consider the problem Q | r_j, pmtn | Σ_j F_j. We give the first poly-logarithmic competitive algorithm for minimizing average flow time on related machines. More specifically, we give an O(log^2 P log S)-competitive algorithm, where P is the ratio of the biggest and the smallest processing time of a job, and S is the ratio of the highest and the smallest speed of a machine. This is also the first polynomial time poly-logarithmic approximation algorithm for this problem. Despite its similarity to the special case when machines are identical, this problem is more difficult since we also have to worry about the processing times of jobs.
Our algorithm is also non-migratory
, i.e., it processes a job on only one machine. This
is a desirable feature in many applications because moving
jobs across machines may have many overheads.
Related Work. The problem of minimizing average flow time (with preemption) on identical parallel machines has received much attention in the past few years. Leonardi and Raz [10] showed that the Shortest Remaining Processing Time (SRPT) algorithm has a competitive ratio of O(log(min(n/m, P))), where n is the number of jobs, and m is the number of machines. A matching lower bound on the competitive ratio of any on-line (randomized) algorithm for this problem was also shown by the same authors. Awerbuch et al. [2] gave a non-migratory version of SRPT with competitive ratio of O(log(min(n, P))). Chekuri et al. [5] gave a non-migratory algorithm with competitive ratio of O(log(min(n/m, P))). One of the merits of their algorithm was a much simpler analysis of the competitive ratio. Instead of preferring jobs according to their remaining processing times, their algorithm divides jobs into classes when they arrive. A job goes to class k if its processing time is between 2^{k-1} and 2^k. The scheduling algorithm now prefers jobs of smaller class irrespective of the remaining processing time. We also use this notion of classes of jobs in our algorithm. Azar and Avrahami [1] gave a non-migratory algorithm with immediate dispatch, i.e., a job is sent to a machine as soon as it arrives. Their algorithm tries to balance the load assigned to each machine for each class of jobs. Their algorithm also has a competitive ratio of O(log(min(n/m, P))). It is also interesting to note that these are also the best known results in the off-line setting of this problem.

Kalyanasundaram and Pruhs [8] introduced the resource augmentation model where the algorithm is allowed extra resources when compared with the optimal off-line algorithm. These extra resources can be either extra machines or extra speed. For minimizing average flow time on identical parallel machines, Phillips et al. [11] showed that we can get an optimal algorithm if we are given twice the speed as compared to the optimal algorithm. In the case of single machine scheduling, Becchetti et al. [4] showed that we can get O(1/ε)-competitive algorithms if we are given (1 + ε) times more resources. Bansal and Pruhs [3] extended this result to a variety of natural scheduling algorithms and to L_p norms of flow times of jobs as well. In the case of identical parallel machines, Chekuri et al. [6] gave simple scheduling algorithms which are O(1/ε^3)-competitive with (1 + ε) resource augmentation.
Our Techniques A natural algorithm to try here would
be SRPT. We feel that it will be very difficult to analyze
this algorithm in case of related machines. Further SRPT is
migratory. Non-migratory versions of SRPT can be shown
to have bad competitive ratios. To illustrate the ideas involved
, consider the following example. There is one fast
machine, and plenty of slow machines. Suppose many jobs
arrive at the same time. If we distribute these jobs to all the
available machines, then their processing times will be very
high. So at each time step, we need to be selective about
which machines we shall use for processing.
Ideas based
on distributing jobs in proportion to speeds of machines as
used by Azar and Avrahami[1] can also be shown to have
problems.
Our idea of selecting machines is the following. A job is
assigned to a machine only if it has waited for time which
is proportional to its processing time on this machine. The
intuition should be clear if a job is going to a slow machine,
then it can afford to wait longer before it is sent to the
machine. Hopefully in this waiting period, a faster machine
might become free in which case we shall be able to process
it in smaller time. We also use the notion of class of jobs as
introduced by Chekuri et al. [5], which allows machines to
have a preference ordering over jobs. We feel that this idea
of delaying jobs even if a machine is available is new.
As mentioned earlier, the first challenge is to bound the
processing time of our algorithm. In fact a bulk of our paper
is about this. The basic idea used is the following if a
job is sent to a very slow machine, then it must have waited
long. But then most of this time, our algorithm would have
kept the fast machines busy. Since we are keeping fast machines
busy, the optimum also can not do much better. But
converting this idea into a proof requires several technical
steps.
The second step is of course to bound the flow time of
the jobs. It is easy to see that the total flow time of the
jobs in a schedule is same as the sum over all time t of
the number of waiting jobs at time t in the schedule. So it
would be enough if we show that for any time t, the number
of jobs which are waiting in our schedule is close to that
in the optimal schedule. Chekuri et al. [5] argue this in the following manner. Consider a time t. They show that there is a time t' < t such that the number of waiting jobs of a certain class k or less in both the optimal and their schedule is about the same (this is not completely accurate, but captures the main idea). Further, they show that t' is such that all machines are busy processing jobs of class k or less during (t', t). So it follows that the number of waiting jobs of this class or less at time t is about the same in both schedules. We cannot use this idea because we would never be able to keep all machines busy (some machines can be very slow). So we have to define a sequence of time steps like t' for each time and make clever use of geometric scaling to show that the flow time is bounded.
PRELIMINARIES
We consider the on-line problem of minimizing total flow time for related machines. Each job j has a processing requirement of p_j and a release date of r_j. There are m machines, numbered from 1 to m. The machines can have different speeds, and the processing time of a job j on a machine is p_j divided by the speed of the machine. The slowness of a machine is the reciprocal of its speed. It will be easier to deal with slowness, and so we shall use slowness instead of speed in the foregoing discussion. Let s_i denote the slowness of machine i. So the time taken by job j to process on machine i is p_j s_i. Assume that the machines have been numbered so that s_1 ≤ s_2 ≤ ... ≤ s_m. We shall assume without loss of generality that processing times, release dates, and slownesses are integers. We shall use the term volume of a set of jobs to denote their processing time on a unit speed machine.

Let A be a scheduling algorithm. The completion time of a job j in A is denoted by C^A_j. The flow time of j in A is defined as F^A_j = C^A_j - r_j. Our goal is to find an on-line scheduling algorithm which minimizes the total flow time of jobs. Let O denote the optimal off-line scheduling algorithm.

We now develop some notation. Let P denote the ratio of the largest to the smallest processing time of the jobs, and S the ratio of the largest to the smallest slowness of the machines. For ease of notation, we assume that the smallest processing requirement of any job is 1, and the smallest slowness of a machine is 1. Let α and β be suitably chosen large enough constants. We divide the jobs and the machines into classes. A job j is said to be in class k if p_j ∈ [α^{k-1}, α^k), and a machine i is said to be in class l if s_i ∈ [β^{l-1}, β^l). Note that there are O(log P) classes for jobs and O(log S) classes for machines. Given a schedule A, we say that a job j is active at time t in A if r_j ≤ t but j has not finished processing by time t in A.
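To make the class and maturity definitions concrete, here is a small Python sketch. It is only an illustration: the numeric values chosen for the constants α and β below are placeholders (the analysis only requires them to be suitably large), and the function names are ours, not the paper's.

    ALPHA = 4.0  # base for job classes (placeholder value for the constant alpha)
    BETA = 4.0   # base for machine classes (placeholder value for the constant beta)

    def job_class(p_j):
        # job j is in class k if p_j lies in [ALPHA**(k-1), ALPHA**k); we assume p_j >= 1
        k = 1
        while p_j >= ALPHA ** k:
            k += 1
        return k

    def machine_class(s_i):
        # machine i is in class l if its slowness s_i lies in [BETA**(l-1), BETA**l); s_i >= 1
        l = 1
        while s_i >= BETA ** l:
            l += 1
        return l

    def maturity_time(r_j, k, l):
        # time at which a class-k job released at r_j matures for machines of class l
        return r_j + ALPHA ** k * BETA ** l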
SCHEDULING ALGORITHM
We now describe the scheduling algorithm. The underlying idea of the algorithm is the following -- if we send a job j to machine i, we make sure that it waits for at least p_j s_i units of time (which is its processing time on machine i). Intuitively, the extra waiting time can be charged to its processing time. Of course, we still need to make sure that the processing time does not blow up in this process.

The algorithm maintains a central pool of jobs. When a new job gets released, it first goes to the central pool and waits there to get assigned to a machine. Let W(t) denote the set of jobs in the central pool at time t. Our algorithm will assign each job to a unique machine -- if a job j gets assigned to a machine i, then j will get processed by machine i only. Let A_i(t) be the set of active jobs at time t which have been assigned to machine i. We shall maintain the invariant that A_i(t) contains at most one job of each class. So |A_i(t)| ≤ log P.

We say that a job j ∈ W(t) of class k is mature for a machine i of class l at time t if it has waited for at least α^k β^l time in the central pool, i.e., t - r_j ≥ α^k β^l. For any time t, we define a total order ≺ on the jobs in W(t) as follows: j ≺ j' if either (i) class(j) < class(j'), or (ii) class(j) = class(j') and r_j > r_{j'} (in case class(j) = class(j') and r_j = r_{j'} we can order them arbitrarily).

Now we describe the actual details of the algorithm. Initially, at time 0, A_i(0) is empty for each machine. At each time t, the algorithm considers machines in the order 1, 2, ..., m (recall that the machines have been arranged in the ascending order of their slowness). Let us describe the algorithm when it considers machine i. Let M_i(t) be the jobs in W(t) which are mature for machine i at time t. Let j ∈ M_i(t) be the smallest job according to the total order ≺. If class(j) < class(j') for all jobs j' ∈ A_i(t), then we assign j to machine i (i.e., we delete j from W(t) and add it to A_i(t)).

Once the algorithm has considered all the machines at time t, each machine i processes the job of smallest class in A_i(t) at time t. This completes the description of our algorithm. It is also easy to see that the machines need to perform the above steps for only a polynomial number of time steps t (i.e., when a job finishes or matures for a class of machines).

We remark that both the conditions in the definition of ≺ are necessary. Condition (i) is clear because it prefers smaller jobs. Condition (ii) makes sure that we make old jobs wait longer so that they can mature for slower machines. It is easy to construct examples where, if we do not obey condition (ii), slow machines will never get used and so we will incur very high flow time.
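The dispatch rule itself is short. The following Python sketch of a single scheduling step at time t is only an illustration of the rule described above (it is not the authors' code); it reuses the hypothetical helpers machine_class, ALPHA and BETA from the sketch in the preliminaries, and represents each job as a dictionary holding its release date and class.

    def dispatch_step(t, machine_slowness, pool, assigned):
        # machine_slowness: list of slowness values, sorted in ascending order
        # pool: jobs currently in the central pool (dicts with 'release' and 'class')
        # assigned: dict machine index -> list of jobs assigned to it (at most one per class)
        for i, s_i in enumerate(machine_slowness):
            l = machine_class(s_i)
            mature = [j for j in pool
                      if t - j['release'] >= ALPHA ** j['class'] * BETA ** l]
            if not mature:
                continue
            # total order: smaller class first; within a class, later release date first
            j = min(mature, key=lambda job: (job['class'], -job['release']))
            if all(j['class'] < other['class'] for other in assigned[i]):
                pool.remove(j)
                assigned[i].append(j)
        # every machine then processes its assigned job of smallest class at time t
        return {i: (min(jobs, key=lambda job: job['class']) if jobs else None)
                for i, jobs in assigned.items()}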
ANALYSIS
We now show that the flow time incurred by our algorithm is within a poly-logarithmic factor of that of the optimal algorithm. The outline of the proof is as follows. We first argue that the total processing time incurred by our algorithm is not too large. Once we have shown this, we can charge the waiting time of all the jobs in A_i(t), over all machines i and times t, to the total processing time. After this, we show that the total waiting time of the jobs in the central pool is also bounded by a poly-logarithmic factor times the optimum's flow time.

Let A denote our algorithm. For a job j, define the dispatch time d^A_j of j in A as the time at which it is assigned to a machine. For a job j and class l of machines, let t_M(j, l) denote the time at which j matures for machines of class l, i.e., t_M(j, l) = r_j + α^k β^l, where k is the class of job j. Let F^A denote the total flow time of our algorithm. For a job j, let P^A_j denote the time it takes to process job j in A (i.e., the processing time of j in A). Similarly, for a set of jobs J, define P^A_J as the total processing time incurred by A on these jobs. Let P^A denote the sum Σ_j P^A_j, i.e., the total processing time incurred by A. Define F^O, P^O_j, P^O_J and P^O similarly. Let m_l denote the number of machines of class l.
4.1 Bounding the Processing Time

We shall now compare P^A with F^O. For each value of class k of jobs and class l of machines, let J(k, l) be the jobs of class k which get processed by A on machines of class l. For each value of k and l, we shall bound the processing time incurred by A on J(k, l). So fix a class k of jobs and a class l of machines.

The idea of the proof is as follows. We shall divide the time line into intervals I_1 = (t^1_b, t^1_e), I_2 = (t^2_b, t^2_e), ... so that each interval I_q satisfies the following property: A is almost busy in I_q processing jobs of class at most k on machines of class l - 1 or less. Further, these jobs have release time at least t^q_b. We shall denote these jobs by H^q. Now, if O schedules jobs in J(k, l) on machines of class l or more, we have nothing to prove since O would incur at least as much processing time as we do for these jobs. If O schedules some jobs in J(k, l) on machines of class l - 1 or less during the interval I_q, then one of these two cases must happen: (i) some jobs in H^q need to be processed on machines of class l or more, or (ii) some jobs in H^q get processed after time t^q_e. We shall show that both of these cases are good for us and we can charge the processing times of jobs in J(k, l) to the flow time of jobs in H^q in O.

Let us formalize these ideas (see Figure 1). The starting points of the intervals I_1, I_2, ... will be in decreasing order, i.e., t^1_b > t^2_b > ... (so we shall work backwards in time while defining these intervals). Suppose we have defined intervals I_1, ..., I_{q-1} so far. Let J_q(k, l) denote the set of jobs in J(k, l) which get released before interval I_{q-1} begins, i.e., before t^{q-1}_b (J_1(k, l) is defined as J(k, l)).
Now we define I_q. Let j^q_0 ∈ J_q(k, l) be the job with the highest dispatch time. Let r^q_0 denote the release time of j^q_0, let k^q_0 denote the class of j^q_0, and let d^q_0 denote the dispatch time of j^q_0. The right end-point t^q_e of I_q is defined as d^q_0. Consider the jobs in J(k, l) which get dispatched during (r^q_0, d^q_0). Let j^q_1 be such a job with the earliest release date. Define r^q_1, k^q_1, d^q_1 similarly.

Let H^q_0(l'), l' < l, be the set of jobs of class at most k^q_0 which are dispatched to a machine of class l' during the time interval (t_M(j^q_0, l'), d^q_0). Note that the phrase "class at most k^q_0" in the previous sentence is redundant, because we cannot dispatch a job of class greater than k^q_0 during (t_M(j^q_0, l'), d^q_0) on machines of class l' (otherwise we should have dispatched j^q_0 earlier). Let H^q_0 denote ∪_{l'=1}^{l-1} H^q_0(l'). Define H^q_1(l'), H^q_1 similarly.

If all jobs in H^q_1 ∪ H^q_0 get released after r^q_1, we stop the process here. Otherwise, find a job in H^q_1 ∪ H^q_0 which has the earliest release date, and let j^q_2 denote this job. As before, define r^q_2 as the release date of j^q_2, and k^q_2 as the class of j^q_2. Again define H^q_2(l') as the set of jobs (of class at most k^q_2) which get dispatched on machines of class l' during (t_M(j^q_2, l'), d^q_2). Define H^q_2 analogously.
So now assume we have defined j^q_0, j^q_1, ..., j^q_i and H^q_0, H^q_1, ..., H^q_i, i ≥ 2. If all jobs in H^q_i are released after r^q_i, then we stop the process. Otherwise, define j^q_{i+1} as the job in H^q_i with the earliest release date. Define H^q_{i+1} in a similar manner (see Figure 1).

Figure 1: An illustration of the notation used.

We remark that the first two steps of this process differ from the subsequent steps. This, as we will see later, is required to ensure that the intervals do not overlap too much. The following simple observation shows that this process will stop.
Claim 4.1. For i ≥ 2, k^q_i < k^q_{i-1}.

Proof. Consider the job j^q_i ∈ H^q_{i-1}(l'). A prefers j^q_i over j^q_{i-1}. If class(j^q_i) = class(j^q_{i-1}), the release time of j^q_i must be at least that of j^q_{i-1}. But this is not the case. Thus, the class of j^q_i must be less than that of j^q_{i-1}.

Suppose this process stops after u_q steps. Define the beginning of I_q, i.e., t^q_b, as r^q_{u_q}. This completes our description of I_q.
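As a reading aid, the backward construction of a single interval I_q can be summarized in the following Python-style pseudocode. It is only a sketch of the process just described; the helper object sched (exposing release dates, dispatch times, and the sets H^q_i = ∪_{l'<l} H^q_i(l') as Python sets) is hypothetical.

    def build_interval(J_q, J_all, sched, l):
        # J_q: jobs of J(k, l) released before the previous interval began
        # J_all: all of J(k, l)
        j0 = max(J_q, key=sched.dispatch_time)           # j_0^q: highest dispatch time
        t_e = sched.dispatch_time(j0)                    # right end-point of I_q
        window = [x for x in J_all
                  if sched.release(j0) < sched.dispatch_time(x) <= t_e]
        seq = [j0, min(window, key=sched.release)]       # j_0^q and j_1^q
        H = [sched.H_set(seq[0], l), sched.H_set(seq[1], l)]
        pool = H[0] | H[1]                               # step 2 looks at H_0^q U H_1^q
        while any(sched.release(x) < sched.release(seq[-1]) for x in pool):
            nxt = min(pool, key=sched.release)           # earliest release date
            seq.append(nxt)
            H.append(sched.H_set(nxt, l))
            pool = H[-1]                                 # later steps look only at H_i^q
        t_b = sched.release(seq[-1])                     # left end-point t_b^q = r^q_{u_q}
        return t_b, t_e, seq, H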
Let H^q denote ∪_i H^q_i. We are now ready to show that the interval I_q is sufficiently long, and that for a large fraction of this time, all machines of class less than l are processing jobs of class at most k which are released in this interval itself. This will be valuable later in arguing that O could not have scheduled the jobs of J(k, l) on the lower class machines and thereby incurred small processing time.

Lemma 4.2. The length of the interval I_q is at least α^k β^l.

Proof. Indeed, job j^q_0 must wait for at least α^k β^l amount of time before being dispatched to a machine of class l. So t^q_e - t^q_b ≥ d^q_0 - r^q_0 ≥ α^k β^l.

Lemma 4.3. H^q consists of jobs of class at most k, and all jobs in H^q are released after t^q_b.

Proof. The first statement is clear from the definition of H^q. Let us look at H^q_i. As argued in the proof of Claim 4.1, all jobs of class k^q_i in H^q_i are released after r^q_i. If all jobs of class less than k^q_i in H^q_i are released after r^q_i, then we are done. Otherwise, all such jobs are released after r^q_{i+1} (after r^q_2 if i = 0). This completes the proof of the lemma.

Lemma 4.4. A machine of class l' < l processes jobs of H^q for at least |I_q| - 6 α^k β^{l'} amount of time during I_q.

Proof. Let us fix a machine i of class l' < l, and let us consider the interval (r^q_p, d^q_p). Job j^q_p matures for i at time t_M(j^q_p, l'). So machine i must be busy during (t_M(j^q_p, l'), d^q_p), otherwise we could have dispatched j^q_p earlier. We now want to argue that machine i is mainly busy during this interval processing jobs from H^q.

Let j be a job which is processed by i during (t_M(j^q_p, l'), d^q_p). We have already noted that j can be of class at most k^q_p. If j gets dispatched after t' = t_M(j^q_p, l'), then by definition it belongs to H^q. If j gets dispatched before t', it must belong to A_i(t'). But A_i(t') can contain at most one job of each class. So the total processing time taken by the jobs of A_i(t') during (t', d^q_p) is at most β^{l'} (α + α^2 + ... + α^{k^q_p}) ≤ 3/2 α^{k^q_p} β^{l'}.

So during (r^q_p, d^q_p), machine i processes jobs from H^q except perhaps for a period of length (t_M(j^q_p, l') - r^q_p) + 3/2 α^{k^q_p} β^{l'} = 5/2 α^{k^q_p} β^{l'}. Since ∪_p (r^q_p, d^q_p) covers I_q, the amount of time i does not process jobs from H^q is at most 5/2 β^{l'} (α^k + α^k + α^{k-1} + ... + 1), which proves the lemma.
We would like to charge the processing time for jobs in J(k, l) in our solution to the flow time of jobs in H^q in O. But to do this we require that the sets H^q be disjoint. We next prove that this is almost true.

Lemma 4.5. For any q, I_q and I_{q+2} are disjoint. Hence H^q and H^{q+2} are also disjoint.

Proof. Recall that J_{q+1}(k, l) is the set of jobs in J(k, l) which get released before t^q_b. However, some of these jobs may get dispatched after t^q_b, and hence I_q and I_{q+1} can intersect.

Consider some job j ∈ J_{q+1}(k, l) which is dispatched during I_q. Now, observe that j^{q+1}_0 is released before t^q_b (by definition of J_{q+1}(k, l)) and dispatched after j. So j gets dispatched in (r^{q+1}_0, d^{q+1}_0). This means that the release date of j^{q+1}_1 must be before the release date of j. But t^{q+1}_b ≤ r^{q+1}_1, and so j is released after t^{q+1}_b. But then j ∉ J_{q+2}(k, l). So all jobs in J_{q+2}(k, l) get dispatched before I_q begins, which implies that I_q and I_{q+2} are disjoint.
Consider an interval I_q. Let D_q(k, l) denote the jobs in J(k, l) which get released after t^q_b but dispatched before t^q_e. It is easy to see that D_q(k, l) is a superset of J_q(k, l) \ J_{q+1}(k, l). So ∪_q D_q(k, l) covers all of J(k, l).

Now we would like to charge the processing time of jobs in D_q(k, l) to the flow time incurred by O on the jobs in H^q. Before we do this, let us first show that O incurs a significant amount of flow time processing the jobs in H^q. This is proved in the technical theorem below, whose proof we defer to the appendix.
Theorem 4.6. Consider a time interval I = (t_b, t_e) of length T. Suppose there exists a set of jobs J_I such that every job j ∈ J_I is of class at most k and is released after t_b. Further, A dispatches all the jobs in J_I during I and only on machines of class less than l. Further, each machine i of class l' < l satisfies the following condition: machine i processes jobs from J_I for at least T - 6 α^k β^{l'} amount of time. Assuming T ≥ α^k β^l, the flow time F^O_{J_I} incurred by O on the jobs in J_I is at least Ω(P^A_{J_I}).

Substituting t_b = t^q_b, t_e = t^q_e, I = I_q, T = |I_q|, J_I = H^q in the statement of Theorem 4.6 and using Lemmas 4.2, 4.3, 4.4, we get

P^A_{H^q} = O(F^O_{H^q}).   (1)

We are now ready to prove the main theorem of this section.
Theorem 4.7. The processing time incurred by A on D_q(k, l), namely P^A_{D_q(k,l)}, is O(F^O_{H^q} + F^O_{D_q(k,l)}).

Proof. Let V denote the volume of D_q(k, l). By Lemma 4.4, machines of class l' < l do not process jobs from H^q for at most 6 α^k β^{l'} units of time during I_q. This period translates to a job-volume of at most 6 α^k. If V is sufficiently small, then it can be charged to the processing time (or equivalently the flow time in O) of the jobs in H^q.

Lemma 4.8. If V ≤ c α^k (m_0 + ... + m_{l-1}), where c is a constant, then P^A_{D_q(k,l)} is O(F^O_{H^q}).

Proof. The processing time incurred by A on D_q(k, l) is at most β^l V = O(α^k β^{l+1} (m_0 + ... + m_{l-1})). Now, Lemmas 4.2 and 4.4 imply that machines of class l' < l process jobs from H^q for at least α^k β^l / 2 amount of time. So P^A_{H^q} is at least α^k β^l (m_0 + ... + m_{l-1}) / 2. Using equation (1), we get the result.

So from now on we shall assume that V ≥ c α^{k+1} (m_0 + ... + m_{l-1}) for a sufficiently large constant c. We deal with the easy cases first:
Lemma 4.9. In each of the following cases P^A_{D_q(k,l)} is O(F^O_{D_q(k,l)} + F^O_{H^q}):
(i) At least V/2 volume of D_q(k, l) is processed on machines of class at least l by O.
(ii) At least V/4 volume of D_q(k, l) is finished by O after time t^q_e - α^k β^l / 2.
(iii) At least V/8 volume of H^q is processed by O on machines of class l or more.

Proof. Note that P^A_{D_q(k,l)} is at most V β^l. If (i) happens, O pays at least (V/2) β^{l-1} amount of processing time for D_q(k, l) and so we are done. Suppose case (ii) occurs. All jobs in D_q(k, l) get dispatched by time t^q_e. So they must get released before t^q_e - α^k β^l. So at least a 1/4 fraction of this volume waits for at least α^k β^l / 2 amount of time in O. This means that F^O_{D_q(k,l)} is at least V β^l / 8, because each job has size at most α^k. This again implies the lemma. Case (iii) is similar to case (i).

So we can assume that none of the cases in Lemma 4.9 occur. Now, consider a time t between t^q_e - α^k β^l / 2 and t^q_e. Let us look at machines of class 1 to l - 1. O finishes at least V/4 volume of D_q(k, l) on these machines before time t. Further, at most V/8 volume of H^q goes out from these machines to machines of higher class. The volume that corresponds to time slots in which these machines do not process jobs of H^q in A is at most V/16 (for a sufficiently large constant c). So at least V/16 amount of volume of H^q must be waiting in O at time t. So the total flow time incurred by O on H^q is at least V/(16 α^k) · α^k β^l / 2, which again is Ω(V β^l). This proves the theorem.

Combining the above theorem with Lemma 4.5, we get

Corollary 4.10. P^A_{J(k,l)} is O(F^O), and so P^A is O(log S log P · F^O).
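Written out (a LaTeX sketch with constants suppressed, using only the facts stated above), the corollary is the following summation over the O(log P) job classes and O(log S) machine classes:

    \[
      P^A \;=\; \sum_{k}\sum_{l} P^A_{J(k,l)}
      \;\le\; \sum_{k}\sum_{l} O\!\left(F^O\right)
      \;=\; O\!\left(\log P \cdot \log S \cdot F^O\right),
    \]

where, for each fixed pair (k, l), the bound P^A_{J(k,l)} = O(F^O) follows from Theorem 4.7 because the sets H^q (for odd q, and separately for even q) are disjoint by Lemma 4.5, so their flow times in O add up to at most F^O each.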
4.2 Bounding the Flow Time

We now show that the average flow time incurred by our algorithm is within a poly-logarithmic factor of that incurred by the optimal solution. We shall say that a job j is waiting at time t in A if it is in the central pool at time t in A and has been released before time t.

Let V^A_k(t) denote the volume of jobs of class at most k which are waiting at time t in A. Define V^O_k(t) as the remaining volume of jobs of class at most k which are active at time t in O. Note the slight difference in the definitions for A and O -- in the case of A, we are counting only those jobs which are in the central pool, while in O we are counting all active jobs. Our goal is to show that, for all values of k, Σ_t V^A_k(t) is bounded from above by Σ_t V^O_k(t) plus small additive factors, i.e., O(α^k (P^O + P^A)). Once we have this, the desired result will follow from standard calculations.

Before we go to the details of the analysis, let us first show how to prove such a fact when all machines are of the same speed, say 1. The argument for this case follows directly from [5], but we describe it to develop some intuition for the more general case. Fix a time t and a class k of jobs. Suppose all machines are processing jobs of class at most k at time t in our schedule. Let t' be the first time before t at which there is at least one machine in A which is not processing jobs of class k or less (if there is no such time, set t' as 0). It follows that at time t' there are no jobs of class k or less which are mature for these machines (since all machines are identical, a job becomes mature for all the machines at the same time). So these jobs must have been released after t' - α^k. During (t' - α^k, t'), the optimal schedule can process at most m α^k volume of jobs of class at most k. So it follows that V^A_k(t') - V^O_k(t') is at most m α^k. Since our schedule processes jobs of class at most k during (t', t), we get V^A_k(t) - V^O_k(t) ≤ x^A(t) α^k, where x^A(t) denotes the number of busy machines at time t in our schedule (x^A(t) is the same as m because all machines are busy at time t). The other case, when a machine may be processing jobs of class more than k or remain idle at time t, can be shown similarly to yield the same expression. Adding this for all values of t, we get that Σ_t V^A_k(t) - Σ_t V^O_k(t) is at most α^k P^A, which is what we wanted to show.
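In display form, the identical-machines argument above is just the following two lines (a LaTeX sketch in the notation of this section):

    \[
      V^A_k(t) - V^O_k(t) \;\le\; x^A(t)\,\alpha^k \quad\text{for every time } t,
    \]
    \[
      \sum_t V^A_k(t) \;-\; \sum_t V^O_k(t) \;\le\; \alpha^k \sum_t x^A(t) \;=\; \alpha^k\, P^A ,
    \]

since Σ_t x^A(t), the total machine busy time, equals the total processing time P^A.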
This argument does not extend to the case when machines can have different speeds. There might be a very slow machine which is not processing jobs of class k or less (it may be idle), but then the jobs which are waiting could have been released much earlier, and so we cannot argue that the volumes of remaining jobs of class at most k at time t in the two schedules are close. This complicates our proof, and instead of defining one time t' as above, we need to define a sequence of times.

Before we describe this process, we need more notation, summarized in Table 1 for ease of reference. Let j_k(t) ∈ J(k, t) be the job with the earliest release date. Let r_k(t) denote the release date of job j_k(t), and c_k(t) denote the class of j_k(t). Let l_k(t) denote the largest l such that j_k(t) has matured for machines of class l at time t. In other words, l_k(t) is the highest value of l such that t_M(j_k(t), l) ≤ t. Observe that all machines of class l_k(t) or less must be busy processing jobs of class at most c_k(t) at time t, otherwise our algorithm should have dispatched j_k(t) by time t. Our proof will use the following lemma.

Lemma 4.11. For any class k of jobs and time t,
V^A_k(t) - V^O_k(t) ≤ 2 β α^{c_k(t)} m_{≤ l_k(t)-1} + V^A_{c_k(t)-1}(r_k(t)) - V^O_{c_k(t)-1}(r_k(t)) + Σ_{l ≥ l_k(t)} P^O_l(r_k(t), t) / β^{l-1}.

Proof. Let V_A denote the volume of jobs of class at most k which are processed by A on machines of class l_k(t) - 1 or less during (r_k(t), t). Define U_A as the volume of jobs of class at most k which are processed by A on machines of class l_k(t) or more during (r_k(t), t). Define V_O and U_O similarly. Clearly, V^A_k(t) - V^O_k(t) = V^A_k(r_k(t)) - V^O_k(r_k(t)) + (V_O - V_A) + (U_O - U_A).

Let us look at V_O - V_A first. Any machine of class l' ≤ l_k(t) - 1 is busy in A during (t_M(j_k(t), l'), t) processing jobs of class at most c_k(t). The amount of volume O can process on such machines during (r_k(t), t_M(j_k(t), l')) is at most m_{l'} α^{c_k(t)} β^{l'} / β^{l'-1}, which is at most m_{l'} β α^{c_k(t)}. So we get V_O - V_A ≤ m_{≤ l_k(t)-1} β α^{c_k(t)}.

Let us now consider V^A_k(r_k(t)) - V^O_k(r_k(t)). Consider the jobs of class c_k(t) or more which are waiting at time r_k(t) in A (so they were released before r_k(t)). By our definition of j_k(t), all such jobs must be processed by time t in A. If l' ≤ l_k(t) - 1, then such jobs can be done on machines of class l' only during (t_M(j_k(t), l'), t). So again we can show that the total volume of such jobs is at most m_{≤ l_k(t)-1} β α^{c_k(t)} + U_A. Thus we get V^A_k(r_k(t)) - V^O_k(r_k(t)) ≤ m_{≤ l_k(t)-1} β α^{c_k(t)} + U_A + V^A_{c_k(t)-1}(r_k(t)) - V^O_{c_k(t)-1}(r_k(t)), because V^O_{c_k(t)-1}(r_k(t)) ≤ V^O_k(r_k(t)).

Finally, note that U_O is at most Σ_{l ≥ l_k(t)} P^O_l(r_k(t), t) / β^{l-1}. Combining everything, we get the result.
The rest of the proof is really about unraveling the expression in the lemma above. To illustrate the ideas involved, let us try to prove the special case for jobs of class 1 only. The lemma above implies that V^A_1(t) - V^O_1(t) ≤ 2 β α m_{≤ l_1(t)-1} + Σ_{l ≥ l_1(t)} P^O_l(r_1(t), t) / β^{l-1}. We are really interested in Σ_t (V^A_1(t) - V^O_1(t)). Now, the sum Σ_t m_{≤ l_1(t)-1} is not a problem, because we know that at time t all machines of class l_1(t) or less must be busy in A. So x^A(t) ≥ m_{≤ l_1(t)-1}, and hence Σ_t m_{≤ l_1(t)-1} is at most Σ_t x^A(t), which is the total processing time of A. It is a little tricky to bound the second term. We shall write Σ_{l ≥ l_1(t)} P^O_l(r_1(t), t) / β^{l-1} as Σ_{l ≥ l_1(t)} Σ_{t' = r_1(t)}^{t} x^O_l(t') / β^{l-1}. We can think of this as saying that at each time t', r_1(t) ≤ t' ≤ t, we charge 1/β^{l-1} amount to each machine of class l which is busy at time t' in O. Note that here l is at least l_1(t).

Now we consider Σ_t Σ_{l ≥ l_1(t)} Σ_{t' = r_1(t)}^{t} x^O_l(t') / β^{l-1}. For a fixed time t' and a machine i of class l which is busy at time t' in O, let us see for how many times t we charge to i. We charge to i at time t if t' lies in the interval (r_1(t), t) and l ≥ l_1(t). Suppose this happens. We claim that t - t' has to be at most α β^{l+1}. Indeed, otherwise t - r_1(t) ≥ α β^{l+1}, and so j_1(t) has matured for machines of class l + 1 as well; but then l < l_1(t). So the total amount of charge machine i gets at time t' is at most α β^{l+1} · 1/β^{l-1} = O(1). Thus, the total sum turns out to be at most a constant times the total processing time of O.

Let us now try to prove this for the general case. We build some notation first. Fix a time t and a class k. We shall define a sequence of times t_0, t_1, ..., and a sequence of jobs j_0(t), j_1(t), ... associated with this sequence of times. c_i(t) shall denote the class of the job j_i(t). Let us see how we define this sequence. First of all, t_0 = t, and j_0(t) is the job j_k(t) (as defined above). Recall the definition of j_k(t): it is the job with the earliest release date among all jobs of class at most k which are waiting at time t in A. Note that c_0(t), the class of this job, can be less than k. Now suppose we have defined t_0, ..., t_i and j_0(t), ..., j_i(t), i ≥ 0. t_{i+1} is the release date of j_i(t). j_{i+1}(t) is defined as the job j_{c_i(t)-1}(t_{i+1}), i.e., the job with the earliest release date among all jobs of class less than c_i(t) waiting at time t_{i+1} in A. We shall also define a sequence of classes of machines l_0(t), l_1(t), ... in the following manner: l_i(t) is the highest class l of machines such that job j_i(t) has matured for machines of class l at time t_i. Figure 2 illustrates these definitions; the vertical line at time t_i denotes l_i(t), and the height of this line is proportional to l_i(t).

We note the following simple fact.

Claim 4.12. α^{c_i(t)} β^{l_i(t)} ≤ t_i - t_{i+1} ≤ α^{c_i(t)} β^{l_i(t)+1}.

Proof. Indeed, t_{i+1} is the release date of j_i(t), and j_i(t) matures for machines of class l_i(t) at time t_i, but not for machines of class l_i(t) + 1 at time t_i.

The statement in Lemma 4.11 can be unrolled iteratively to give the following inequality:

V^A_k(t) - V^O_k(t) ≤ 2 β (α^{c_0(t)} m_{< l_0(t)} + α^{c_1(t)} m_{< l_1(t)} + ...) + (Σ_{l ≥ l_0(t)} P^O_l(t_1, t_0) / β^{l-1} + Σ_{l ≥ l_1(t)} P^O_l(t_2, t_1) / β^{l-1} + ...)   (2)
Let us try to see what the inequality means. First look at the term α^{c_i(t)} m_{< l_i(t)}. Consider a machine of class l < l_i(t). It is busy in A during (t_M(j_i(t), l), t_i). Now t_i - t_M(j_i(t), l) = (t_i - t_{i+1}) - (t_M(j_i(t), l) - t_{i+1}) ≥ α^{c_i(t)} β^{l_i(t)} - α^{c_i(t)} β^l ≥ α^{c_i(t)} β^l, as l < l_i(t). Hence machine l is also busy in A during (t_i - α^{c_i(t)} β^l, t_i). So the term α^{c_i(t)} m_{< l_i(t)} is essentially saying that we charge 1/β^l amount to each machine of class l < l_i(t) during each time in (t_i - α^{c_i(t)} β^l, t_i). These intervals are shown in Figure 2 using light shade. So a machine i of class l which is busy in A during these lightly shaded regions is charged 1/β^l units.

Table 1: Table of definitions.
x^A(t), x^O(t) -- number of machines busy at time t in A, O.
x^A_l(t), x^O_l(t) -- number of machines of class l busy at time t in A, O.
P^A_l(t_1, t_2), P^O_l(t_1, t_2) -- total processing time incurred by machines of class l during (t_1, t_2) in A, O.
m_l, m_{≤l}, m_{<l} -- number of machines of class l, at most l, less than l.
m_{(l_1, l_2)} -- number of machines of class between (and including) l_1 and l_2.
J(k, t) -- set of jobs of class at most k which are waiting at time t in A.

Figure 2: Illustrating the definitions of t_0, t_1, ... and c_0(t), c_1(t), ....

Let us now look at the term Σ_{l ≥ l_i(t)} P^O_l(t_{i+1}, t_i) / β^{l-1}. This means the following: consider a machine of class l ≥ l_i(t) which is busy in O at some time t' during (t_{i+1}, t_i); then we charge it 1/β^{l-1} units. Figure 2 illustrates this fact: we charge 1/β^{l-1} to all machines of class l ≥ l_i(t) which are busy in O during the darkly shaded regions.

Let us see how we can simplify the picture. We say that the index i is suffix maximum if l_i(t) > l_{i-1}(t), ..., l_0(t) (i = 0 is always suffix maximum). In Figure 2, t_0, t_2 and t_5 are suffix maximum. Let the indices which are suffix maximum be i_0 = 0 < i_1 < i_2 < .... The following lemma says that we can consider only suffix maximum indices in (2). We defer its proof to the appendix.

Lemma 4.13.
V^A_k(t) - V^O_k(t) ≤ 4 β^2 Σ_{i_u} α^{c_{i_u}(t)} m_{(l_{i_u-1}(t), l_{i_u}(t)-1)} + Σ_{i_u} Σ_{l ≥ l_{i_u}(t)} P^O_l(t_{i_{u+1}}, t_{i_u}) / β^{l-1},
where i_u varies over the suffix maximum indices (define l_{i_{-1}}(t) as 1). Recall that m_{(l_1, l_2)} denotes the number of machines of class between l_1 and l_2.
Figure 3 shows what Lemma 4.13 is saying. In the lightly shaded region, if there is a machine of class l which is busy at some time t' in A, we charge it 1/β^l units at time t'. In the darkly shaded region, if there is a machine i of class l which is busy at time t' in O, we charge it 1/β^{l-1} units.

Now we try to bound the charges on the darkly shaded region for all times t. Let us fix a machine h of class l. Suppose h is busy in O at time t'. We are charging 1/β^{l-1} amount to h at time t' if the following condition holds: for all i such that t_i ≥ t', l_i(t) is at most l. Now we ask: if we fix t', for how many values of t do we charge h at time t'? The following claim shows that this cannot be too large.

Claim 4.14. Given a machine h of class l, we can charge it for the darkly shaded region at time t' for at most 2 α^k β^{l+1} values of t.

Proof. Suppose we charge h at time t' for some value of t. Fix this t. Clearly t ≥ t'. Let i be the largest index such that t' ≤ t_i. So t - t' ≤ t - t_{i+1}. Now consider any i' ≤ i. Claim 4.12 implies that t_{i'} - t_{i'+1} ≤ α^{c_{i'}(t)} β^{l_{i'}(t)+1} ≤ α^{c_{i'}(t)} β^{l+1}. Since c_{i'}(t) decreases as i' increases, Σ_{i'=0}^{i} (t_{i'} - t_{i'+1}) ≤ 2 α^k β^{l+1}. This implies the claim.

So the total amount of charge to machine h at time t' is at most 2 β^2 α^k. Thus we get the following fact:

Σ_t ( Σ_{i_u} Σ_{l ≥ l_{i_u}(t)} P^O_l(t_{i_{u+1}}, t_{i_u}) / β^{l-1} ) ≤ 2 β^2 α^k P^O.   (3)

Now we look at the charges on the lightly shaded region. Let h be a machine of class l which is busy in A at time t'. As argued earlier, we charge 1/β^l units to h at time t' if the following condition holds: there exists a suffix maximum index i_u such that t' lies in the interval (t_{i_u} - α^{c_{i_u}(t)} β^l, t_{i_u}). Further, for all suffix maximum indices i < i_u, it must be the case that l_i(t) < l. Now we want to know for how many values of t we charge h at time t'.

Figure 3: Illustrating Lemma 4.13.

Claim 4.15. Given a machine h of class l, we can charge it for the lightly shaded region at time t' for at most 3 α^k β^{l+1} values of t.

Proof. Fix a time t such that while accounting for V^A_k(t) we charge h at time t'. So there is a suffix maximum index i_u such that t_{i_u} - t' ≤ α^{c_{i_u}(t)} β^l. Further, if i is an index less than i_u, then l_i(t) must be less than l. We can argue as in the proof of Claim 4.14 that t - t_{i_u} can be at most 2 α^k β^{l+1}. So t - t' ≤ 3 α^k β^{l+1}.

So we get

Σ_t ( Σ_{i_u} α^{c_{i_u}(t)} m_{(l_{i_u-1}(t), l_{i_u}(t)-1)} ) ≤ 3 β α^k P^A.   (4)
Putting everything together, we see that Lemma 4.13 and equations (3) and (4) imply that there is a constant c such that

Σ_t V^A_k(t) ≤ Σ_t V^O_k(t) + c α^k (P^O + P^A).   (5)

The final result now follows from standard calculations.

Theorem 4.16. F^A is O(log S log^2 P · F^O).

Proof. We have already bounded the processing time of A. Once a job gets dispatched to a machine i, its waiting time can be charged to the processing done by i. Since at any time t there are at most log P active jobs dispatched to a machine, the total waiting time of jobs after their dispatch time is at most O(log P · P^A). So we just need to bound the time for which jobs are waiting in the central pool.

Let n^A_k(t) be the number of jobs of class k waiting in the central pool at time t in our algorithm. Let n^O_k(t) be the number of jobs of class k which are active at time t in O (note the difference in the definitions of the two quantities). Since jobs waiting in the central pool in A have not been processed at all, it is easy to see that n^A_k(t) ≤ V^A_k(t) / α^{k-1}. Further, V^O_k(t) ≤ α^k (n^O_k(t) + ... + n^O_1(t)). Combining these observations with equation (5), we get, for all values of k,

Σ_t n^A_k(t) ≤ α Σ_t ( n^O_k(t) + n^O_{k-1}(t) + ... + n^O_1(t) ) + c α (P^O + P^A).

We know that the total flow time of a schedule is equal to the sum over all times t of the number of active jobs at time t in the schedule. So adding the equation above for all values of k and using Corollary 4.10 implies the theorem.
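For completeness, the last step can be written out as the following chain (a LaTeX sketch that only combines the bounds already stated):

    \[
      F^A \;\le\; O(\log P)\,P^A \;+\; \sum_{k}\sum_{t} n^A_k(t)
      \;\le\; O(\log P)\,P^A \;+\; O(\log P)\Big(\alpha \sum_{t}\sum_{k} n^O_k(t) + c\,\alpha\,(P^O + P^A)\Big)
      \;=\; O(\log S \log^2 P)\,F^O ,
    \]

using the facts that Σ_t Σ_k n^O_k(t) = F^O, that P^O ≤ F^O, and that P^A = O(log S log P · F^O) by Corollary 4.10.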
ACKNOWLEDGEMENTS
We would like to express our thanks to Gagan Goel, Vinayaka
Pandit, Yogish Sabharwal and Raghavendra Udupa for useful
discussions.
REFERENCES
[1] N. Avrahami and Y. Azar. Minimizing total flow time and total completion time with immediate dispatching. In Proc. 15th Symp. on Parallel Algorithms and Architectures (SPAA), pages 11-18. ACM, 2003.
[2] Baruch Awerbuch, Yossi Azar, Stefano Leonardi, and Oded Regev. Minimizing the flow time without migration. In ACM Symposium on Theory of Computing, pages 198-205, 1999.
[3] N. Bansal and K. Pruhs. Server scheduling in the L_p norm: A rising tide lifts all boats. In ACM Symposium on Theory of Computing, pages 242-250, 2003.
[4] Luca Becchetti, Stefano Leonardi, Alberto Marchetti-Spaccamela, and Kirk R. Pruhs. Online weighted flow time and deadline scheduling. Lecture Notes in Computer Science, 2129:36-47, 2001.
[5] C. Chekuri, S. Khanna, and A. Zhu. Algorithms for weighted flow time. In ACM Symposium on Theory of Computing, pages 84-93. ACM, 2001.
[6] Chandra Chekuri, Ashish Goel, Sanjeev Khanna, and Amit Kumar. Multi-processor scheduling to minimize flow time with epsilon resource augmentation. In ACM Symposium on Theory of Computing, pages 363-372, 2004.
[7] R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: a survey. Ann. Discrete Math., 5:287-326, 1979.
[8] Bala Kalyanasundaram and Kirk Pruhs. Speed is as powerful as clairvoyance. In IEEE Symposium on Foundations of Computer Science, pages 214-221, 1995.
[9] Hans Kellerer, Thomas Tautenhahn, and Gerhard J. Woeginger. Approximability and nonapproximability results for minimizing total flow time on a single machine. In ACM Symposium on Theory of Computing, pages 418-426, 1996.
[10] Stefano Leonardi and Danny Raz. Approximating total flow time on parallel machines. In ACM Symposium on Theory of Computing, pages 110-119, 1997.
[11] C. A. Phillips, C. Stein, E. Torng, and J. Wein. Optimal time-critical scheduling via resource augmentation. In ACM Symposium on Theory of Computing, pages 140-149, 1997.
Appendix
Proof of Theorem 4.6. Let M denote the set of machines of class less than l. First observe that the processing time incurred by A on J_I is at most twice |M| T (the factor of two comes because there may be some jobs which are dispatched during I but finish later -- there can be at most one such job for a given class and a given machine). So we will be done if we can show that F^O_{J_I} is Ω(|M| T).

Let V be the volume of jobs in J_I which are done by O on machines of class l or more. If V ≥ |M| T / β^{l+1}, then we are done, because then the processing time incurred by O on J_I is at least V β^{l-1}. So we will assume in the rest of the proof that V ≤ |M| T / β^{l+1}.

Let i be a machine of class less than l. We shall say that i is good if i processes jobs from J_I for at least T/4 units of time during I in the optimal solution O. Otherwise we say that i is bad. Let G denote the set of good machines. If |G| ≥ |M|/γ (for a suitable constant γ), then we are done again, because P^O_{J_I} is at least |G| T/4. Let B denote the set of bad machines.

So we can assume that the number of good machines is at most a 1/γ fraction of the number of machines of class less than l. Now consider a time t in the interval (t_b + T/2, t_e).

Claim 6.1. At time t, at least α^k |M| volume of jobs from J_I is waiting in O.

Proof. Let V_1 denote the volume of jobs from J_I which is done by A during (t_b, t). Let V_2 denote the volume of jobs from J_I which is done by O on machines of class less than l during (t_b, t). Recall that for a machine i, s_i denotes the slowness of i.

Since a machine i of class l' < l does not process jobs from J_I for at most 6 α^k β^{l'} amount of time during I, we see that V_1 ≥ Σ_{i ∈ M} (t - t_b - 6 α^k β^{c_i}) / s_i, where c_i denotes the class of i. Let us look at V_2 now. In O, all bad machines do not process jobs from J_I for at least 3T/4 units of time during I. So they do not process jobs from J_I for at least T/4 units of time during (t_b, t_b + T/2). So V_2 ≤ Σ_{i ∈ M} (t - t_b)/s_i - Σ_{i ∈ B} T/(4 s_i).

This shows that V_1 - V_2 ≥ Σ_{i ∈ B} T/(4 s_i) - Σ_{i ∈ M} 6 α^k β^{c_i} / s_i. For a bad machine i, T/4 - 6 α^k β^{c_i} ≥ T/8, since c_i ≤ l - 1 (assuming β is large enough). So we can see that this difference is at least Σ_{i ∈ B} T/(8 s_i) - Σ_{i ∉ B} 6 α^k. Since T ≥ α^k β^l, we see that this difference is at least (T / β^{l-1}) (|B|/8 - 6 |G|), which is at least T |M| / (10 β^{l-1}), because we have assumed that |B| is larger than |G| by a sufficiently high constant factor. Recall that V is the volume of jobs in J_I which is done by O on machines of class l or more. Clearly, the volume of jobs from J_I which is waiting at time t in O is at least V_1 - V_2 - V. But V is at most T |M| / β^{l+1}. Hence the volume waiting at time t is at least T |M| / β^l. This proves the claim.

Since each job in J_I is of size at most α^k, we see that at least Ω(|M|) jobs are waiting at time t. Summing over all values of t in the range (t_b + T/2, t_e) implies the theorem.
Proof of Lemma 4.13. Consider an i with i_u ≤ i < i_{u+1}. Then l_i(t) ≤ l_{i_u}(t), otherwise there would be another suffix maximum index between i_u and i_{u+1}. So Σ_{i=i_u}^{i_{u+1}-1} α^{c_i(t)} m_{< l_i(t)} ≤ m_{< l_{i_u}(t)} Σ_{i=i_u}^{i_{u+1}-1} α^{c_i(t)} ≤ 2 m_{< l_{i_u}(t)} α^{c_{i_u}(t)}. So we get Σ_i α^{c_i(t)} m_{< l_i(t)} ≤ 2 Σ_{i_u} m_{< l_{i_u}(t)} α^{c_{i_u}(t)}.

Now we consider the sum Σ_i Σ_{l ≥ l_i(t)} P^O_l(t_{i+1}, t_i) / β^{l-1}. Fix an i with i_u ≤ i < i_{u+1}. Using Claim 4.12, we see that P^O_l(t_{i+1}, t_i) ≤ m_l α^{c_i(t)} β^{l_i(t)+1}. So we get

Σ_{l ≥ l_i(t)} P^O_l(t_{i+1}, t_i) / β^{l-1} ≤ Σ_{l ≥ l_{i_u}(t)} P^O_l(t_{i+1}, t_i) / β^{l-1} + Σ_{l = l_i(t)}^{l_{i_u}(t)-1} m_l α^{c_i(t)} β^{l_i(t)+1} / β^{l-1}.

Now the second term on the right hand side above is at most β^2 m_{< l_{i_u}(t)} α^{c_i(t)}. So we get

Σ_{i=i_u}^{i_{u+1}-1} Σ_{l ≥ l_i(t)} P^O_l(t_{i+1}, t_i) / β^{l-1} ≤ Σ_{l ≥ l_{i_u}(t)} P^O_l(t_{i_{u+1}}, t_{i_u}) / β^{l-1} + 2 β^2 α^{c_{i_u}(t)} m_{< l_{i_u}(t)},

because α^{c_i(t)} scales down geometrically as i increases. Finally, note that Σ_{i_u} α^{c_{i_u}(t)} m_{< l_{i_u}(t)} is at most twice Σ_{i_u} α^{c_{i_u}(t)} m_{(l_{i_u-1}(t), l_{i_u}(t)-1)}, because the α^{c_{i_u}(t)} scale down geometrically. This proves the lemma (using (2)).
| non-migratory algorithm;flow-time;average flow time;approximation algorithms;processing time;competitive ratio;related machines;poly-logarithmic factor;preemption;multiprocessor environment;scheduling;Scheduling
138 | Modeling and Predicting Personal Information Dissemination Behavior | In this paper, we propose a new way to automatically model and predict human behavior of receiving and disseminating information by analyzing the contact and content of personal communications. A personal profile, called CommunityNet, is established for each individual based on a novel algorithm incorporating contact, content, and time information simultaneously. It can be used for personal social capital management. Clusters of CommunityNets provide a view of informal networks for organization management. Our new algorithm is developed based on the combination of dynamic algorithms in the social network field and the semantic content classification methods in the natural language processing and machine learning literatures. We tested CommunityNets on the Enron Email corpus and report experimental results including filtering, prediction, and recommendation capabilities. We show that the personal behavior and intention are somewhat predictable based on these models. For instance, "to whom a person is going to send a specific email" can be predicted by one's personal social network and content analysis. Experimental results show the prediction accuracy of the proposed adaptive algorithm is 58% better than the social network-based predictions, and is 75% better than an aggregated model based on Latent Dirichlet Allocation with social network enhancement. Two online demo systems we developed that allow interactive exploration of CommunityNet are also discussed. | INTRODUCTION
Working in the information age, the most important is not
what you know, but who you know [1]. A social network, the
graph of relationships and interactions within a group of
individuals, plays a fundamental role as a medium for the spread
of information, ideas, and influence. At the organizational level,
personal social networks are activated for recruitment, partnering,
and information access. At the individual level, people exploit
their networks to advance careers and gather information.
Informal network within formal organizations is a major, but
hard to acquire, factor affecting companies' performance.
Krackhardt [2] showed that companies with strong informal
networks perform five or six times better than those with weak
networks, especially on the long-term performance. Friend and
advice networks drive enterprise operations in a way that, if the
real organization structure does not match the informal networks,
then a company tends to fail [3]. Since Max Weber first studied
modern bureaucracy structures in the 1920s, decades of related
social scientific researches have been mainly relying on
questionnaires and interviews to understand individuals' thoughts
and behaviors for sensing informal networks. However, data
collection is time consuming and seldom provides timely,
continuous, and dynamic information. This is usually the biggest
hurdle in social studies.
Personal Social Network (PSN) could provide an organizing
principle for advanced user interfaces that offer information
management and communication services in a single integrated
system. One of the most pronounced examples is the
networking
study by Nardi et al. [4], who coined the term
intensional
networks to describe personal social networks. They presented a
visual model of user's PSN to organize personal communications
in terms of a social network of contacts. From this perspective,
many tools were built such as LinkedIn [5], Orkut [6], and
Friendster [7]. However, all of them only provide tools for
visually managing personal social networks. Users need to
manually input, update, and manage these networks. This results
in serious drawbacks. For instance, people may not be able to invest the necessary effort in creating rich information, or they may not keep the information up-to-date as their interests, responsibilities, and networks change. They need a way to organize these relationships and remember who has the resources to help them. We coin the term for managing these goals personal social capital management.
In this paper, we develop a user-centric modeling
technology, which can dynamically describe and update a
person's personal social network with context-dependent and
temporal evolution information from personal communications.
We refer to the model as a CommunityNet. The senders and receivers, time stamps, and subject and content of emails contribute three key components: content semantics, temporal information, and social relationships. We propose a novel Content-Time-Relation
(CTR) algorithm to capture dynamic and context-dependent
information in an unsupervised way. Based on the CommunityNet
models, many questions can be addressed by inference, prediction
and filtering. For instance, 1) Who are semantically related to
each other? 2) Who will be involved in a special topic? Who are
the important (central) people in this topic? 3) How does the
information flow? and 4) If we want to publicize a message,
whom should we inform?
Figure 1 shows the procedure of our proposed scheme. First,
topic detection and clustering is conducted on training emails in
order to define topic-communities. Then, for each individual,
CommunityNet is built based on the detected topics, the sender
and receiver information, and the time stamps. Afterwards, these
personal CommunityNets can be applied for inferring
organizational informal networks and predicting personal
behaviors to help users manage their social capitals. We
incorporate the following innovative steps:
1) Incorporate content analysis into social network in an
unsupervised way
2) Build a CommunityNet for each user to capture the context-dependent, temporal evolutionary personal social network
based on email communication records
3) Analyze people's behaviors based on CommunityNet,
including predicting people's information sending and
receiving behaviors
4) Show the potential of using automatically acquired personal
social network for organization and personal social capital
management
Figure 1. An Overview of CommunityNet (input emails are processed by topic detection and content analysis to identify topics such as meeting schedule, agreement, California energy, game, and holiday celebration; the resulting CommunityNet models support applications including recommendation, prediction, and filtering)
We tested the CommunityNet model on the Enron email
corpus comprising the communication records of 154 Enron
employees dating from Jan. 1999 to Aug. 2002. The Enron email
dataset was originally made available to the public by the Federal Energy Regulatory Commission during the investigation [9]. It was later collected and prepared by Melinda Gervasio at SRI for the CALO (A Cognitive Assistant that Learns and Organizes) project. William Cohen from CMU has made the dataset available on the web for research purposes [9]. This version of the dataset contains around 517,432 emails within 150 folders. We cleaned the data and extracted 154 users from those 150 folders, with 166,653 unique messages from 1999 to 2002. In the experiments, we use the 16,873 intra-organizational emails which connect these 154 people.
The primary contributions of this paper are three-fold. First
we develop an algorithm incorporating content-time-relation
detection. Second, we generate an application model which
describes personal dynamic community network. Third, we show
how this model can be applied to organization and social capital
management. To the best of our knowledge, this is among the first
reported technologies on fusing research in the social network
analysis field and the content analysis field for information
management.
We propose the CTR algorithm and the
CommunityNet based on the Latent Dirichlet Allocation
algorithm.
In our experiments, we observed a clear benefit of discovering knowledge based on multi-modality information rather than using only a single type of data.
The rest of the paper is organized as follows. In Section 2,
we present an overview of related work. In Section 3, we present
our model. We discuss how to use CommunityNet to analyze
communities and individuals in Sections 4 and 5, respectively. In
Section 6, we show two demo systems for query, visualization
and contact recommendation. Finally, conclusions and future
work are addressed in Section 7.
RELATED WORK
To capture relationships between entities, social networks have been a subject of study for more than 50 years. An early sign of
the potential of social network was perhaps the classic paper by
Milgram [10] estimating that on average, every person in the
world is only six edges away from each other, if an edge between
i and j means "i knows j". Lately, introducing social network
analysis into information mining is becoming an important
research area. Schwartz and Wood [11] mined social relationships
from email logs by using a set of heuristic graph algorithms. The
Referral Web project [12] mined a social network from a wide
variety of publicly-available online information, and used it to
help individuals find experts who could answer their questions
based on geographical proximity. Flake et al. [13] used graph
algorithms to mine communities from the Web (defined as sets of
sites that have more links to each other than to non-members).
Tyler et al. [14] used a betweenness centrality algorithm for the
automatic identification of communities of practice from email
logs within an organization. The Google search engine [15] and
Kleinberg's HITS algorithm of finding hubs and authorities on the
Web [16] are also based on social network concepts. The success
of these approaches, and the discovery of widespread network
topologies with nontrivial properties, have led to a recent flurry of
research on applying link analysis for information mining.
A promising class of statistical models for expressing
structural properties of social networks is the class of Exponential
Random Graph Models (ERGMs) (or p* model) [17]. This
statistical model can represent structural properties that define
complicated dependence patterns that cannot be easily modeled
by deterministic models. Let Y denote a random graph on a set of
n nodes and let y denote a particular graph on those nodes. Then,
the probability that Y equals y is

P(Y = y) = \frac{\exp(\theta^T s(y))}{c(\theta)}                     (1)

where s(y) is a known vector of graph statistics (density, reciprocity, transitivity, etc.) on y, \theta is a vector of coefficients that model the influence of each statistic on the whole graph, T means "transpose", and c(\theta) is a normalization term ensuring that \sum_y P(Y = y) = 1. The parameters \theta are estimated based on the observed graph y_{obs} by maximum likelihood estimation.
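As a concrete illustration of Eq. (1) (not code from the paper), the following Python sketch evaluates an ERGM probability for a tiny directed graph, assuming the statistics vector s(y) consists of just an edge count and a reciprocated-dyad count and using invented coefficients \theta; the normalizer c(\theta) is computed by brute-force enumeration over all graphs on n nodes, which is feasible only for very small n.

# Minimal ERGM sketch (illustrative): P(Y = y) = exp(theta^T s(y)) / c(theta) for tiny directed graphs.
from itertools import combinations, product
import math

def stats(edges, n):
    """Graph statistics s(y): number of edges and number of reciprocated dyads."""
    edge_set = set(edges)
    mutual = sum(1 for i, j in combinations(range(n), 2)
                 if (i, j) in edge_set and (j, i) in edge_set)
    return [len(edge_set), mutual]

def ergm_probability(observed_edges, theta, n):
    """Compute c(theta) by enumerating every directed graph on n nodes (only feasible for tiny n)."""
    dyads = [(i, j) for i in range(n) for j in range(n) if i != j]
    def weight(edges):
        return math.exp(sum(t * s for t, s in zip(theta, stats(edges, n))))
    c_theta = sum(weight([d for d, keep in zip(dyads, bits) if keep])
                  for bits in product([0, 1], repeat=len(dyads)))
    return weight(observed_edges) / c_theta

# Example: a 3-node graph with one reciprocated dyad; the coefficients are invented.
print(ergm_probability([(0, 1), (1, 0), (1, 2)], theta=[-1.0, 2.0], n=3))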
All the research discussed above has focused on using static
properties in a network to represent the complex structure.
However, social networks evolve over time. Evolution property
has a great deal of influence; e.g., it affects the rate of information
diffusion, the ability to acquire and use information, and the
quality and accuracy of organizational decisions.
Dynamics of social networks have attracted many researchers' attention recently. Given a snapshot of a social
network, [19] tries to infer which new interactions among its
members are likely to occur in the near future. In [20], Kubica et
al.
are interested in tracking changes in large-scale data by
periodically creating an agglomerative clustering and examining
the evolution of clusters over time. Among the known dynamical
social networks in literature, Snijder's dynamic actor-oriented
social network [18] is one of the most successful algorithms.
Changes in the network are modeled as the stochastic result of
network effects (density, reciprocity, etc.). Evolution is modeled
by continuous-time Markov chains, whose parameters are
estimated by the Markov chain Monte Carlo procedures. In [21],
Handcock et al. proposed a curved ERGM model and applied it to
the new specifications of ERGMs. This latest model uses nonlinear
parameters to represent structural properties of networks.
The above mentioned dynamic analyses show some success
in analyzing longitudinal stream data. However, most of them are
only based on pure network properties, without knowing what
people are talking about and why they have close relationships.
2.2 Content Analysis
In statistical natural language processing, one common way of modeling the contributions of different topics to a document is to treat each topic as a probability distribution over words, thus viewing a document as a probabilistic mixture over these topics. Given T topics, the probability of the ith word in a given document is formalized as:
P(w_i) = \sum_{j=1}^{T} P(w_i | z_i = j) P(z_i = j)                     (2)

where z_i is a latent variable indicating the topic from which the ith word was drawn, and P(w_i | z_i = j) is the probability of the word w_i under the jth topic. P(z_i = j) gives the probability of choosing a word from topic j in the current document, which varies across different documents.
Hofmann [22] introduced the aspect model Probabilistic
Latent Semantic Analysis (PLSA), in which topics are modeled
as multinomial distributions over words, and documents are
assumed to be generated by the activation of multiple topics. Blei
et al.
[23] proposed Latent Dirichlet Allocation (LDA) to address
the problems of PLSA, whose parameterization was susceptible to overfitting and which did not provide a straightforward way to infer
testing documents. A distribution over topics is sampled from a
Dirichlet distribution for each document. Each word is sampled
from a multinomial distribution over words specific to the
sampled topic. Following the notations in [24], for LDA with D documents containing T topics expressed over W unique words, we can represent P(w|z) with a set of T multinomial distributions \phi over the W words, such that P(w | z = j) = \phi_j^{(w)}, and P(z) with a set of D multinomial distributions \theta over the T topics, such that for a word in document d, P(z = j) = \theta_j^{(d)}.
Recently, the Author-Topic (AT) model [25] extends LDA to
include authorship information, trying to recognize which part of
the document is contributed by which co-author. In a recent
unpublished work, McCallum et al. [26] further extend the AT
model to the Author-Recipient-Topic model by regarding the
sender-receiver pair as an additional author variable for topic
classification. Their goal is role discovery, which is similar to one
of our goals as discussed in Sec. 4.1.2 without taking the temporal
nature of emails into consideration.
Using LDA, \phi and \theta are parameters that need to be estimated
by using sophisticated approximation either with variational
Bayes or expectation propagation. To solve this problem, Griffiths
and Steyvers [24] extended LDA by considering the posterior
distribution over the assignments of words to topics and showed
how Gibbs sampling could be applied to build models.
Specifically,

P(z_i = j | z_{-i}, w) \propto \frac{n^{(w_i)}_{-i,j} + \beta}{n^{(\cdot)}_{-i,j} + W\beta} \cdot \frac{n^{(d_i)}_{-i,j} + \alpha}{n^{(d_i)}_{-i,\cdot} + T\alpha}                     (3)
where the subscript -i indicates a count that does not include the current assignment, n^{(w)}_j is the number of times word w has been assigned to topic j in the vector of assignments z, n^{(d)}_j is the number of times a word from document d has been assigned to topic j, n^{(\cdot)}_j is the sum of n^{(w)}_j over all words, and n^{(d)}_{\cdot} is the sum of n^{(d)}_j over all topics. Further, one can estimate \phi^{(w)}_j, the probability of using word w in topic j, and \theta^{(d)}_j, the probability of topic j in document d, as follows:

\hat{\phi}^{(w)}_j = \frac{n^{(w)}_j + \beta}{n^{(\cdot)}_j + W\beta}                     (4)

\hat{\theta}^{(d)}_j = \frac{n^{(d)}_j + \alpha}{n^{(d)}_{\cdot} + T\alpha}                     (5)
In [24], experiments show that topics can be recovered by
their algorithm and show meaningful aspects of the structure and
relationships between scientific papers.
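To make equations (3)-(5) concrete, the following is a minimal, self-contained Python sketch of a collapsed Gibbs sampler for LDA; the toy corpus, hyperparameters, and iteration count are invented for illustration and are not taken from the works cited above.

# Minimal collapsed Gibbs sampler for LDA (an illustrative sketch of Eqs. (3)-(5), not the authors' code).
import numpy as np

def lda_gibbs(docs, W, T, alpha=0.1, beta=0.01, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    D = len(docs)
    n_wt = np.zeros((W, T))                      # n_j^(w): word-topic counts
    n_dt = np.zeros((D, T))                      # n_j^(d): document-topic counts
    n_t = np.zeros(T)                            # n_j^(.): total words per topic
    z = [rng.integers(T, size=len(doc)) for doc in docs]     # random initial assignments
    for d, doc in enumerate(docs):               # accumulate the initial counts
        for i, w in enumerate(doc):
            n_wt[w, z[d][i]] += 1; n_dt[d, z[d][i]] += 1; n_t[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                j = z[d][i]                      # remove the current assignment (the "-i" counts)
                n_wt[w, j] -= 1; n_dt[d, j] -= 1; n_t[j] -= 1
                p = (n_wt[w] + beta) / (n_t + W * beta) * (n_dt[d] + alpha)   # Eq. (3), up to a constant
                j = rng.choice(T, p=p / p.sum())
                z[d][i] = j
                n_wt[w, j] += 1; n_dt[d, j] += 1; n_t[j] += 1
    phi = (n_wt + beta) / (n_t + W * beta)                                    # Eq. (4): P(w | z = j)
    theta = (n_dt + alpha) / (n_dt.sum(1, keepdims=True) + T * alpha)         # Eq. (5): P(z = j | d)
    return phi, theta

# Toy example: 4 documents over a 6-word vocabulary, 2 topics.
docs = [[0, 1, 2, 0, 1], [0, 0, 1, 2], [3, 4, 5, 3], [4, 5, 5, 3, 4]]
phi, theta = lda_gibbs(docs, W=6, T=2)
print(theta.round(2))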
Contextual, relational, and temporal information are three
key factors for current data mining and knowledge management
models. However, there are few papers addressing these three
components simultaneously. In our recent paper, we built user models to explicitly describe a person's expertise by a relational and evolutionary graph representation called ExpertiseNet [27].
In this paper, we continue exploring this thread, and build a
CommunityNet model which incorporates these three components
together for data mining and knowledge management.
COMMUNITYNET
In this section, we first define terminologies. Then, we
propose a Content-Time-Relation (CTR) algorithm to build the
personal CommunityNet. We also specifically address the
prediction of the user's behaviors as a classification problem and
solve it based on the CommunityNet models.
3.1 Terminology
Definition 1. Topic-Community: A topic community is a group
of people who participate in one specific topic.
Definition 2: Personal Topic-Community Network (PTCN): A
personal topic-community network is a group of people directly
connected to one person about a specific topic.
Definition 3. Evolutionary Personal Social Network: An
evolutionary personal social network illustrates how a personal
social network changes over time.
Definition 4. Evolutionary Personal Topic-Community
Network: An evolutionary network illustrates how a person's
personal topic-community network changes over time.
Definition 5. Personal Social Network Information Flow: A
personal social network information flow illustrates how the
information flows over a person's personal social network to
other people's personal social networks
Definition 6: Personal Topic-Community Information Flow: A
personal Topic-CommunityNet information flow illustrates how
the information about one topic flows over a person's personal
social network to other people's personal social networks.
3.2 Personal Social Network
We build people's personal social networks by collecting
their communication records. The nodes of a network represent
whom this person contacts. The weights of the links measure the probabilities of the emails he sends to other people. A basic form of the probability that a user u sends an email to a recipient r is

P(r | u) = (number of times u sends emails to r) / (total number of emails sent out by u)                     (6)
We build evolutionary personal social networks to explore the
dynamics and the evolution. The ERGM in Eq. (1) can be used to
replace Eq. (6) for probabilistic graph modeling. A big challenge
of automatically building evolutionary personal social networks is the evolutionary segmentation, which is to detect changes between cohesive sections of a personal social network. Here we apply
the same algorithm as we proposed in [27]. For each personal
social network in one time period t, we use the exponential
random graph model [17] to estimate an underlying distribution to
describe the social network.
An ERGM is estimated from the data
in each temporal sliding window. With these operations, we
obtain a series of parameters which indicates the graph
configurations.
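As an illustration of Eq. (6), a personal social network can be built directly from a log of (sender, recipient) records; the sketch below uses made-up email records and is not the authors' implementation.

# Minimal sketch of building personal social networks from email logs (Eq. (6)); the data are invented.
from collections import Counter, defaultdict

def build_psn(email_log):
    """email_log: iterable of (sender, recipient) pairs. Returns P(r | u) for each sender u."""
    counts = defaultdict(Counter)
    for sender, recipient in email_log:
        counts[sender][recipient] += 1
    psn = {}
    for u, recipients in counts.items():
        total = sum(recipients.values())
        psn[u] = {r: c / total for r, c in recipients.items()}
    return psn

log = [("john", "mike"), ("john", "mike"), ("john", "kate"), ("kate", "john")]
psn = build_psn(log)
print(psn["john"])                                # {'mike': 0.666..., 'kate': 0.333...}
print(max(psn["john"], key=psn["john"].get))      # john's most frequently contacted person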
3.3 Content-Time-Relation Algorithm
We begin with email content, sender and receiver
information, and time stamps, and use these sources of knowledge
to create a joint probabilistic model. An observation (u, r, d, w, t) corresponds to an event of a user u sending to receivers r an email d containing words w during a particular time period t.
Conceptually, users choose latent topics z, which in turn generate
receivers r, documents d, and their content words w during time
period t.
P(u, r | d, t) = \sum_z P(u, r | z, t) P(z | d, t)                     (7)

where (u, r) is a sender-receiver pair during time period t. (u, r) can be replaced by any variable that indicates the user's behavior, as long as it is also assumed to be dependent on the latent topics of emails.
In order to model the PTCN, one challenge is how to detect
latent topics dynamically and at the same time track the emails
related to the old topics. This is a problem similar to topic
detection and tracking [28]. We propose an incremental LDA (ILDA)
algorithm to solve it, in which the number of topics is
dynamically updated based on the Bayesian model selection
principle [24]. The procedures of the algorithm are illustrated as
follows:
Incremental Latent Dirichlet Allocation (ILDA) algorithm:
Input: Email streams with timestamps t
Output: \phi^{(w)}_{j,t} and \theta^{(d)}_{j,t} for the different time periods t
Steps:
1) Apply LDA on a data set with the currently observed emails in a time period t_0 to generate latent topics z_j and estimate P(w | z_j, t_0) = \phi^{(w)}_{j,t_0} and P(z_j | d, t_0) = \theta^{(d)}_{j,t_0} by equations (4) and (5). The number of topics is determined by the Bayesian model selection principle.
2) When new emails arrive during time period t_k, use the Bayesian model selection principle to determine the number of topics and apply

P(z_i = j | z_{-i}, w, t_k) \propto P(w_i | z_i = j, t_{k-1}) \cdot \frac{n^{(d_i)}_{-i,j} + \alpha}{n^{(d_i)}_{-i} + T\alpha}

to estimate P(z | d, t_k), P(w | z, t_k), and P(z | w, t_k).
3) Repeat step 2) until no data arrive.
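The resampling in step 2) can be sketched as follows: the word-topic distribution learned in the previous period (phi_old) is held fixed while only the document-topic counts of the new emails are updated. This is a rough illustration with invented data, not the authors' implementation.

# Illustrative sketch of ILDA step 2): resample topics for a new-period document while holding
# P(w | z, t_{k-1}) fixed as phi_old and updating only the document-topic counts.
import numpy as np

def resample_new_doc(doc, phi_old, alpha=0.1, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    W, T = phi_old.shape
    z = rng.integers(T, size=len(doc))
    n_dt = np.bincount(z, minlength=T).astype(float)
    for _ in range(iters):
        for i, w in enumerate(doc):
            n_dt[z[i]] -= 1                          # remove the current assignment
            p = phi_old[w] * (n_dt + alpha)          # P(w_i | z = j, t_{k-1}) * (n_{-i,j}^{(d)} + alpha)
            z[i] = rng.choice(T, p=p / p.sum())
            n_dt[z[i]] += 1
    return (n_dt + alpha) / (len(doc) + T * alpha)   # P(z | d, t_k), as in Eq. (5)

# phi_old: an invented word-topic distribution over a 4-word vocabulary and 2 topics.
phi_old = np.array([[0.6, 0.05], [0.3, 0.05], [0.05, 0.5], [0.05, 0.4]])
print(resample_new_doc([0, 1, 0, 2], phi_old).round(2))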
Based on this ILDA algorithm, we propose a Content-Time-Relation
(CTR)
algorithm. It consists of two phases, the training
phase and the testing phase. In the training phase, emails as well
as the senders, receivers and time stamps are available.
P(w | z, t_old) and P(u, r | z, t_old) are learnt from the observed data. In the testing phase, we apply ILDA to learn P(z | d, t_new). Based on P(u, r | z, t_old), which is learnt from the training phase, (u, r) can be inferred. Again, (u, r) represents a sender-receiver pair or any variable that indicates the user's behavior, as long as it is dependent on the latent topics of emails.
Content-Time-Relation (CTR) algorithm:
1) Training
phase
Input: Old emails with content, sender and receiver information,
and time stamps
t_old
Output: P(w | z, t_old), P(z | d, t_old), and P(u, r | z, t_old)
Steps:
a) Apply Gibbs sampling on the data according to equation (3).
b) Estimate P(w | z_j, t_old) = \phi^{(w)}_{j,t_old} and P(z_j | d, t_old) = \theta^{(d)}_{j,t_old} by equations (4) and (5).
c) Estimate

P(u, r | z, t_old) = \sum_d P(u, r | d, t_old) P(d | z, t_old) \propto \sum_d P(u, r | d, t_old) P(z | d, t_old)                     (8)
2) Testing phase
Input: New emails with content and time stamps t_new
Output: P(u, r | d, t_new), P(w | z, t_new), and P(z | d, t_new)
Steps:
a) Apply incremental LDA by Gibbs sampling based on

P(z_i = j | z_{-i}, w, t_new) \propto P(w_i | z_i = j, t_old) \cdot \frac{n^{(d_i)}_{-i,j} + \alpha}{n^{(d_i)}_{-i} + T\alpha}

to estimate P(w | z_j, t_new) and P(z | d, t_new) by equations (4) and (5).
b) If the topics are within the training set, estimate

\hat{P}(u, r | d, t_new) = \sum_z P(u, r | z, t_old) P(z | d, t_new);

otherwise, if the sender and receivers are within the training set, estimate \hat{P}(u, r | d, t_new) by the topic-independent social network P(u, r | t_old).
c) If there are new topics detected, update the model by incorporating the new topics.
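As a rough illustration of step c) of the training phase (Eq. (8)), the per-topic sender-receiver distribution can be accumulated by weighting each training email's (u, r) pairs with its topic mixture P(z | d); the sketch below uses invented emails and topic mixtures and normalizes over (u, r) pairs within each topic.

# Illustrative sketch of Eq. (8): estimate P(u, r | z) by weighting each email's sender-receiver
# pairs with its topic mixture theta_d = P(z | d). The emails and mixtures are invented.
from collections import defaultdict

def estimate_pair_given_topic(emails, T):
    """emails: list of (sender, receivers, theta_d), where theta_d is a length-T topic mixture."""
    acc = [defaultdict(float) for _ in range(T)]
    for sender, receivers, theta_d in emails:
        for r in receivers:
            for j in range(T):
                acc[j][(sender, r)] += theta_d[j]
    result = []
    for j in range(T):                            # normalize over (u, r) pairs within each topic
        total = sum(acc[j].values())
        result.append({pair: w / total for pair, w in acc[j].items()} if total > 0 else {})
    return result

emails = [("jeff", ["james", "mary"], [0.9, 0.1]),
          ("jeff", ["vince"], [0.2, 0.8]),
          ("john", ["mike"], [0.5, 0.5])]
p_ur_given_z = estimate_pair_given_topic(emails, T=2)
print(p_ur_given_z[0])                            # sender-receiver pairs most associated with topic 0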
Inference, filtering, and prediction can be conducted based
on this model. For the CTR algorithm, sender variable u or
receiver variable r is fixed. For instance, if we are interested
in
P(r | u, d, t), which is to answer the question of whom we should send the message d to during the time period t. The answer will be

\hat{r} = \arg\max_r P(r | u, d, t) = \arg\max_r \left[ \sum_{z_{t_old}} P(r | u, z_{t_old}, t_old) P(z_{t_old} | u, d, t_new) + \sum_{z_{t_new} \setminus z_{t_old}} P(r | u, t_old) P(z_{t_new} | u, d, t_new) \right]                     (9)

where z_{t_new} \setminus z_{t_old} represents the new topics emerging during the time period t. Another question is: if we receive an email, who will possibly be the sender?

\hat{u} = \arg\max_u P(u | r, d, t) = \arg\max_u \left[ \sum_{z_{t_old}} P(u | r, z_{t_old}, t_old) P(z_{t_old} | r, d, t_new) + \sum_{z_{t_new} \setminus z_{t_old}} P(u | r, t_old) P(z_{t_new} | r, d, t_new) \right]                     (10)

Eq. (9) and Eq. (10) integrate the PSN, content, and temporal analysis. Social network models such as the ERGM in Eq. (1) or the model in Sec. 3.2 can be applied to the P(u, r | d, t) terms.
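In practice, Eq. (9) amounts to ranking candidate receivers by a topic-weighted mixture of the learned per-topic contact distributions, falling back to the topic-independent PSN term for newly detected topics. The following sketch uses invented distributions and is only meant to illustrate the scoring.

# Illustrative sketch of the receiver ranking in Eq. (9); all distributions below are invented.
def rank_receivers(candidates, theta_new, p_r_given_uz, p_r_given_u, known_topics):
    """theta_new[j] ~ P(z = j | u, d, t_new); p_r_given_uz[j][r] ~ P(r | u, z = j, t_old);
    p_r_given_u[r] ~ P(r | u, t_old); topics outside known_topics use the PSN fallback term."""
    scores = {}
    for r in candidates:
        score = 0.0
        for j, w in enumerate(theta_new):
            if j in known_topics:
                score += p_r_given_uz[j].get(r, 0.0) * w      # old-topic term of Eq. (9)
            else:
                score += p_r_given_u.get(r, 0.0) * w          # new-topic fallback term
        scores[r] = score
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_receivers(
    candidates=["james", "mary", "vince"],
    theta_new=[0.7, 0.2, 0.1],                                # topic 2 is assumed newly detected
    p_r_given_uz={0: {"james": 0.6, "mary": 0.4}, 1: {"vince": 0.9, "mary": 0.1}},
    p_r_given_u={"james": 0.5, "mary": 0.3, "vince": 0.2},
    known_topics={0, 1})
print(ranking)                                                # e.g. ['james', 'mary', 'vince']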
Figure 2 illustrates the CTR model and compares it to the
LDA, AT and ART models. In CTR, the observed variables not
only include the words w in an email but also the sender u and the
timestamp on each email d.
3.4 Predictive Algorithms
For the sake of easier evaluation, we focus on prediction
schemes in detail. Specifically, we address the problem of
predicting receivers and senders of emails as a classification
problem, in which we train classifiers to predict the senders or
receivers and other behavior patterns given the observed people's
communication records. The trained classifier represents a
function in the form of:
f: Comm(t-i, t) \rightarrow Y                     (11)

where Comm(t-i, t) is the observed communication record during the interval from time t-i to t, Y is a set of receivers or senders or other user behavior patterns to be discriminated, and the value of f(Comm(t-i, t)) is the classifier prediction
regarding which user behavior patterns gave rise to the observed
communication records. The classifier is trained by providing the
history of the communication records with known user behaviors.
3.4.1 Using Personal Social Network Model
We aggregate all the communication records in the history of
a given user, and build his/her personal social network. We
choose those people with the highest communication frequency
with this person as the prediction result.
3.4.2 Using LDA combined with PSN Model
We use the LDA model and combine it with PSN to do the prediction, which is referred to as LDA-PSN in this paper. Latent topics are detected by applying the original LDA on the training set, and LDA is used for inference on the testing data without incorporating new topics as time passes. The possible senders and receivers when new emails arrive, P(u, r | d, t_new), are estimated as

\hat{P}(u, r | d, t_new) = \sum_z P(u, r | z, t_old) P(z | d, t_new).

People are ranked by this probability as the prediction results.
3.4.3 Using CTR Model
People tend to send emails to different groups of people under different topics during different time periods. This is the assumption we make for our predictive model based on CTR.
Figure 2. The graphical model for the CTR model compared to the LDA, AT, and ART models, where u: sender, t: time, r: receivers, w: words, z: latent topics, S: social network, D: number of emails, N: number of words in one email, T: number of topics, Tm: size of the time sliding window, A: number of authors; \phi and \theta are the parameters we want to estimate, with their corresponding hyperparameters.
P(u, r | d, t_new) is estimated by applying the CTR model discussed in Section 3.3. The prediction results are the people with the highest scores calculated by equations (9) and (10).
3.4.4 Using an Adaptive CTR Model
Both the personal social network and the CTR model ignore
a key piece of information from communication records -- the
dynamical nature of emails. Both the personal social network and the topic-community change and evolve dynamically. Relying only on training data collected in the past will not yield optimal performance for the prediction task. Adaptive prediction, by updating the model with the newest user behavior information, is necessary. We apply several strategies for adaptive prediction. The first strategy is to aggregatively update the model by adding new user behavior information, including the senders and receivers, into the model. The model then becomes:
\hat{P}(u, r | d, t_i) = \sum_{k=1}^{K} P(u, r | z_k, t_old) P(z_k | d, t_i) + \sum_{z \in z_{t_i} \setminus z_{t_old}} P(u, r | t_old) P(z | d, t_i)                     (12)

where K is the number of old topics. Here, we always use the data from t_old, including t_0 to t_{i-1}, to predict the user behavior during t_i.
In the second strategy, we assume the correlation between
current data and the previous data decays over time. The more
recent data are more important. Thus, a sliding window of size n
is used to choose the data for building the prediction model, in
which the prediction is only dependent on the recent data, with the
influence of old data ignored. Here in equation (12),
t_old consists of t_{i-n} to t_{i-1}.
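The two strategies can be sketched as follows: retraining on all past periods (aggregative updating) versus retraining on only the most recent n periods (sliding window). The helper below only illustrates the data selection; fit_ctr and predict are placeholder functions assumed for the example, not the real model.

# Illustrative sketch of the adaptive strategies: window=None reproduces aggregative updating,
# window=n keeps only the most recent n periods. fit_ctr/predict are stand-ins for the CTR model.
from collections import Counter

def adaptive_predictions(monthly_batches, fit_ctr, predict, window=None):
    """monthly_batches: per-period training data, oldest first; a prediction is made for each next period."""
    results = []
    for i in range(1, len(monthly_batches)):
        history = monthly_batches[:i] if window is None else monthly_batches[max(0, i - window):i]
        model = fit_ctr([email for batch in history for email in batch])
        results.append(predict(model, monthly_batches[i]))
    return results

# Tiny usage example with stand-in functions ("model" = receiver frequencies).
batches = [[("u1", "r1")], [("u1", "r2")], [("u1", "r1"), ("u1", "r1")], [("u1", "r2")]]
fit = lambda data: Counter(r for _, r in data)
pred = lambda model, batch: model.most_common(1)[0][0] if model else None
print(adaptive_predictions(batches, fit, pred, window=2))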
3.5 CommunityNet Model
We then build a CommunityNet model based on the CTR
algorithm. The CommunityNet model, which refers to the personal
Topic-Community Network, draws upon the strengths of the topic
model and the social network as well as the dynamic model, using
a topic-based representation to model the content of the
document, the interests of the users, the correlation of the users
and the receivers and all these relationship changing over time.
For prediction, CommunityNet incorporates the adaptive CTR
model as described in Section 3.4.4.
COMMUNITY ANALYSIS
The first part of our analysis focuses on identifying clusters
of topics, and the senders and receivers who participated in those
topics. First, we analyze the topics detected from the Enron
Corpus. Then, we study the topic-community patterns.
4.1 Topic Analysis
In the experiment, we applied Bayesian model selection [24]
to choose the number of topics. In the Enron intra-organization
emails, there are 26,178 word-terms involved after we apply stop-word removal and stemming. We computed P(w | T) for T values of 30, 50, 70, 100, 110, and 150 topics and chose T = 100, which gave the maximum value of log P(w | T), for the experiment.
4.1.1 Topic Distribution
After topic clustering based on words, for each document, we
have P(z|d), which indicates how likely each document belongs to
each topic. By summing up this probability for all the documents,
we get the topic distribution of how likely each topic occurs in
this corpus. We define this summed likelihood as "Popularity" of
the topic in the dataset. From this topic distribution, we can see
that some topics are hot - people frequently communicate with
each other about them, while some others are cold, with only few
emails related to them. Table 1 illustrates the top 5 topics in the Enron corpus. We can see that most of them concern regular issues in the company, like meetings, deals, and documents. Table 2 illustrates the bottom 5 topics in the Enron corpus. Most of them are specific and sensitive topics, like "Stock" or "Market". People may feel less comfortable talking about them broadly.
Table 1. Hot Topics (top keywords per topic)
  meeting:   meeting, plan, conference, balance, presentation, discussion
  deal:      deal, desk, book, bill, group, explore
  Petroleum: Petroleum, research, dear, photo, Enron, station
  Texas:     Houston, Texas, Enron, north, America, street
  document:  letter, draft, attach, comment, review, mark
Table 2. Cold Topics
Trade stock network
Project
Market
trade
London
bank
name
Mexico
conserve
Stock
earn
company
share
price
new
network
world
user
save
secure
system
Court
state
India
server
project
govern
call
market
week
trade
description
respond
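The topic "popularity" defined above is simply the sum of P(z | d) over all documents (optionally normalized); a minimal sketch with an invented document-topic matrix is shown below.

# Minimal sketch of topic "popularity": sum P(z = j | d) over all documents d (toy theta matrix).
import numpy as np

theta = np.array([[0.7, 0.2, 0.1],        # each row is P(z | d) for one document
                  [0.1, 0.8, 0.1],
                  [0.6, 0.3, 0.1]])
popularity = theta.sum(axis=0)
popularity /= popularity.sum()             # normalize so the popularities sum to 1
hot_to_cold = np.argsort(popularity)[::-1]
print(popularity.round(2), hot_to_cold)    # hot topics first, cold topics last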
4.1.2 Topic Trend Analysis
To sense the trend of the topics over time, we calculate the
topic popularity for years 2000 and 2001, and calculate the correlation coefficients of these two series. For some topics, the trends over the years are similar. Figure 3(a) illustrates the trends for the two topics which have the largest correlation coefficients between the two years. Topic 45, which concerns a schedule issue, reaches a peak from June to September. Topic 19 concerns a meeting issue. The trend repeats year to year.
Figure 3(b) illustrates the trend of Topic "California Power"
over 2000 to 2001. We can see that it reaches a peak from the end
of year 2000 to the beginning of year 2001. From the timeline of
Enron [29], we found that "California Energy Crisis" occurred at
exactly this time period. Among the key people related to this
topic, Jeff Dasovich was an Enron government relations
executive. His boss, James Steffes was Vice President of
Government Affairs. Richard Shapiro was Vice President of
Regulatory Affairs. Richard Sanders was Vice President and
Assistant General Counsel. Steven Kean was Executive Vice
President and Chief of Staff. Vincent Kaminski was a Ph.D.
economist and Head of Research for Enron Corp. Mary Hain was
a lawyer at Enron's West Coast trading hub. From the timeline,
we found all these people except Vince were very active in this
event. We will further analyze their roles in Section 5.
(a) Trends of two yearly repeating events (Topics 45 and 19, popularity by month for 2000 and 2001).
(b) The trend of "California Power" (Topic 61) from Jan-00 to Oct-01, with the most related keywords and people. Keywords with P(w|z): power 0.089361, California 0.088160, electrical 0.087345, price 0.055940, energy 0.048817, generator 0.035345, market 0.033314, until 0.030681. Key people with P(u|z): Jeff_Dasovich 0.249863, James_Steffes 0.139212, Richard_Shapiro 0.096179, Mary_Hain 0.078131, Richard_Sanders 0.052866, Steven_Kean 0.044745, Vince_Kaminski 0.035953.
Figure 3. Topic trends
4.2 Predicting Community Patterns
We assume that people communicate with certain people only under a few certain topics. People in the same community
under a topic would share the information. Thus, if there is
something new about one topic, people in that topic-community
will most likely get the information and propagate it to others in
the community. Finally, many people in the community will get
the information.
To evaluate our assumption and answer the question of who
will be possibly involved in an observed email, we collect the
ground truth about who are the senders and receivers for the
emails and use the CTR algorithm to
infer P(u, r | z_j, t_new) by P(u, r | z_j, t_old). We partitioned the data into
training set and testing set. We tried two strategies for this
experiment. First is to randomly partition the data into a training
set with 8465 messages and a testing set with 8408 messages.
Prediction accuracy is calculated by comparing the inference
results and the ground truth (i.e., the receiver-sender pair of that email). We found that 96.8446% of people stick to the old topics they are familiar with. The second strategy is to partition the data by time: emails before 1/31/2000 as the training data (8011) and those after that as the testing data (8862). We found that 89.2757% of the people keep their old topics. Both results are quite promising. It is found that people really do stick to old topics they are familiar with.
INDIVIDUAL ANALYSIS
In this section, we evaluate the performance of
CommunityNet. First, we show how people's roles in an event can
be inferred by CommunityNet. Then, we show the predicting
capability of the proposed model in experiments.
5.1 Role Discovery
People with specific roles in the company hierarchy behave in specific ways on specific topics. Here we show it is possible to
infer people's roles by using CommunityNet.
In Section 4.1.2, we show there are some key people
involved in "California Energy Crisis". In reality, Dasovich,
Steffes, Shapiro, Sanders, and Kean were in charge of
government affairs. Their roles were to "solve the problem". Mary
Hain was a lawyer during the worst of the crisis and attended
meetings with key insiders. We calculated the correlation
coefficients of the trends of these people and the overall trend of
this topic. Jeff Dasovich got 0.7965, James Steffes got 0.6501,
Mary Hain got 0.5994, Richard Shapiro got 0.5604, Steven Kean
got 0.3585 (all among the 10 highest correlation scores among
154 people), and Richard Sanders got 0.2745 (ranked 19), while
Vince Kaminski had a correlation coefficient of -0.4617 (Figure 4). We can see that all the key people except Vince Kaminski have a strong correlation with the overall trend of the "California Energy Crisis". From their positions, we can see that all of them acted essentially as politicians, while Vince Kaminski was a researcher. Thus, the difference in their roles in this topic is clear.
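The role-discovery signal used here is the correlation between an individual's monthly activity on a topic and the overall topic trend; with numpy this is a one-line computation. The series below are invented for illustration and are not the Enron data.

# Minimal sketch of the role-discovery correlation: Pearson correlation between a person's
# monthly activity on a topic and the overall topic trend (all series are invented).
import numpy as np

overall  = np.array([0.01, 0.02, 0.08, 0.30, 0.25, 0.10, 0.03])   # overall topic trend
dasovich = np.array([0.00, 0.03, 0.10, 0.28, 0.22, 0.08, 0.02])   # active during the peak
kaminski = np.array([0.10, 0.08, 0.05, 0.02, 0.03, 0.06, 0.09])   # a different activity pattern

print(np.corrcoef(overall, dasovich)[0, 1])   # close to +1: strongly correlated with the trend
print(np.corrcoef(overall, kaminski)[0, 1])   # negative: a different role in this topic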
Figure 4. Personal topic trend comparison on "California Power" (overall trend vs. Jeff_Dasovich and Vince_Kaminski, Jan-00 to Sep-01)
5.2 Predicting Receivers
Here we want to address the problem of whether it is
possible to infer who will be the receivers from a person's own historic communication records and the content of the email to send. One possible application is to help people organize
personal social capital. For instance, if a user has some
information to send or a question to ask, CommunityNet can
recommend the right persons to send the info or get the answer.
We conduct experiments by partitioning the dataset into a
training set with the emails from 1999 to 2000, and a testing set
with the emails from 2001 to 2002. The testing set is further
partitioned into sub-sets with emails from one month as a subset.
With this, we have 15 testing sets. (We exclude the emails after
March 2002 because the total number of emails after that is only
78.) One issue we want to mention is that the number of people
from 1999 to 2000 is 138, while from 2001 to 2002 it is 154. In this study, we test each email in the testing set by using its content, sender, and time as prior information to predict the receiver, which is compared to the real receiver of that email.
In Figure 5, we illustrate the prediction performance by
comparing the CTR algorithm, PSN, and the aggregated LDA-PSN
model. The result shows that CTR beats PSN by 10% on
accuracy. The aggregated LDA-PSN model performs even worse
than PSN, because of its inaccurate clustering results; the performance gain of CTR over LDA-PSN is 21%. Moreover, intuitively, personal contacts evolve over time, so models built at a specific time should have decreasing predictive capability over time. In this figure, we obtain strong evidence for this hypothesis by observing that the performance of these models monotonically decays. This also implies that our models match practice well.
(a) Accuracy based on the top 5 most likely people (PSN, CTR, and LDA-PSN, Jan-01 to Jan-02).
(b) Accuracy based on the top 10 most likely people (PSN, CTR, and LDA-PSN, Jan-01 to Jan-02).
Figure 5. Prediction accuracy comparisons. Accuracy is measured by testing whether the "real" receiver is among the prediction list of the top 5 or 10 most likely people
5.3 Inferring Senders
We test whether it is possible to infer who will possibly be
the senders given a person's CommunityNet and the content of the
email. One possible application is to exclude spam emails or
detect identification forgery. Figure 6 illustrates the prediction
result, which also shows that the prediction accuracy decays over time.
Figure 6. Predicting senders given receiver and content (top-3 and top-5 accuracy, Jan-01 to Nov-01)
5.4 Adaptive Prediction
We observed that the prediction performance decays over time in the results of Sections 5.2 and 5.3, which reflects the changing nature of email streams. Here we apply the adaptive prediction algorithms mentioned in Section 3.4.4, in which we incrementally and adaptively estimate the statistical parameters of the model by gradually forgetting out-of-date statistics.
(a) Comparison between the Adaptive CTR and CTR models (top-5 and top-10 accuracy, Jan-01 to Nov-01).
(b) Comparison of Adaptive CTR (aggregative), Adaptive CTR (6 months), CTR, LDA-PSN, and PSN using the Breese evaluation metric.
Figure 7. Performance evaluation for the adaptive prediction algorithm and overall comparison
Figure 7 (a) illustrates the performance of the Adaptive CTR
algorithm and compares it to the CTR algorithm.
For the data far
away from the training data,
the improvement is more than 30%.
And, if we compare it to the PSN and LDA-PSN algorithms, the
performance gains are 58% and 75%, respectively. Evaluation by
this accuracy metric tells us how related the top people ranked in
the prediction results are. To understand the overall performance
of the ranked prediction results, we apply the evaluation metric
proposed by Breese [30], and illustrate the overall comparison in
Figure 7(b). This metric is an aggregation of the accuracy
measurements in various top-n retrievals in the ranked list.
Among all predictive algorithms, adaptive CTR models perform
best and PSN performs worst. In the adaptive CTR models, estimating from the most recent six months of data beats aggregatively updating the model with all the data from the history.
COMMUNITYNET APPLICATIONS
In this section, we show two application systems we built
based on the CommunityNet. The first one is a visualization and
query tool that demonstrates the informal networks in an organization. The
second one is a receiver recommendation tool which can be used
in popular email systems. These demos can be accessed from
http://nansen.ee.washington.edu/CommunityNet/.
6.1 Sensing Informal Networks
6.1.1 Personal Social Network
Figure 8 illustrates the interface of a visualization and query
system of CommunityNet.
The distances of the nodes represent the closeness (measured by communication frequencies) of a person to the center person. Users can click on a node to link to
the CommunityNet of another person. This system can show
personal social networks, which includes all the people a user
contacts with during a certain time period. For instance, Figure
8(a) illustrates the personal social network of Vice President John
Arnold from January 1999 to December 2000. During this period,
there were 22 people he sent emails to, regardless of what they were talking about. An evolutionary personal social network is illustrated in Figure 8(b), in which we show how a person's personal social network changes over time. From Jan. 1999 to Dec. 2000, no new contact was added to John's PSN. However, people's relationships changed in 2000. A Personal Social Network
Information Flow is illustrated in Figure 8(c), in which we show
how the information flows through the network (here we illustrate
the information in two levels.)
(a) Personal Social Network of John Arnold
(b1) Jan-`99 to Dec-`99 (b2) Jan-`00 to Jun-`00 (b3) Jul-`00 to Dec-`00
(b) Evolutionary Personal Social Network
(c) Personal Social Network Information Flow with two-level
personal social network of John Arnold
Figure 8. Personal social networks of John Arnold
6.1.2 Personal Topic-Community Network
A personal topic-community network can show whom this user will contact under a certain topic.
On retrieval, keywords are
required for inferring the related topics. Figure 9 illustrates
several personal topic-community networks for John Arnold.
First, we type in "Christmas" as the keyword. CommunityNet
infers it as "holiday celebration" and shows the four people John
contacted about this topic. About "Stock", we find John
talked with five people on "Stock Market" and "Company Share"
from Jan. 1999 to Dec. 2000. Personal Topic-Community network
can be depicted by the system, too.
Figure 9. Personal Topic-Community Networks when we type in
"Christmas" and "Stock"
6.2 Personal Social Capital Management Receiver
Recommendation Demo
When a user has some questions, he/she may want to know whom to ask, how to find an expert, and who may tell him/her more details because of their close relationships. In our second demo, we show a CommunityNet application which addresses this problem. This tool can be incorporated into general email systems to help users organize their personal social capital. First, after a user logs in to a webmail system, he can type in content and/or a subject and then click on "Show Content Topic-Community". This
tool then recommends appropriate people to send this email to,
based on the learned personal social network or personal topic-community. The distances of nodes represent the closeness of the
people to the user. Users can click on the node to select an
appropriate person to send email to. If the center node is clicked,
then a sphere grows to represent his ties to a group of experts.
Click on "Mail To", then the people in the sphere will be included
in the sender list.
In the examples in Figure 10, we log in as Jeff Dasovich. He
can ask his closest friends whenever he has questions or wants to
disseminate information. If he wants to inform or get informed on "Government"-related topics, the system will suggest that he send
emails to Steffes, Allen, Hain, or Scott. The topics are inferred by
matching the terms from the Subject as well as the content of the
email. He can also type in "Can you tell me the current stock
price?" as the email content. This system will detect "Stock
Market" as the most relevant topic. Based on Dasovich's
CommunityNet, it shows three possible contacts. He then chooses
appropriate contact(s).
(a) Receiver recommendation for "Government"
(b) Receiver recommendation for "Can you tell me the current
stock price?"
Figure 10.Receiver recommendation demo system
CONCLUSIONS AND FUTURE WORK
In this paper, we propose a new way to automatically model
and predict human behavior of receiving and disseminating
information. We establish personal CommunityNet profiles based
on a novel Content-Time-Relation algorithm, which incorporates
contact, content, and time information simultaneously from
personal communication.
CommunityNet can model and predict
the community behavior as well as personal behavior. Many
interesting results are explored, such as finding the most
important employees in events, predicting senders and receivers
of emails, etc. Our experiments show that this multi-modality
algorithm performs better than both the social network-based
predictions and the content-based predictions. Ongoing work
includes studying the response time of each individual to emails
from different people to further analyze user's behavior, and also
incorporating nonparametric Bayesian methods such as
hierarchical LDA with contact and time information.
ACKNOWLEDGMENTS
We would like to thank D. Blei, T. Griffiths, Yi Wu and
anonymous reviewers for valuable discussions and comments.
This work was supported by funds from NEC Labs America.
REFERENCES
[1]
B. A. Nardi, S. Whittaker, and H. Schwarz. "It's not what you know, it's who
you know: work in the information age," First Mon., 5, 2000.
[2]
D. Krackhardt, "Panel on Informal Networks within Formal Organizations,"
XXV Intl. Social Network Conf., Feb. 2005.
[3]
D. Krackhardt and M. Kilduff, "Structure, culture and Simmelian ties in
entrepreneurial firms," Social Networks, Vol. 24, 2002.
[4]
B. Nardi, S. Whittaker, E. Isaacs, M. Creech, J. Johnson, and J. Hainsworth,
"ContactMap: Integrating Communication and Information Through
Visualizing Personal Social Networks," Com. of the Association for
Computing Machinery. April, 2002.
[5]
https://www.linkedin.com/home?trk=logo.
[6]
https://www.orkut.com/Login.aspx.
[7]
http://www.friendster.com/.
[8]
N. Lin, "Social Capital," Cambridge Univ. Press, 2001.
[9]
W. Cohen. http://www-2.cs.cmu.edu/~enron/.
[10]
S. Milgram. "The Small World Problem," Psychology Today, pp 60-67, May
1967.
[11]
M. Schwartz and D. Wood, "Discovering Shared Interests Among People
Using Graph Analysis", Comm. ACM, v. 36, Aug. 1993.
[12]
H. Kautz, B. Selman, and M. Shah. "Referral Web: Combining social
networks and collaborative filtering," Comm. ACM, March 1997.
[13]
G. W. Flake, S. Lawrence, C. Lee Giles, and F. M. Coetzee. "Self-organization
and identification of Web communities," IEEE Computer, 35(3):6670, March
2002.
[14]
J. Tyler, D. Wilkinson, and B. A. Huberman. "Email as spectroscopy:
Automated Discovery of Community Structure Within Organizations," Intl.
Conf. on Communities and Technologies., 2003.
[15]
L. Page, S. Brin, R. Motwani and T. Winograd. "The PageRank Citation
Ranking: Bringing Order to the Web," Stanford Digital Libraries Working
Paper, 1998.
[16]
J. Kleinberg. "Authoritative sources in a hyperlinked environment," In Proc.
9th ACM-SIAM Symposium on Discrete Algorithms, 1998.
[17]
S. Wasserman, and P. E. Pattison, "Logit models and logistic regression for
social networks: I. An introduction to Markov graphs and p*", Psychometrika,
61: 401-425, 1996.
[18]
T. A.B. Snijders. "Models for Longitudinal Network Data," Chapter 11 in
Models and methods in social network analysis, New York: Cambridge
University Press, 2004.
[19]
D. Liben-Nowell and J. Kleinberg, "The Link Prediction Problem for Social
Networks," In Proceedings of the 12th Intl. Conf. on Information and
Knowledge Management, 2003.
[20]
J. Kubica, A. Moore, J. Schneider, and Y. Yang. "Stochastic Link and Group
Detection," In Proceedings of the 2002 AAAI Conference. Edmonton, Alberta,
798-804, 2002.
[21]
M. Handcock and D. Hunter, "Curved Exponential Family Models for
Networks," XXV Intl. Social Network Conf., Feb. 2005.
[22]
T. Hofmann, "Probabilistic Latent Semantic Analysis,"
Proc. of the Conf. on Uncertainty in Artificial Intelligence, 1999.
[23]
D. Blei, A. Ng, and M. Jordan, "Latent Dirichlet allocation," Journal of
Machine Learning Research, 3:993-1022, January 2003.
[24]
T. Griffiths and M. Steyvers, "Finding Scientific Topics," Proc. of the
National Academy of Sciences, 5228-5235, 2004.
[25]
M. Rosen-Zvi, T. Griffiths, M. Steyvers and P. Smyth, "The Author-Topic Model
for Authors and Documents",
Proc. of the Conference on Uncertainty in Artificial Intelligence volume 21,
2004.
[26]
A. McCallum, A. Corrada-Emmanuel, and X. Wang, "The Author-Recipient-Topic
Model for Topic and Role Discovery in Social Networks: Experiments
with Enron and Academic Email," Technical Report UM-CS-2004-096, 2004.
[27]
X. Song, B. L. Tseng, C.-Y. Lin, and M.-T. Sun, "ExpertiseNet: Relational and
Evolutionary Expert Modeling," 10th Intl. Conf. on User Modeling,
Edinburgh, UK, July 24-30, 2005.
[28]
J. Allan, R. Papka, and V. Lavrenko. "On-line New Event Detection and
Tracking," Proc. of 21st ACM SIGIR, pp.37-45, August 1998.
[29]
http://en.wikipedia.org/wiki/Timeline_of_the_Enron_scandal.
[30]
J. Breese, D. Heckerman, and C. Kadie. "Empirical analysis of predictive
algorithms for collaborative filtering," Conf. on Uncertainty in Artificial
Intelligence, Madison,WI, July 1998.
139 | Modeling behavioral design patterns of concurrent objects | Object-oriented software development practices are being rapidly adopted within increasingly complex systems, including reactive, real-time and concurrent system applications. While data modeling is performed very well under current object-oriented development practices, behavioral modeling necessary to capture critical information in real-time, reactive, and concurrent systems is often lacking. Addressing this deficiency, we offer an approach for modeling and analyzing concurrent object-oriented software designs through the use of behavioral design patterns, allowing us to map stereotyped UML objects to colored Petri net (CPN) representations in the form of reusable templates. The resulting CPNs are then used to model and analyze behavioral properties of the software architecture, applying the results of the analysis to the original software design. | Introduction
Object-oriented software development practices are being
rapidly adopted within increasingly complex systems, including
reactive, real-time and concurrent system applications. In
practice, though, object-oriented software design techniques are
still predominantly focused on the creation of static class
models. Dynamic architectural models capturing the overall
behavioral properties of the software system are often
constructed using ad hoc techniques with little consideration
given to the resulting performance or reliability implications
until the project reaches implementation. Efforts to analyze
behavioral issues of these architectures occur through
opportunistic rather than systematic approaches and are
inherently cumbersome, unreliable, and unrepeatable.
One means of improving the behavioral modeling capabilities of
object-oriented architecture designs is to integrate formalisms
with the object-oriented specifications. Using this technique,
object-oriented design artifacts are captured in a format such as
the Unified Modeling Language (UML) [1], which is intuitive to
the software architect. The native object-oriented design is then
augmented by integrating an underlying formal representation
capable of providing the necessary analytical tools. The
particular method used in this research [2] is to integrate colored
Petri nets (CPNs) [3] with object-oriented architecture designs
captured in terms of UML communication diagrams.
Specifically, this paper will present a method to systematically
translate a UML software architecture design into an underlying
CPN model using a set of pre-defined CPN templates based on a
set of object behavioral roles. These behavioral roles are based
on the object structuring criteria found in the COMET method
[4], but are not dependent on any given method and are
applicable across application domains. This paper will also
demonstrate some of the analytical benefits provided by
constructing a CPN representation of the UML software
architecture. After a survey of related research, Section 2
describes the concept of behavioral design pattern templates for
modeling concurrent objects. Section 3 discusses how we
construct an overall CPN model of the concurrent software
architecture by interconnecting the individual behavioral design
pattern templates. Section 4 describes the validation of the
approach.
1.1
Related Research
There are many existing works dealing with the use of Petri nets
for describing software behavior. As they relate to this paper,
the existing works can be broadly categorized into the modeling
of software code and the modeling of software designs. In this
research, the focus is on improving reliability of object-oriented
software designs rather than delaying detection to the software
code. In terms of object-oriented design, the related Petri net
research can be categorized as new development methodologies
[5-8]; object-oriented extensions to Petri nets [9-12]; and the
integration of Petri nets with existing object-oriented
methodologies [13-20]. Since one of the goals of this research
effort is to provide a method that requires no additional tools or
language constructs beyond those currently available for the
UML and CPN definitions, this approach [2,21-25] falls into the
last category of integrating Petri nets with existing
methodologies. The main features that distinguish this approach
from other related works are a focus on the concurrent software
architecture design and the use of consistent, reusable CPN
templates to model the behavior of concurrent objects and their
interactions. This paper also extends our more recent works
[25] by specifically focusing on the behavioral design patterns
of individual concurrent objects and applying these patterns to
construct an underlying representation of the concurrent
software design architecture.
Modeling Behavioral Design Patterns
To model concurrent object behavioral design patterns with
CPNs, our approach starts with a concurrent software
architecture model captured in UML. For the construction of
this architecture model, we identify a set of behavioral design
patterns used to categorize the objects along with a set of
specification requirements necessary to correctly model the
concurrent behavior with the underlying CPN model. Each of
the identified behavioral design patterns then has a
corresponding template, represented as a CPN segment, which is
paired with the UML object and is instantiated to capture
specific behavioral characteristics based on the object
specifications. The following sections describe the object
architecture definition along with the concept of behavioral
pattern templates for modeling concurrent objects. Section 3
will then discuss how we construct an overall CPN model of the
concurrent object architecture by connecting the individual
behavioral pattern templates.
2.1
Concurrent Object Modeling
Our approach uses a UML communication diagram to capture
the concurrent software architecture. Depending on the desired
level of modeling, this architecture model can be constructed for
an entire software system or for one or more individual
subsystems. This communication diagram contains a collection
of concurrent (active) and passive objects along with the
message communication that occurs between the objects. Using
our approach, objects within the concurrent software
architecture are organized using the notion of components and
connectors. Under this paradigm, concurrent objects are treated
as components that can be connected through passive message
communication objects and entity objects. In keeping with the
COMET object structuring criteria, each object is assigned a
UML stereotype to indicate its behavioral design pattern.
Objects are broadly divided into application objects, which
perform the work, and connector objects, which provide the
means of communicating between application objects. For
application objects, we use six stereotyped behavioral design
patterns as illustrated in Figure 1: interface, entity, coordinator,
state-dependent, timer, and algorithm. Additionally, connector
objects can take the roles of: queue, buffer, or buffer-with-response
, corresponding to asynchronous, synchronous, and
return messages. These patterns are not intended to be an
exhaustive list, but rather are intended to represent sufficient
variety to model concurrent systems across a wide range of
domains while also allowing these patterns to be extended as
necessary for future applications.
The identification of stereotyped behavioral roles allows us to
select a specific CPN template to model each object (further
described in Section 3.2). These behavioral stereotypes are
generic across applications, so we also capture specific
application information using the following tagged values:
Execution Type. Each object must be declared as either
passive or concurrent and for concurrent objects, further
specified to be asynchronous or periodic.
IO Mapping. Input-output message pairings must be
specified for each object.
Communication Type. Indicate whether message
communication occurs through asynchronous or synchronous
means.
Activation Time. The period of activation must be specified
for each periodic concurrent object.
Processing Time. Estimated processing times for completing
an execution cycle should be assigned to each object if
timing is to be accounted for in performance analysis.
Operation Type. Indicate whether operations on entity
objects perform "reader" or "writer" functionality.
Statechart. For each state-dependent object, a UML
statechart is used to capture the state behavior for that object.
A detailed discussion of how the statechart is translated into
the CPN model is provided in Pettit and Gomaa [24].
Figure 1. Stereotype Hierarchy for Application Objects
2.2
Defining Behavioral Pattern Templates
The basis for our approach to modeling concurrent object
behavior lies in the notion of a behavioral design pattern (BDP)
template, which represents concurrent objects according to their
role along with associated message communication constructs.
For each BDP template, we employ a self-contained CPN
segment that, through its places, transitions, and tokens, models
a given stereotyped behavioral pattern. Each template is generic
in the sense that it provides us with the basic behavioral pattern
and component connections for the stereotyped object but does
not contain any application-specific information. The
connections provided by each template are consistent across the
set of templates and allow concurrent objects to be connected to
passive objects (entities or message communication) in any
order.
We provide a BDP template for each object type identified in
the previous section. Since each of these templates captures a
generic behavioral design pattern, when a template is assigned
to a specific object, we then augment that template with the
information captured in the tagged values for the object. For the
resulting CPN representation, this affects the color properties of
the tokens (e.g. to represent specific messages) and the rules for
processing tokens (e.g. to account for periodic processing or
special algorithms). The following sections describe a subset of
our behavioral templates for both concurrent object components
and their connectors.
[Figure 1 labels: application, interface, entity, control, algorithm, coordinator, timer, state dependent.]
2.2.1
Asynchronous Interface Object Template
Consider the case of an asynchronous, input-only interface
object. The template for this behavioral design pattern is given
in Figure 2.
This template represents a concurrent object, that is, an object
that executes its own thread of control concurrently with other
objects in the software system. While this template models
relatively simple behavior (wait for input; process input; wait for
next input), it features characteristics found throughout the
concurrent object templates. First, to model the thread of
control within a concurrent object, a control token (CTRL) is
assigned to each concurrent object. For this template, a control
token is initially present in the Ready place. Thus, this template
is initialized in a state whereby it is ready to receive an input at
the ProcessInput transition. As an input arrives (and given that
the control token is in the Ready place), ProcessInput is allowed
to fire, simulating the processing of the external input and the
behavior of the asynchronous input interface object.
ProcessInput consumes both a token representing the external
input as well as the control token representing the executable
thread of control. An output arc from ProcessInput uses a
function, processInput (Input_event) to generate the appropriate
token representing an internal message passed to another object
within the system. The exact behavior of the processInput
function (as with any arc-inscription functions throughout the
templates) is determined from the object specification when a
template is instantiated for a specific object. Finally, to
complete the behavioral pattern for this template, the control
token is passed to the MessageSent place and eventually back to
the Ready place, enabling the template to process the next input.
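For readers who prefer code to nets, the following Python sketch roughly imitates this Ready / ProcessInput / MessageSent cycle. It is only an approximation of the CPN semantics; the process_input function stands in for the arc inscription and is supplied when the template is instantiated.

    from collections import deque

    class AsyncInterfaceTemplate:
        # Approximates the asynchronous input-only interface pattern:
        # wait for input; process input; return the control token to Ready.
        def __init__(self, process_input):
            self.process_input = process_input  # stands in for the processInput(Input_event) arc function
            self.ready = ["CTRL"]               # Ready place initially holds the control token
            self.inputs = deque()               # tokens from the external input device
            self.outputs = deque()              # internal messages to a connector object

        def receive(self, input_event):
            self.inputs.append(input_event)

        def step(self):
            # ProcessInput fires only when an input token AND the control token are present.
            if self.inputs and self.ready:
                ctrl = self.ready.pop()
                event = self.inputs.popleft()
                self.outputs.append(self.process_input(event))
                self.ready.append(ctrl)         # MessageSent, then back to Ready for the next input

    template = AsyncInterfaceTemplate(lambda event: ("internalMsg", event))  # hypothetical message format
    template.receive("inputEvent")
    template.step()
    print(template.outputs.popleft())   # ('internalMsg', 'inputEvent')
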
2.2.2
Periodic Algorithm Object Template
The asynchronous interface template addressed asynchronous
behavior for a concurrent object, where the object is activated
on demand by the receipt of a message or an external stimulus
(as in the case of the interface example). For periodic behavior,
where an object is activated on a regular periodic interval,
consider the template for a concurrent periodic algorithm object
given in Figure 3.
Algorithm objects are internal concurrent objects that
encapsulate algorithms, which may be activated or deactivated
on demand. In the case of the periodic algorithm object, once
the algorithm is enabled, it awakens on its activation period,
performs the desired algorithmic task, and then returns to a sleep
state until the next activation period.
Looking at the periodic algorithm template in Figure 3, notice
that, like the previous concurrent object template, there is a
Ready place with a control token that indicates when the
object is ready to start its next processing cycle and models the
thread of execution. This is common across all concurrent
object templates. To model the ability for an algorithm object to
be enabled or disabled, the input interface to this template
occurs through the Enable_Alg and Disable_Alg transitions.
(Note that we maintain the use of transitions as the interface
points for all concurrent objects.) Thus, in addition to the
control token being present on the Ready place, an Enable token
must also be present on the Alg_Enabled place in order for the
Perform_Alg transition to be enabled and subsequently fired.
The actual behavior performed by the algorithm is captured by
decomposing the Perform_Alg transition.
The resulting decomposition uses one or more place-transition
paths to model the behavior performed within the algorithm.
The information necessary to derive the CPN algorithm model
may be contained in the UML class specification for the
algorithm object or, for more complex algorithms, may be
captured in supporting UML artifacts such as the activity
diagram. Multiple algorithms may be encapsulated within the
same algorithm object. In these cases, the enable/disable
transitions, enabled place, and processing transition are repeated
for each encapsulated algorithm. However, there will only ever
be one control token and ready place in a single concurrent
object as our approach does not allow for multi-threaded
concurrent objects.
Finally, to capture the periodic nature of this template, a Sleep
place along with Wakeup and Timeout transitions have been
added to the basic asynchronous object template. This place-transition
pair will be common to all periodic templates. In this
case, the periodic algorithm starts in the Sleep place rather than
Ready. After the desired sleep time (indicating the activation
period of the object) has elapsed, the Wakeup transition is
enabled and, when fired, removes the CTRL token from the
Sleep place and places it in the Ready place. This now enables
the template to perform any enabled algorithms. If one or more
algorithms are enabled, the template proceeds in the same
manner as the previous asynchronous algorithm template.
However, if no algorithms are enabled when the template wakes
up, the Timeout transition will fire and return the Control token
to the Sleep place and wait for the next period of activation.
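A minimal sketch of this Sleep/Wakeup/Timeout behavior, again in plain Python rather than CPN ML and assuming a simple numeric clock, is shown below; it only approximates the template's timed-token semantics.

    class PeriodicAlgorithmTemplate:
        # Approximates the periodic algorithm pattern: the control token sleeps, wakes up
        # on the activation period, runs the enabled algorithm, then returns to Sleep.
        def __init__(self, period, algorithm):
            self.period = period            # Activation Time tagged value
            self.algorithm = algorithm      # behavior behind the Perform_Alg transition
            self.enabled = False            # token on the Alg_Enabled place
            self.next_wakeup = period       # time at which the Wakeup transition is enabled

        def enable(self):                   # Enable_Alg transition
            self.enabled = True

        def disable(self):                  # Disable_Alg transition
            self.enabled = False

        def tick(self, now):
            if now < self.next_wakeup:      # CTRL token still on the Sleep place
                return None
            self.next_wakeup += self.period
            if self.enabled:
                return self.algorithm()     # Perform_Alg fires
            return None                     # Timeout fires; CTRL goes straight back to Sleep

    alg = PeriodicAlgorithmTemplate(period=100, algorithm=lambda: "throttleValue")
    alg.enable()
    print([alg.tick(t) for t in (50, 100, 150, 200)])  # [None, 'throttleValue', None, 'throttleValue']
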
2.2.3
Entity Object Template
In contrast to concurrent objects, passive objects do not execute
their own thread of control and must rely on operation calls
from a concurrent object. Using our approach, the entity objects
from Figure 1 are passive objects. The purpose of an entity
object is to store persistent data. Entities provide operations to
access and manipulate the data stored within the object. These
operations provide the interface to the entity object. To account
for the possibility of multiple concurrent objects accessing a
single entity object, our approach stipulates that each operation
be tagged as having "read" or "write" access and for the object
to be tagged with "mutually exclusive" or "multiple-reader/single
-writer" rules for access control. This allows us to
apply the appropriate template with the desired mutual exclusion
protection for the encapsulated object attributes. The behavioral
design pattern template representing an entity object with
mutually exclusive access is shown in Figure 4.
In this template, attributes are modeled with a CPN place
containing tokens representing the attribute values. The
underlying functionality of each operation is captured in an
"idmOperation" transition that can be further decomposed as
necessary to implement more complex functions. When
instantiated for a specific entity object, the "idm" tag is replaced
with a specific identifier for each operation. Finally, the
interface to each operation is provided by a pair of CPN places:
one place for the operation call and another for the return.
Collectively, these places form the interface to the entity object.
As opposed to concurrent objects, all passive objects and
message connectors will use CPN places for their interface,
allowing concurrent objects to be connected through their
transition interfaces. Thus, for performing an operation call, a
concurrent object places its control token and any necessary
parameter tokens on the calling place and then waits for the
control token to be returned along with any additional operation
results at the call return place. Recall that entity objects do not
have their own thread of control, thus they become part of the
calling object's thread of control for the duration of the
operation call.
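In conventional code, the mutually exclusive entity template corresponds roughly to a monitor-style wrapper around the attribute values; the sketch below uses a lock in place of the CPN access token and is illustrative only.

    import threading

    class EntityTemplate:
        # Passive entity with mutually exclusive access: the lock plays the role of the
        # single access token, and callers block until their "return place" is filled.
        def __init__(self, initial_value):
            self._value = initial_value      # attribute tokens
            self._mutex = threading.Lock()   # mutual-exclusion protection

        def read(self):                      # operation tagged as "reader"
            with self._mutex:
                return self._value

        def write(self, value):              # operation tagged as "writer"
            with self._mutex:
                self._value = value

    current_speed = EntityTemplate(initial_value=0)
    current_speed.write(60)
    print(current_speed.read())   # 60
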
2.2.4
Message Communication Templates
Finally, in addition to application object templates, our method
also provides templates for connector objects representing
message communication. These connectors may represent
asynchronous or synchronous message communication between
two concurrent objects.
Figure 2. Asynchronous Input-Only Interface Object: (a) UML (2.0); (b) CPN Template
Figure 3. Periodic Algorithm Template: (a) UML; (b) CPN Representation
[Figure 2 labels: externalInputSource <<external input device>>, inputEvent, asyncInputInterface <<interface>> {Execution = async; IO = input; Process Time = <process time>}, asyncMsg to internal connector object.]
[Figure 3 labels: periodicAlgorithmObject <<algorithm>> {Execution = periodic; Activation Time = <sleep time>; Process Time = <process time>}, enable.]
Figure 4. Passive Entity Template: (a) UML; (b) CPN Representation
Consider the message buffer template shown in Figure 5.
Notice that, as with passive entity objects, the interface to
connector objects always occurs through a place rather than a
transition, thus allowing concurrent object interfaces to be
linked with connector interfaces while still enforcing the Petri
net connection rules of only allowing arcs to occur between
transitions and places. The message buffer template models
synchronous message communication between two concurrent
objects. Thus, only one message may be passed through the
buffer at a time and both the producer (sender) and consumer
(receiver) are blocked until the message communication has
completed. The behavior of synchronous message
communication is modeled through this template by first having
the producer wait until the buffer is free as indicated by the
presence of a "free" token in the buffer. The producer then
places a message token in the buffer and removes the free token,
indicating that the buffer is in use. Conversely, the consumer
waits for a message token to appear in the buffer. After
retrieving the message token, the consumer sets the buffer once
again to free and places a token in the "Return" place, indicating
to the producer that the communication has completed.
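The same producer/consumer protocol can be approximated with a one-slot queue and a return event, as in the following illustrative Python sketch; the blocking calls stand in for the free/message/Return token exchange of the template.

    import queue
    import threading

    class BufferConnector:
        # Approximates the synchronous message buffer: one message at a time; the
        # producer blocks until the consumer signals completion (the Return place).
        def __init__(self):
            self._slot = queue.Queue(maxsize=1)   # free / in-use buffer
            self._returned = threading.Event()    # Return place

        def send(self, message):                  # producer side
            self._returned.clear()
            self._slot.put(message)               # waits while the buffer is in use
            self._returned.wait()                 # blocked until the consumer acknowledges

        def receive(self):                        # consumer side
            message = self._slot.get()            # waits for a message token
            self._returned.set()                  # buffer free again; unblock the producer
            return message

    buffer = BufferConnector()
    threading.Thread(target=lambda: print("consumer got:", buffer.receive())).start()
    buffer.send("throttleValue")                  # returns only after the message has been taken
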
Asynchronous message connector templates continue to employ
places for their interfaces. However, asynchronous message
communication, which involves the potential for queuing of
messages, is more involved than the simple synchronous
message buffer and must therefore add a transition to handle this
behavior. The corresponding template is shown in Figure 6.
With asynchronous communication the sender is not blocked
awaiting acknowledgement that the sent message has been
received and a message queue is allowed to form for the object
receiving the asynchronous messages. In this template, the
ManageQueue transition is decomposed into a subnet that
implements the FIFO placement and retrieval of messages in the
queue [26]. To send an asynchronous message, a concurrent
object places a message token on the Enqueue place. The
subnet under ManageQueue would then add this message token
to the tail of the queue. Another concurrent object receiving the
asynchronous message would wait for a message token to be
available in the Dequeue place (representing the head of the
queue). It would then remove the message token from Dequeue
and signal DequeueComplete in a similar manner to the
operation calls previously described for entities. This signals
the queue that a message token has been removed from the head
of the queue and that the remaining messages need to be
advanced.
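A rough code analogue of this connector is a plain FIFO with a non-blocking enqueue, as sketched below; in the actual template the ordering is handled by the subnet under ManageQueue.

    from collections import deque

    class QueueConnector:
        # Approximates the asynchronous message queue connector: senders never block,
        # and FIFO order is preserved between Enqueue and Dequeue.
        def __init__(self):
            self._fifo = deque()       # stands in for the subnet under ManageQueue

        def enqueue(self, message):    # sender places a token on the Enqueue place
            self._fifo.append(message)

        def dequeue(self):             # receiver removes the token at the head (Dequeue place)
            return self._fifo.popleft() if self._fifo else None

        # DequeueComplete is implicit here: popleft() already advances the remaining messages.

    q = QueueConnector()
    q.enqueue("Accel")
    q.enqueue("Off")
    print(q.dequeue(), q.dequeue())    # Accel Off
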
Figure 5. Synchronous Message Buffer Connector Template:
(a) UML; (b) CPN Representation
[Figure 4 labels: anActiveObject, anotherActiveObject, anEntityObject <<entity>> {Access Control = mutually-exclusive} with read() and write() operations.]
[Figure 5 labels: producer, consumer, data.]
Figure 6. Asynchronous Message Queue Template
Constructing CPN Models from UML
Up to this point, we have just discussed individual CPN
templates being used to model behavioral design patterns of
concurrent objects, passive entity objects, and message
communication mechanisms. This section presents our method
for constructing a CPN model of the concurrent software
architecture by applying and interconnecting these templates.
The basic construction process consists of the following steps:
1.
Construct a concurrent software architecture model using a
UML communication diagram to show all concurrent and
passive objects participating in the (sub) system to be
analyzed along with their message communication.
2.
Begin constructing the CPN model by first developing a
context-level CPN model showing the system as a single
CPN substitution (hierarchically structured) transition and
the external interfaces as CPN places. Using a series of
hierarchically structured transitions allows us to work with
the CPN representation at varying levels of abstraction, from
a completely black-box view, a concurrent software
architecture view (in the next step), or within an individual
object as desired for the level of analysis being applied to the
model.
3.
Decompose the system transition of the CPN context-level
model, populating an architecture-level model with the
appropriate CPN templates representing the objects from the
concurrent software architecture.
4.
Elaborate each CPN template instance to account for the
specific behavioral properties of the object it models.
5.
Connect the templates, forming a connected graph between
concurrent object templates and passive entity objects or
message communication mechanisms.
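The five steps above can be compressed into a small sketch of the construction loop (hypothetical helper names and plain dictionaries; the actual work is performed on CPN templates, not Python data).

    # Hypothetical, compressed view of steps 3-5: instantiate a template per object,
    # attach the tagged values (elaboration), and record the component connections.
    def build_cpn_architecture(object_specs, connections):
        instances = {}
        for spec in object_specs:                            # step 3
            instances[spec["name"]] = {
                "template": spec["stereotype"],              # which BDP template to instantiate
                "tagged_values": spec.get("tags", {}),       # step 4: elaboration data
            }
        graph = list(connections)                            # step 5: connected graph of templates
        return instances, graph

    objects = [
        {"name": "CruiseControlLeverInterface", "stereotype": "interface",
         "tags": {"Execution": "async", "Process Time": "100ms"}},
        {"name": "CruiseControl", "stereotype": "state dependent",
         "tags": {"Execution": "async", "Process Time": "200ms"}},
    ]
    links = [("CruiseControlLeverInterface", "queue", "CruiseControl")]
    print(build_cpn_architecture(objects, links))
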
To illustrate the application of this approach, consider a partial
example from the well-known Cruise Control System [4]. This
example was chosen for this paper as it requires little
explanation for the UML model and allows us to focus on the
use of behavioral design pattern templates and the CPN
representations. Figure 7 provides a partial communication
diagram of the Cruise Control System concurrent software
architecture.
To begin, focus on the input events being provided by the
Cruise Control Lever. (We will return to the brake and engine
inputs later in this section.) Cruise control lever events enter the
system via a concurrent interface object that sends an
asynchronous message to the state dependent control object to
process the requests based on rules defined in a corresponding
statechart. Based on the state of the CruiseControl object,
commands are given to a concurrent periodic algorithm object
enabling it to compare speed values from two passive entity
objects and determine the correct throttle values, which are then
passed on to the periodic output interface, ThrottleInterface.
Given this concurrent software architecture, the second step in
our process would construct the context-level CPN model
shown in Figure 8. At this level, we see the system as a black-box
represented as a single transition, "CruiseControlSystem".
External input and output interfaces for the cruise control lever,
brake, and engine devices are represented as places. The
purpose of this context-level CPN model is to provide a central
starting point for our modeling and analysis. By structuring the
CPN model in this way, we can analyze the system as a black
box, dealing only with external stimuli and observed results
(corresponding to the tokens stored in these places) or we can
use hierarchical decomposition to gain access to the individual
object behavioral design pattern templates (and their detailed
CPN implementation) by systematically decomposing the
hierarchically structured transitions (indicated with the HS tag).
In the third step, the CruiseControlSystem transition from the
context-level model is decomposed into an architecture-level
model populated with the appropriate CPN behavioral design
pattern template for each of the cruise control objects. Given
the architecture design from Figure 7 (and continuing to ignore
AutoSensors for the moment), we would need to instantiate two
interface templates, two entity templates, one state
dependent control template, and one algorithm template. We
would also need to use queue and buffer templates for the
asynchronous and synchronous message communication
respectively.
Once the appropriate templates have been assigned to each
object, the fourth step in the process is to elaborate each
template to model a specific object. To illustrate, consider
CruiseControlLeverInterface. This object is an asynchronous
input-only interface that accepts events from the cruise control
lever device and, based on the input event, generates the
appropriate messages for the cruise control request queue.
Applying the asynchronous input interface template from Figure
2, we arrive at the elaborated CPN segment for
CruiseControlLeverInterface shown in Figure 9.
To elaborate the template for the CruiseControlLeverInterface,
the place and transition names from the basic template have
been appended with the object ID (1) for the specific object.
The control token for this model has also been set to the specific
control token for the CruiseControlLeverInterface object
(CTRL1) and the time region for the PostProcessing_1
transition has been set to "@+100" to reflect the Process Time
tagged value. The CruiseControlLeverInterface CPN
representation is then connected to the software architecture by
establishing an input arc from the CruiseControlLeverDevice
place, representing the external input from the device, and an
output arc to the Enqueue place, modeling the asynchronous
message communication identified in the UML software
architecture. Token types (colors) are then specifically created
to represent the incoming event and outgoing messages. Finally,
the processInput1() function is elaborated to generate the
appropriate asynchronous message based on an incoming lever
event. This elaboration process is similar for all templates.
Figure 7. Partial Concurrent Software Architecture for Cruise Control
Figure 8. CPN Context-Level Model for Cruise Control
Figure 9. Asynchronous Input-Only Interface Template
Applied to CruiseControlLeverInterface
Once all templates have been elaborated, our fifth and final step
connects the templates to form a connected graph of the
concurrent software architecture. The entire CPN architecture
model for cruise control is too large for inclusion in this paper.
However, Figure 10 illustrates the component connections
between the CruiseControlLeverInterface and the CruiseControl
templates using an asynchronous message queue connector. As
can be seen from this figure, the two concurrent object templates
communicate via the queue connector by establishing arcs
between the interface transitions of the concurrent objects and
the interface places of the queue connector. This component
connection method applies to the entire software architecture
using our approach of allowing concurrent objects to be
connected to either passive entity objects or to a message
communication connector.
To further illustrate the component-based approach used for
constructing these CPN models, let us now consider expanding the
model to include input from the brake and engine devices. In
addition to the cruise control lever inputs, Figure 7 also shows
brake and engine status messages arriving from the respective
devices. These status messages are handled by the AutoSensors
periodic interface object and are passed to CruiseControl via an
asynchronous message through the same cruise control request
queue already being used by CruiseControlLeverInterface.
Using our component-based modeling approach, the
AutoSensors object can be added to our CPN model by simply
instantiating a CPN representation of the periodic input interface
behavioral design pattern template using the specified
characteristics for AutoSensors and then connecting it to the
existing queue template. The resulting CPN model is given in
Figure 11.
The addition of AutoSensors also illustrates another capability
of the interface template. Whereas the cruise control lever is an
asynchronous device, providing interrupts to
CruiseControlLeverInterface, the brake and engine devices are
passive devices that must be polled for their status. In Figure
11, every time AutoSensors is activated, it retrieves the status
token from the brake and engine device places. After checking
the status, the token is immediately returned to the device
places, modeling persistence of device status information that
can be polled as necessary. The remainder of the AutoSensors
template should be familiar, being constructed of the standard
Ready and ProcessInput place-transition pair for interface object
templates (Section 2.2.1) and the Sleep and Wakeup place-transition
pair included for periodic objects (Section 2.2.2).
As demonstrated in this section, the primary benefits of our
component-based modeling approach are that connections can
easily be added or modified as the architecture evolves and that
rapid "what-if" modeling and analysis can be performed.
[Figure 7 content: CruiseControlLeverDevice, BrakeDevice, and EngineDevice <<external input device>>; CruiseControlLeverInterface <<interface>> {Execution = async; IO = input; Process Time = 100ms}; AutoSensors <<interface>> {Execution = periodic; IO = input; Activation Time = 100ms; Process Time = 20ms}; CruiseControl <<state dependent>> {Execution = async; Process Time = 200ms}; SpeedAdjustment <<algorithm>> {Execution = periodic; Activation Time = 100ms; Process Time = 50ms}; ThrottleInterface <<interface>> {Execution = periodic; IO = output; Activation Time = 100ms; Process Time = 20ms}; entities :DesiredSpeed and :CurrentSpeed with select(), clear(), and read() operations; messages cruiseControlLeverInput, cruiseControlRequest, brakeStatus, engineStatus, ccCommand, throttleValue, and throttleOutput to the throttle.]
Figure 10. Connecting CruiseControlLeverInterface and CruiseControl via Asynchronous Communication
Figure 11. Addition of AutoSensors to the CPN Architecture
Furthermore, by maintaining the correspondence between a CPN
template and the object it represents, modeling and analysis
results can readily be applied to the original UML software
architecture model. Thus, while from a pure CPN perspective,
our CPNs could be further optimized, we feel that it is of greater
benefit to maintain a component-based architecture that closely
represents the structure of our original UML design artifacts.
Validation
The validation of our approach consisted of three parts. First, there
was the issue of whether our behavioral stereotypes and
corresponding templates could be applied across domains and
projects. This was demonstrated by successfully applying our
process to two case studies, the cruise control system (a portion
of which was shown in the previous sections) and the signal
generator system [2]. Secondly, we performed validation to
determine if the resulting CPN models provided a correct model
of the concurrent software architecture. This was necessary to
validate that our approach would result in an accurate
representation of the original architecture and was by far the
most tedious part of validation, as it required manual inspection
and unit testing of each object and its corresponding CPN
template representation for the two case studies. Finally, after
determining that our template approach satisfied the modeling
requirements for both case studies, we then sought to
demonstrate the analytical capabilities gained from using CPNs
to model concurrent software architectures. The behavioral
analysis addresses both the functional behavior of the concurrent
architecture as well as its performance, as described next. The
detailed analytical results for both case studies are provided in
[2].
4.1
Validating Functional Behavior
For functional analysis, the simulation capabilities of the
DesignCPN tool are used to execute the model over a set of test
cases. These test cases may be black-box tests in which we are
only monitoring the context-level model in terms of input events
and output results or they may be white-box tests in which we
analyze one or more individual object representations. In our
approach, black box test cases were derived from use cases
while white box test cases were derived from object interactions,
object specifications, and statecharts. In each of these cases, the
appropriate inputs for each test case were provided by placing
tokens on the CPN places representing the external actors in the
context model. The CPN model was then executed in the
simulator and observed at the desired points to determine if the
correct output was generated or if the correct logical paths were
chosen.
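Conceptually, each black-box test reduces to seeding the actor places with input tokens, running the simulator, and comparing the observed output tokens with the expected ones; the following Python sketch (with a toy stand-in for the DesignCPN simulator) shows the shape of that check.

    def run_black_box_test(simulate, test_case):
        # test_case: {"inputs": {place: [tokens]}, "expected": {place: [tokens]}}.
        # `simulate` stands in for executing the CPN model in the simulator.
        observed = simulate(test_case["inputs"])
        failures = {place: (expected, observed.get(place))
                    for place, expected in test_case["expected"].items()
                    if observed.get(place) != expected}
        return failures or "pass"

    # Toy simulator: accelerating with the engine on and the brake off changes the throttle.
    def toy_simulate(inputs):
        if ("Accel" in inputs.get("CruiseControlLeverDevice", [])
                and "EngineOn" in inputs.get("EngineDevice", [])
                and "BrakeOff" in inputs.get("BrakeDevice", [])):
            return {"ThrottleDevice": [50]}
        return {}

    case = {"inputs": {"CruiseControlLeverDevice": ["Accel"],
                       "EngineDevice": ["EngineOn"],
                       "BrakeDevice": ["BrakeOff"]},
            "expected": {"ThrottleDevice": [50]}}
    print(run_black_box_test(toy_simulate, case))   # pass
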
Again, consider the cruise control system. Figure 12 illustrates
a black-box simulation in which the driver has selected
"Accelerate" from the cruise control lever (with the engine being
on and the brake being released). Figure 12(a) shows the state
of the system before the simulation run and Figure 12(b)
illustrates the results of accelerating, namely a value being sent
to the throttle. This form of simulation may be applied at as low
or as high a level of abstraction as desired in order to gain
visibility into the desired behavior of the architecture. For
example, one could choose to simply conduct black box testing
by placing input tokens on actor places, executing the
simulation, and then observing the resulting token values on
output actor places. Alternatively, if a more detailed
investigation is desired, the engineer may navigate the CPN
hierarchical construction and observe such characteristics as the
behavior of state changes within a state dependent object's CPN
representation. A detailed analysis of this state-dependent
behavior is provided in [24].
Figure 12. Example Cruise Control Black-Box Simulation
4.2
Validating Performance
In addition to simulation capabilities, the DesignCPN [27] tool
used in this effort also has a very powerful performance tool
[28] that can be employed to analyze performance aspects of the
concurrent software architecture. This tool can be used to
analyze such things as queue backlogs, system throughput, and
end-to-end timing characteristics. As an example of the latter,
we conducted a test to monitor the cruise control system
response times to commands being input from the cruise control
lever. To conduct this analysis, commands were issued to the
cruise control system while the system was in a simulated state
of operation with a speed of 60 miles per hour (100 kph). The
performance tool was used to monitor changes in the throttle
output and compare the time at an observed output change to the
time the original command was issued.
The results from this analysis are shown in Figure 13. From this
figure, we can see that all cruise control commands complete in
less than one second (1000ms) and most complete in less than
500ms. Detailed performance requirements were not provided
for our cruise control case study. However, if this cruise control
system was an actual production system, an engineer could
compare the analysis results against documented performance
requirements to determine if the system in fact satisfies the
necessary performance criteria. By being able to conduct this
form of analysis from the concurrent software design, an
engineer can both improve the reliability of the software
architecture at the design level and correct problems prior to
implementation.
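The timing measurement itself amounts to pairing each issued command with the first subsequent throttle change and differencing the timestamps. A generic sketch of that calculation is given below (our own illustration with hypothetical log entries, not the Design/CPN performance tool).

    def end_to_end_times(command_log, throttle_log):
        # command_log / throttle_log: lists of (timestamp_ms, value), sorted by time.
        # Each command is matched to the first throttle change at or after it was issued.
        times = []
        for issued_at, _command in command_log:
            change = next((t for t, _value in throttle_log if t >= issued_at), None)
            if change is not None:
                times.append(change - issued_at)
        return times

    commands = [(1000, "Accel"), (6000, "Off")]        # hypothetical log entries
    throttle = [(1420, 55), (6310, 0)]
    print(end_to_end_times(commands, throttle))        # [420, 310]
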
Figure 13. Cruise Control End-to-End Timing Analysis
Conclusions and Future Research
The long-term goal of this research effort is to provide an
automated means of translating a UML concurrent software
architecture design into an underlying CPN representation that
can then be used to conduct behavioral analysis with results
communicated in terms of the original UML model. To date, we
have developed a method for systematically translating a UML
software architecture into a CPN representation. This method
employs reusable templates that model the behavior of a set of
objects according to their stereotyped behavioral roles. Each
template provides a consistent interface that allows templates to
be interconnected as components of a larger system, thus
creating the overall CPN representation. The resulting CPN
model enables the analysis of both the functional and
performance behavior of the concurrently executing objects. As
the CPN representation mirrors the structure of the concurrent
software architecture, the results can be readily applied to the
original UML model.
Future research in this area will need to investigate approaches
to facilitate the automated translation from a UML model into a
CPN model that can be read by a tool such as DesignCPN.
Additional research also needs to be conducted to investigate the
scalability of this approach to larger systems, including
distributed applications and providing behavioral templates for
the COMET distributed components [4]. Finally, the use of
state space analysis should be investigated further. Most of the
analysis conducted with this research effort has focused on the
use of simulations for functional analysis and on the
performance tool for performance analysis.
[Figure 13 chart: Command Completion Time (ms), 0-1200, versus Elapsed Time (ms), 0-30000. Figure 12 markings: (a) 1`"BrakeOff", 1`"Accel", 1`"EngineOn"; (b) 1`"BrakeOff", 1`50, 1`"EngineOn".]
State space analysis could also be used to further refine deadlock detection as well as
to analyze system-wide state changes.
References
[1]
J. Rumbaugh, I. Jacobson, and G. Booch, The Unified
Modeling Language Reference Manual, 2nd Edition.
Addison-Wesley, 2005.
[2]
R. G. Pettit, Analyzing Dynamic Behavior of Concurrent
Object-Oriented Software Designs, Ph.D., School of IT&E,
George Mason University, 2003.
[3]
K. Jensen, Coloured Petri Nets: Basic Concepts, Analysis
Methods, and Practical Use, vol. I-III. Berlin, Germany:
Springer-Verlag, 1997.
[4]
H. Gomaa, Designing Concurrent, Distributed, and Real-Time
Applications with UML, Addison-Wesley, 2000.
[5]
M. Baldassari, G. Bruno, and A. Castella, "PROTOB: an
Object-Oriented CASE Tool for Modeling and Prototyping
Distributed Systems," Software-Practice & Experience,
v.21, pp. 823-44, 1991.
[6]
B. Mikolajczak and C. A. Sefranek, "Integrating Object
Oriented Design with Concurrency Using Petri Nets,"
IEEE International Conference on Systems, Man and
Cybernetics, Piscataway, NJ, USA, 2001.
[7]
R. Aihua, "An Integrated Development Environment for
Concurrent Software Developing Based on Object Oriented
Petri Nets," Fourth International Conference/Exhibition on
High Performance Computing in the Asia-Pacific Region.,
Los Alamitos, CA, USA, 2000.
[8]
X. He and Y. Ding, "Object Orientation in Hierarchical
Predicate Transition Nets," Concurrent Object-Oriented
Programming and Petri Nets. Advances in Petri Nets,
Berlin: Springer-Verlag, 2001, pp. 196-215.
[9]
O. Biberstein, D. Buchs, and N. Guelfi, "Object-Oriented
Nets with Algebraic Specifications: The CO-OPN/2
Formalism," Concurrent Object-Oriented Programming
and Petri Nets. Advances in Petri Nets, Berlin: Springer-Verlag
, 2001, pp. 73-130.
[10]
S. Chachkov and D. Buchs, "From Formal Specifications
to Ready-to-Use Software Components: The Concurrent
Object Oriented Petri Net Approach," Second International
Conference on Application of Concurrency to System
Design, Los Alamitos, CA, USA, 2001.
[11]
A. Camurri, P. Franchi, and M. Vitale, "Extending High-Level
Petri Nets for Object-Oriented Design," IEEE
International Conference on Systems, Man and
Cybernetics, New York, NY, USA, 1992.
[12]
J. E. Hong and D. H. Bae, "Software Modeling and
Analysis Using a Hierarchical Object-Oriented Petri Net,"
Information Sciences, v.130, pp. 133-64, 2000.
[13]
D. Azzopardi and D. J. Holding, "Petri Nets and OMT for
Modeling and Analysis of DEDS," Control Engineering
Practices, v.5, pp. 1407-1415, 1997.
[14]
C. Lakos, "Object Oriented Modeling With Object Petri
Nets," Concurrent Object-Oriented Programming and
Petri Nets. Advances in Petri Nets, Berlin: Springer-Verlag,
2001, pp. 1-37.
[15]
C. Maier and D. Moldt, "Object Coloured Petri Nets- A
Formal Technique for Object Oriented Modelling,"
Concurrent Object-Oriented Programming and Petri Nets.
Advances in Petri Nets, Berlin: Springer-Verlag, 2001, pp.
406-27.
[16]
J. A. Saldhana, S. M. Shatz, and H. Zhaoxia,
"Formalization of Object Behavior and Interactions from
UML Models," International Journal of Software
Engineering & Knowledge Engineering, v.11, pp. 643-73,
2001.
[17]
L. Baresi and M. Pezze, "On Formalizing UML with High-Level
Petri Nets," Concurrent Object-Oriented
Programming and Petri Nets. Advances in Petri Nets,
Berlin: Springer-Verlag, 2001, pp. 276-304.
[18]
K. M. Hansen, "Towards a Coloured Petri Net Profile for
the Unified Modeling" Centre for Object Technology,
Aarhus, Denmark, Technical Report COT/2-52-V0.1
(DRAFT), 2001.
[19]
J. B. Jørgensen, "Coloured Petri Nets in UML-Based
Software Development - Designing Middleware for
Pervasive Healthcare," CPN '02, Aarhus, Denmark, 2002.
[20]
B. Bordbar, L. Giacomini, and D. J. Holding, "UML and
Petri Nets for Design and Analysis of Distributed Systems,"
International Conference on Control Applications,
Anchorage, Alaska, USA, 2000.
[21]
R. G. Pettit and H. Gomaa, "Integrating Petri Nets with
Design Methods for Concurrent and Real-Time Systems,"
Real Time Applications Workshop, Montreal, Canada,
1996.
[22]
R. G. Pettit, "Modeling Object-Oriented Behavior Using
Petri Nets," OOPSLA Workshop on Behavioral
Specification, 1999.
[23]
R. G. Pettit and H. Gomaa, "Validation of Dynamic
Behavior in UML Using Colored Petri Nets," UML 2000,
York, England, 2000.
[24]
R. G. Pettit and H. Gomaa, "Modeling State-Dependent
Objects Using Colored Petri Nets," CPN 01 Workshop on
Modeling of Objects, Components, and Agents, Aarhus,
Denmark, 2001.
[25]
R.G. Pettit and H. Gomaa, "Modeling Behavioral Patterns
of Concurrent Software Architectures Using Petri Nets."
Working IEEE/IFIP Conference on Software Architectures,
Oslo, Norway, 2004.
[26]
R. David and H. Alla, "Petri Nets for Modeling of Dynamic
Systems: A Survey." Automatica v.30(2). Pp. 175-202.
1994.
[27]
K. Jensen, "DesignCPN," 4.0 ed. Aarhus, Denmark:
University of Aarhus, 1999.
[28]
B. Lindstrom and L. Wells, "Design/CPN Performance
Tool Manual," University of Aarhus, Aarhus, Denmark
September 1999.
| Software Architecture;Behavioral Design Patterns;Colored Petri Nets;COMET |
14 | A New Approach to Intranet Search Based on Information Extraction | This paper is concerned with `intranet search'. By intranet search, we mean searching for information on an intranet within an organization. We have found that search needs on an intranet can be categorized into types, through an analysis of survey results and an analysis of search log data. The types include searching for definitions, persons, experts, and homepages. Traditional information retrieval only focuses on search of relevant documents, but not on search of special types of information. We propose a new approach to intranet search in which we search for information in each of the special types, in addition to the traditional relevance search. Information extraction technologies can play key roles in such kind of `search by type' approach, because we must first extract from the documents the necessary information in each type. We have developed an intranet search system called `Information Desk'. In the system, we try to address the most important types of search first - finding term definitions, homepages of groups or topics, employees' personal information and experts on topics. For each type of search, we use information extraction technologies to extract, fuse, and summarize information in advance. The system is in operation on the intranet of Microsoft and receives accesses from about 500 employees per month. Feedbacks from users and system logs show that users consider the approach useful and the system can really help people to find information. This paper describes the architecture, features, component technologies, and evaluation results of the system. | INTRODUCTION
Internet search has made significant progress in recent years. In
contrast, intranet search does not seem to be so successful. The
IDC white paper entitled "The high cost of not finding
information" [13] reports that information workers spend from
15% to 35% of their work time on searching for information and
40% of information workers complain that they cannot find the
information they need to do their jobs on their company intranets.
Many commercial systems [35, 36, 37, 38, 39] have been
developed for intranet search. However, most of them view
intranet search as a problem of conventional relevance search. In
relevance search, when a user types a query, the system returns a
list of ranked documents with the most relevant documents on the
top.
Relevance search can only serve average needs well. It cannot,
however, help users to find information of a specific type, e.g.,
definitions of a term and experts on a topic. The characteristic of
intranet search does not seem to be sufficiently leveraged in the
commercial systems.
In this paper, we try to address intranet search in a novel approach.
We assume that the needs of information access on intranets can
be categorized into searches for information in different types. An
analysis on search log data on the intranet of Microsoft and an
analysis on the results of a survey conducted at Microsoft have
verified the correctness of the assumption.
Our proposal then is to take a strategy of `divide-and-conquer'.
We first figure out the most important types of search, e.g.,
definition search, expert search. For each type, we employ
information extraction technologies to extract, fuse, and
summarize search results in advance. Finally, we combine all the
types of searches together, including the traditional relevance
search, in a unified system. In this paper, we refer to the approach
as `search by type'. Search by type can also be viewed as a
simplified version of Question Answering, adapted to intranet.
The advantage of the new search approach is that it can help
people find the types of information which relevance search
cannot easily find. The approach is particularly reasonable on
intranets, because in such space users are information workers and
search needs are business oriented.
We have developed a system based on the approach, which is
called `Information Desk'. Information Desk can help users to
find term definitions, homepages of groups or topics, employees'
personal information and experts on topics, on their company
intranets.
The system has been put into practical use since November 24th,
2004. Each month, about 500 Microsoft employees access the
system. Both the results of an analysis on a survey and the
results of an analysis on system log show that the features of
definition search and homepage search are really helpful. The
results also show that search by type is necessary at enterprise.
RELATED WORK
The needs for search on intranets are huge. It is estimated that
enterprise intranets have data collections (both structured and
unstructured) tens or even hundreds of times larger than the internet.
As explained above, however, many users are not satisfied with
the current intranet search systems. How to help people access
information on intranet is a big challenge in information retrieval.
Much effort has been made recently on solutions both in industry
and in academia.
Many commercial systems [35, 36, 37, 38, 39] dedicated to
intranet search have been developed. Most of the systems view
intranet search as a problem of conventional relevance search.
In the research community, ground designs, fundamental
approaches, and evaluation methodologies on intranet search have
been proposed.
Hawking et al [17] made ten suggestions on how to conduct high
quality intranet search. Fagin et al [12] made a comparison
between internet search and intranet search. Recently, Hawking
[16] conducted a survey on previous work and made an analysis
on the intranet search problem. Seven open problems on intranet
search were raised in their paper.
Chen et al [3] developed a system named `Cha-Cha', which can
organize intranet search results in a novel way such that the
underlying structure of the intranet is reflected. Fagin et al [12]
proposed a new ranking method for intranet search, which
combine various ranking heuristics. Mattox et al [25] and
Craswell et al [7] addressed the issue of expert finding on a
company intranet. They developed methods that can automatically
identify experts in an area using documents on the intranet.
Stenmark [30] proposed a method for analyzing and evaluating
intranet search tools.
2.2 Question Answering
Question Answering (QA) particularly that in TREC
(http://trec.nist.gov/) is an application in which users type
questions in natural language and the system returns short and
usually single answers to the questions.
When the answer is a personal name, a time expression, or a place
name, the QA task is called `Factoid QA'. Many QA systems have
been developed, [2, 4, 18, 20, 22, 27]. Factoid QA usually
consists of the following steps: question type identification,
question expansion, passage retrieval, answer ranking, and answer
creation.
TREC also has a task of `Definitional QA'. In the task, "what is
<term>" and "who is <person>" questions are answered in a
single combined text [1, 11, 15, 33, 34]. A typical system consists
of question type identification, document retrieval, key sentence
matching, kernel fact finding, kernel fact ranking, and answer
generation.
OUR APPROACH TO INTRANET SEARCH
Search is nothing but collecting information based on users'
information access requests. If we can correctly gather
information on the basis of users' requests, then the problem is
solved. Current intranet search is not designed along this
direction. Relevance search can help create a list of ranked
documents that serve only average needs well. The limitation of
this approach is clear. That is, it cannot help users to find
information of a specific type, e.g., definitions of a term. On the
other hand, Question Answering (QA) is an ideal form for
information access. When a user inputs a natural language
question or a query (a combination of keywords) as a description
of his search need, it is ideal to have the machine `understand' the
input and return only the necessary information based on the
request. However, there is still a lot of research work to do before
putting QA into practical use. In the short term, we need to consider
adopting a different approach.
One question arises here: can we take a hybrid approach?
Specifically, on one hand, we adopt the traditional approach for
search, and on the other hand, we realize some of the most
frequently asked types of search with QA. Finally, we integrate
them in a single system. For the QA part, we can employ
information extraction technologies to extract, fuse, and
summarize the results in advance. This is exactly the proposal we
make to intranet search.
Can we categorize users' search needs easily? We have found that
we can create a hierarchy of search needs for intranet search, as
will be explained in section 4.
On intranets, users are information workers and their motivations
for conducting search are business oriented. We think, therefore,
that our approach may be relatively easily realized on intranets
first. (There is no reason why we cannot apply the same approach
to the internet, however.)
To verify the correctness of the proposal, we have developed a
system and made it available internally at Microsoft. The system
called Information Desk is in operation on the intranet of
Microsoft and receives accesses from about 500 employees per
month.
At Information Desk, we try to solve the most important types of
search first - find term definitions, homepages of groups or topics,
experts on topics, and employees' personal information. We are
also trying to increase the number of search types, and integrate
them with the conventional relevance search. We will explain the
working of Information Desk in section 5.
ANALYSIS OF SEARCH NEEDS
In this section, we describe our analyses on intranet search needs
using search query logs and survey results.
4.1 Categorization of Search Needs
In order to understand the underlying needs of search queries, we
would need to ask the users about their search intentions.
Obviously, this is not feasible. We conducted an analysis by using
query log data. Here query log data means the records on queries
typed by users, and documents clicked by the users after sending
the queries.
Our work was inspired by that of Rose and Levinson [28]. In
their work, they categorized the search needs of users on the
internet by analyzing search query logs.
We tried to understand users' search needs on intranet by
identifying and organizing a manageable number of categories of
the needs. The categories encompass the majority of actual
requests users may have when conducting search on an intranet.
We used a sample of queries from the search engine of the
intranet of Microsoft. First, we brainstormed a number of
categories, based on our own experiences and previous work.
Then, we modified the categories, including adding, deleting, and
merging categories, by assigning queries to the categories.
Given a query, we used the following information to deduce the
underlying search need:
the query itself
the documents returned by the search engine
the documents clicked on by the user
For example, if a user typed a keyword of `.net' and clicked a
homepage of .net, then we judged that the user was looking for a
homepage of .net.
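As an illustration only (our own toy heuristic with a hypothetical internal URL, not the actual annotation procedure used in the study), the kind of deduction applied in the `.net' example could be coded roughly as follows.

    def guess_search_need(query, clicked_urls):
        # Very rough heuristic: a query whose clicked page looks like a portal or home
        # page suggests a navigational (homepage) need; question-like queries are
        # informational; everything else falls back to "tell me about".
        q = query.lower().strip()
        for url in clicked_urls:
            u = url.lower()
            if q.replace(" ", "") in u and any(h in u for h in ("home", "portal", "default.aspx")):
                return "navigational (homepage)"
        if q.startswith(("what is", "who is", "how to")):
            return "informational (" + " ".join(q.split()[:2]) + ")"
        return "informational (tell me about)"

    print(guess_search_need(".net", ["http://msw/.net/home.aspx"]))   # navigational (homepage)
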
As we repeated the process, we gradually reached the conclusion
that search needs on intranet can be categorized as a hierarchical
structure shown in Figure 1. In fact, the top level of the hierarchy
resembles that in the taxonomy proposed by Rose and Levinson
for internet [28]. However, the second level differs. On intranet,
users' search needs are less diverse than those on internet, because
the users are information workers and their motivations for
conducting search are business oriented.
There is a special need called `tell me about' here. It is similar
to the traditional relevance search. Many search needs are by nature
difficult to categorize, for example, "I want to find documents
related to both .net and SQL Server". We can put them into this
category.
We think that the search needs are not Microsoft specific; one can
imagine that similar needs exist in other companies as well.
Informational: When (time), Where (place), Why (reason), What is (definition), Who knows about (expert), Who is (person), How to (manual), Tell me about (relevance)
Navigational: Person, Product, Technology, Services, Group
Transactional
Figure 1. Categories of search needs
4.2 Analysis on Search Needs by Query Log
We have randomly selected 200 unique queries and tried to assign
the queries to the categories of search needs described above.
Table 1 shows the distribution. We have also picked up the top
350 frequently submitted queries and assigned them to the
categories. Table 2 shows the distribution. (There is no result for
`why', `what is', and `who knows about', because it is nearly
impossible to guess users' search intensions by only looking at
query logs.)
For random queries, informational needs are dominating. For high
frequency queries, navigational needs are dominating. The most
important types for random queries are relevance search, personal
information search, and manual search. The most important types
for high frequency queries are home page search and relevance
search.
4.3 Analysis on Search Needs by Survey
We can use query log data to analyze users' search needs, as
described above. However, there are two shortcomings in the
approach. First, sometimes it is difficult to guess the search
intentions of users by only looking at query logs. This is
especially true for the categories of `why' and `what'. Usually it is
hard to distinguish them from `relevance search'. Second, query
log data cannot reveal users' potential search needs. For example,
many employees report that they have needs of searching for
experts on specific topics. However, it is difficult to find expert
searches from query log at a conventional search engine, because
users understand that such search is not supported and they do not
conduct the search.
To alleviate the negative effect, we have conducted another
analysis through a survey. Although a survey also has limitation
(i.e., it only asks people to answer pre-defined questions and thus
can be biased), it can help to understand the problem from a
different perspective.
Table 1. Distribution of search needs for random queries
Category of Search Needs  Percentage
When 0.02
Where 0.02
Why NA
What is NA
Who knows about NA
Who is 0.23
How to 0.105
Tell me about 0.46
Informational total 0.835
Groups 0.03
Persons 0.005
Products 0.02
Technologies 0.02
Services 0.06
Navigational total 0.135
Transactional 0.025
Other 0.005
Table 2. Distribution of search needs for high frequency queries
Category of Search Needs  Relative Prevalence
When 0.0057
Where 0.0143
Why NA
What is NA
Who knows about NA
Who is 0.0314
How to 0.0429
Tell me about 0.2143
Informational total 0.3086
Groups 0.0571
Persons 0.0057
Products 0.26
Technologies 0.0829
Services 0.2371
Navigational total 0.6428
Transactional 0.0086
Other 0.04
I have experiences of conducting search at Microsoft intranet to look for the web sites (or homepages) of (multiple choice):
technologies 74%
products 74%
services 68%
projects 68%
groups 60%
persons 42%
none of the above 11%
I have experiences of conducting search at Microsoft intranet in which the needs can be translated into questions like (multiple choice):
`what is' - e.g., "what is blaster" 77%
`how to' - e.g., "how to submit expense report" 54%
`where' - e.g., "where is the company store" 51%
`who knows about' - e.g., "who knows about data mining" 51%
`who is' - e.g., "who is Rick Rashid" 45%
`when' - e.g., "when is TechFest'05" 42%
`why' - e.g., "why do Windows NT device drivers contain trusted code" 28%
none of the above 14%
I have experiences of conducting search at Microsoft intranet in order to (multiple choice):
download a software, a document, or a picture, e.g., "getting MSN logo" 71%
make use of a service, e.g., "getting a serial number of Windows" 53%
none of the above 18%
Figure 2. Survey results on search needs
In the survey, we asked questions regarding search needs
at the enterprise. 35 Microsoft employees took part in the
survey. Figure 2 shows the questions and the corresponding
results.
We see from the answers that definition search, manual search,
expert finding, personal information search, and time schedule
search are requested by the users. Homepage finding on
technologies and products is important as well. Search for a
download site is also a common request.
[Figure 3 screenshots: (1) `what is' results for "Longhorn", listing extracted definitions with source URLs; (2) `where is homepage of' results for "Office", listing related portal sites; (3) `who knows about' results for "data mining", listing persons (title, group, phone) with their associated documents; (4) `who is' results for "Bill Gates", showing profile information, authored documents, and the top 10 terms appearing in those documents.]
Figure 3: Information Desk system
INFORMATION DESK
Currently Information Desk provides four types of search. The
four types are:
1. `what is' search of definitions and acronyms. Given a term,
it returns a list of definitions of the term. Given an acronym, it
returns a list of possible expansions of the acronym.
2. `who is' search of employees' personal information. Given
the name of a person, it returns his/her profile information,
authored documents and associated key terms.
3. `where is homepage of' search of homepages. Given the
name of a group, a product, or a technology, it returns a list of
its related home pages.
4. `who knows about' search of experts. Given a term on a
technology or a product, it returns a list of persons who might
be experts on the technology or the product.
Figure 4. Workflow of Information Desk: a crawler and extractor process the MS Web intranet and feed a web server with term-definition/acronym data for `what is', person-document-key term data for `who is', term-person-document data for `who knows about', and term-homepage data for `where is homepage of'.
There are check boxes on the UI, and each represents one search
type. In search, users can designate search types by checking the
corresponding boxes and then submit queries. By default, all the
boxes are checked.
For example, when users type `longhorn' with the `what is' box
checked, they get a list of definitions of `Longhorn' (the first
snapshot in figure 3). Users can also search for homepages (team
web sites) related to `Office', using the `where is homepage'
feature (the second snapshot in figure 3). Users can search for
experts on, for example, `data mining' by asking `who knows
about data mining' (the third snapshot in figure 3). Users can also
get a list of documents that are automatically identified as being
authored by `Bill Gates', for example, with the `who is' feature
(the last snapshot in figure 3). The top ten key terms found in his
documents are also given.
Links to the original documents, from which the information has
been extracted, are also available on the search result UIs.
5.2 Technologies
5.2.1 Architecture
Information Desk makes use of information extraction technologies to support the search-by-type features. Specifically, it automatically extracts document metadata and domain-specific knowledge from a web site. The domain-specific knowledge includes definitions, acronyms, and experts. The document metadata includes titles, authors, key terms, and homepages. Documents are in the form of Word, PowerPoint, or HTML. Information Desk stores all the data in Microsoft SQL Server and provides search using a web interface.
5.2.2 `What is'
5.2.3 `Who is'
The precision and recall for title extraction from PowerPoint are 0.907 and 0.951, respectively.
Metadata extraction has been intensively studied. For instance, Han et al. [14] proposed a method for metadata extraction from research papers. They considered the problem as one of classification based on SVM. They mainly used linguistic information as features. To the best of our knowledge, no previous work has been done on metadata extraction from general documents. We report our title extraction work in detail in [19].
The `who is' feature can help find documents authored by a person even when they exist on different team web sites. Information extraction (specifically metadata extraction) makes this aggregation of information possible.
5.2.4 `Who knows about'
The basic idea for the feature is that if a person has authored many
documents on an issue (term), then it is very likely that he/she is an
expert on the issue, or if the person's name co-occurs many times with the issue, then it is likely that he/she is an expert on the issue.
As described above, we can extract titles, authors, and key terms
from all the documents. In this way, we know how many times each
person is associated with each topic in the extracted titles and in the
extracted key terms. We also go through all the documents and see
how many times each person's name co-occurs with each topic in
text segments within a pre-determined window size.
In search, we use the three types of information: topic in title, topic
in key term, and topic in text segment to rank persons, five persons
for each type. We rank persons with a heuristic method and return
the list of ranked persons. A person who has several documents with
titles containing the topic will be ranked higher than a person whose
name co-occurs with the topic in many documents.
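A minimal sketch of how such a heuristic ranking could be implemented is shown below. The lexicographic ordering (titles first, then key terms, then co-occurrence) mirrors the behaviour described above, but the data layout and the concrete ordering rule are illustrative assumptions, not the actual Information Desk implementation.

```python
# Hypothetical sketch of the 'who knows about' ranking heuristic.
# Each *_hits dict maps person -> number of documents in which the topic
# appears in that person's titles / key terms / nearby text segments.

def rank_experts(title_hits, keyterm_hits, cooccur_hits, top_n=5):
    people = set(title_hits) | set(keyterm_hits) | set(cooccur_hits)

    def score(person):
        # A person with topic-bearing titles always outranks one who merely
        # co-occurs with the topic, however often.
        return (title_hits.get(person, 0),
                keyterm_hits.get(person, 0),
                cooccur_hits.get(person, 0))

    return sorted(people, key=score, reverse=True)[:top_n]

print(rank_experts(title_hits={"Alice": 2},
                   keyterm_hits={"Bob": 3},
                   cooccur_hits={"Carol": 8}))
# -> ['Alice', 'Bob', 'Carol']
```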
It appears that the results of the feature largely depend on the size of the document collection we crawl. Users' feedback on the results shows that the results are sometimes very accurate and sometimes not (due to a lack of information).
Craswell et al. developed a system called `P@NOPTIC', which can automatically find experts using documents on an intranet [7]. The system treated documents as plain text and did not utilize document metadata as we do at Information Desk.
5.2.5 `Where is homepage of'
We identify homepages (team web sites) using several rules. Most of
the homepages at the intranet of Microsoft are created by
SharePoint, a product of Microsoft. From SharePoint, we can obtain
a property of each page called `ContentClass'. It tells exactly
whether a web page corresponds to a homepage or a team site. So
we know it is a homepage (obviously, this does not apply in
general). Next we use several patterns to pull out titles from the
homepages. The precision of home page identification is nearly
100%.
In search, we rank the discovered home pages related to a query
term using the URL lengths of the home pages. A home page with a
shorter URL will be ranked higher.
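The URL-length rule can be expressed in a few lines; the snippet below is only an illustrative sketch with made-up URLs, not the deployed ranking code.

```python
# Illustrative sketch: rank candidate homepages for a query by URL length,
# shorter URLs first (they tend to correspond to top-level team sites).

def rank_homepages(candidates):
    """candidates: list of (title, url) pairs already matched to the query."""
    return sorted(candidates, key=lambda item: len(item[1]))

pages = [("Office Portal Archive", "http://example/sites/office/portal/archive"),
         ("Office Site", "http://example/office")]
print(rank_homepages(pages))  # the shorter URL is ranked first
```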
TREC has a task called `home/named page finding' [8, 9], which is
to find home pages talking about a topic. Many methods have been
developed for pursuing the task [5, 6, 26, 29]. Since we can identify
homepages by using special properties on our domain, we do not
consider employing a similar method.
EVALUATION
Usually it is hard to conduct evaluation on a practical system. We
evaluated the usefulness of Information Desk by conducting a
survey and by recording system logs.
We have found from analysis results that the `what is' and `where is
homepage of' features are very useful. The `who is' feature works
well, but the `who knows about' feature still needs improvements.
6.1 Survey Result Analysis
The survey described in section 4.3 also includes feedback on Information Desk.
Figure 6 shows a question on the usefulness of the features and a summary of the answers. We see that the features `where is homepage of' and `what is' are regarded as useful by the responders in the survey.
Figure 7 shows a question on new features and a summary of the answers. We see that the users want to use the features of `how to', `when', `where' and `why' in the future. This also supports our claim on intranet search made in section 4.
Figure 8 shows a question on purposes of use and a digest of the results. About 50% of the responders really want to use Information Desk to search for information.
There is also an open-ended question asking people to make
comments freely. Figure 9 gives some typical answers from the
responders. The first and second answers are very positive, while the
third and fourth point out the necessity of increasing the coverage of
the system.
Which feature of Information Desk has helped you in finding information?
`where is homepage of' - finding homepages: 54%
`what is' - finding definitions/acronyms: 25%
`who is' - finding information about people: 18%
`who knows about' - finding experts: 3%
Figure 6. Users' evaluation on Information Desk
What kind of new feature do you want to use at Information Desk? (multiple choice)
`how to' - e.g., "how to activate Windows": 57%
`when' - e.g., "when is Yukon RTM": 57%
`where' - e.g., "where can I find an ATM": 39%
`why' - e.g., "why doesn't my printer work": 28%
others: 9%
Figure 7. New features expected by users
I visited Information Desk today to
conduct testing on Information Desk: 54%
search for information related to my work: 46%
Figure 8. Motivation of using Information Desk
Please provide any additional comments, thanks!
This is a terrific tool! Including `how to' and `when'
capabilities will put this in the `can't live without it'
category.
Extremely successful searching so far! Very nice product
with great potential.
I would like to see more `Microsoftese' definitions. There is
a lot of cultural/tribal knowledge here that is not explained
anywhere.
Typing in my team our website doesn't come up in the
results, is there any way we can provide content for the
search tool e.g., out group sharepoint URL?
...
Figure 9. Typical user comments to Information Desk
6.2 System Log Analysis
We have kept a log during the running of Information Desk. The log includes user IP addresses, queries and clicked documents (recall that links to the original documents, from which information has been extracted, are given in search). The log data was collected from 1,303 unique users during the period from November 26th, 2004 to February 22nd, 2005. The users were
Microsoft employees.
In the log, there are 9,076 query submission records. The records
include 4,384 unique query terms. About 40% of the queries are
related to the `what is' feature, 29% related to `where is homepage
of', 30% related to `who knows about' and 22% related to `who
is'. A query can be related to more than one feature.
In the log, there are 2,316 clicks on documents after query
submissions. The numbers of clicks for the `what is', `where is
homepage of', `who knows about', and `who is' features are 694,
1041, 200 and 372, respectively. Note that for `what is', `where is
home page of', and `who knows about' we conduct ranking on
retrieved information. The top ranked results are considered to be
the best. If a user has clicked a top ranked document, then it means that he is interested in the document, and thus it is very likely he has found the information he is looking for. Thus a feature for which clicked documents have a lower (i.e., better) average rank performs better than one for which they have a higher average rank. We used the average rank of clicked documents to evaluate the performance of the features. The average ranks of clicks for `what is', `where is homepage of' and `who knows about' are 2.4, 1.4 and 4.7, respectively. The results indicate that for the first two features, users can usually find the information they look for among the top three answers. Thus it seems safe to say that the system has achieved practically acceptable performance for these two features. As for `who is', ranking of a person's documents does not seem to be necessary, and the performance should be evaluated in a different way (for example, by the precision and recall of metadata extraction, as we have already reported in section 5).
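As an illustrative sketch (the click-record format below is hypothetical, not the actual log schema), the per-feature average clicked rank can be computed as follows.

```python
# Hypothetical sketch: average rank of clicked results per feature,
# from click records of the form (feature, rank_of_clicked_result).
from collections import defaultdict

def average_clicked_rank(click_records):
    sums, counts = defaultdict(float), defaultdict(int)
    for feature, rank in click_records:
        sums[feature] += rank
        counts[feature] += 1
    return {f: sums[f] / counts[f] for f in sums}

clicks = [("what is", 1), ("what is", 4), ("where is homepage of", 1),
          ("who knows about", 5)]
print(average_clicked_rank(clicks))
```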
CONCLUSION
In this paper, we have investigated the problem of intranet search
using information extraction.
Through an analysis of survey results and an analysis of
search log data, we have found that search needs on intranet
can be categorized into a hierarchy.
Based on the finding, we propose a new approach to intranet
search in which we conduct search for each special type of
information.
We have developed a system called `Information Desk',
based on the idea. In Information Desk, we provide search on
four types of information - finding term definitions,
homepages of groups or topics, employees' personal
information and experts on topics. Information Desk has
been deployed to the intranet of Microsoft and has received
accesses from about 500 employees per month. Feedback from users shows that the proposed approach is effective and the system can really help employees find information.
For each type of search, information extraction technologies
have been used to extract, fuse, and summarize information
in advance. High performance component technologies for
the mining have been developed.
As future work, we plan to increase the number of search types
and combine them with conventional relevance search.
ACKNOWLEDGMENTS
We thank Jin Jiang, Ming Zhou, Avi Shmueli, Kyle Peltonen,
Drew DeBruyne, Lauri Ellis, Mark Swenson, and Mark Davies for their support of the project.
REFERENCES
[1] S. Blair-Goldensohn, K.R. McKeown, A.H. Schlaikjer. A Hybrid Approach for QA Track Definitional Questions. In Proc. of Twelfth Annual Text Retrieval Conference (TREC-12), NIST, Nov., 2003.
[2] E. Brill, S. Dumais, and M. Banko, An Analysis of the
AskMSR Question-Answering System,
EMNLP 2002
[3] M. Chen, A. Hearst, A. Marti, J. Hong, and J. Lin, Cha-Cha:
A System for Organizing Intranet Results. Proceedings of the
2nd USENIX Symposium on Internet Technologies and
Systems. Boulder, CO. Oct. 1999.
[4] C. L. A. Clarke, G. V. Cormack, T. R. Lynam, C. M. Li, and
G. L. McLearn, Web Reinforced Question Answering
(MultiText Experiments for TREC 2001). TREC 2001
[5] N. Craswell, D. Hawking, and S.E. Robertson. Effective site
finding using link anchor information. In Proc. of the 24th
annual international ACM SIGIR conference on research
and development in information retrieval, pages 250--257,
2001.
[6] N. Craswell, D. Hawking, and T. Upstill. TREC12 Web and
Interactive Tracks at CSIRO. In TREC12 Proceedings, 2004.
[7] N. Craswell, D. Hawking, A. M. Vercoustre, and P. Wilkins. P@noptic expert: Searching for experts not just for documents. Poster Proceedings of AusWeb'01, 2001b. ausweb.scu.edu.au/aw01/papers/edited/vercoustre/paper.htm.
[8] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu.
Overview of the TREC-2003 Web Track. In NIST Special
Publication: 500-255, The Twelfth Text REtrieval
Conference (TREC 2003), Gaithersburg, MD, 2003.
[9] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu. Task
Descriptions: Web Track 2003. In TREC12 Proceedings,
2004.
[10] H. Cui, M-Y. Kan, and T-S. Chua. Unsupervised Learning of
Soft Patterns for Definitional Question Answering,
Proceedings of the Thirteenth World Wide Web conference
(WWW 2004), New York, May 17-22, 2004.
[11] A. Echihabi, U.Hermjakob, E. Hovy, D. Marcu, E. Melz, D.
Ravichandran. Multiple-Engine Question Answering in
TextMap. In Proc. of Twelfth Annual Text Retrieval
Conference (TREC-12), NIST, Nov., 2003.
[12] R. Fagin, R. Kumar, K. S. McCurley, J. Novak, D.
Sivakumar, J. A. Tomlin, and D. P. Williamson. Searching
the workplace web. Proc. 12th World Wide Web Conference,
Budapest, 2003.
[13] S. Feldman and C. Sherman. The high cost of not finding
information. Technical Report #29127, IDC, April 2003.
[14] H. Han, C. L. Giles, E. Manavoglu, H. Zha, Z. Zhang, and E.
A. Fox. Automatic Document Metadata Extraction using
Support Vector Machines. In Proceedings of the third
ACM/IEEE-CS joint conference on Digital libraries, 2003
[15] S. Harabagiu, D. Moldovan, C. Clark, M. Bowden, J.
Williams, J. Bensley. Answer Mining by Combining
Extraction Techniques with Abductive Reasoning. In Proc.
of Twelfth Annual Text Retrieval Conference (TREC-12),
NIST, Nov., 2003.
[16] D. Hawking. Challenges in Intranet search. Proceedings of
the fifteenth conference on Australasian database. Dunedin,
New Zealand, 2004.
[17] D. Hawking, N. Craswell, F. Crimmins, and T. Upstill.
Intranet search: What works and what doesn't. Proceedings
of the Infonortics Search Engines Meeting, San Francisco,
April 2002.
[18] E. Hovy, L. Gerber, U. Hermjakob, M. Junk, and C. Y. Lin.
Question Answering in Webclopedia. TREC 2000
[19] Y. Hu, H. Li, Y. Cao, D. Meyerzon, and Q. Zheng.
Automatic Extraction of Titles from General Documents
using Machine Learning. To appear at Proc. of Joint
Conference on Digital Libraries (JCDL), 2005. Denver,
Colorado, USA. 2005.
[20] A. Ittycheriah and S. Roukos, IBM's Statistical Question
Answering System-TREC 11. TREC 2002
[21] J. Klavans and S. Muresan. DEFINDER: Rule-Based
Methods for the Extraction of Medical Terminology and
their Associated Definitions from On-line Text. In
Proceedings of AMIA Symposium 2000.
[22] C. C. T. Kwok, O. Etzioni, and D. S. Weld, Scaling question
answering to the Web. WWW-2001: 150-161
[23] Y. Li, H Zaragoza, R Herbrich, J Shawe-Taylor, and J. S.
Kandola. The Perceptron Algorithm with Uneven Margins.
in Proceedings of ICML'02.
[24] B. Liu, C. W. Chin, and H. T. Ng. Mining Topic-Specific
Concepts and Definitions on the Web. In Proceedings of the
twelfth international World Wide Web conference (WWW-2003
), 20-24 May 2003, Budapest, HUNGARY.
[25] D. Mattox, M. Maybury and D. Morey. Enterprise Expert and Knowledge Discovery. Proceedings of the HCI International '99 (the 8th International Conference on Human-Computer Interaction) on Human-Computer Interaction: Communication, Cooperation, and Application Design, Volume 2, 1999.
[26] P. Ogilvie and J. Callan. Combining Structural Information
and the Use of Priors in Mixed Named-Page and Homepage
Finding. In TREC12 Proceedings, 2004.
[27] D. R. Radev, W. Fan, H. Qi, H. Wu, and A. Grewal.
Probabilistic question answering on the web. WWW 2002:
408-419
[28] D. E. Rose and D. Levinson. Understanding user goals in
web search. Proceedings of the 13th international World
Wide Web conference on Alternate track papers & posters,
2004 New York, USA.
[29] J. Savoy, Y. Rasolofo, and L. Perret. Report on the TREC-2003 Experiment: Genomic and Web Searches. In TREC12 Proceedings, 2004.
[30] D. Stenmark. A Methodology for Intranet Search Engine Evaluations. Proceedings of IRIS22, Department of CS/IS, University of Jyväskylä, Finland, August 1999.
[31] V. N. Vapnik. The Nature of Statistical Learning Theory.
Springer, 1995.
[32] J. Xu, Y. Cao, H. Li, and M. Zhao. Ranking Definitions with Supervised Learning Methods. In Proc. of 14th International World Wide Web Conference (WWW05), Industrial and Practical Experience Track, Chiba, Japan, pp. 811-819, 2005.
[33] J. Xu, A. Licuanan, R. Weischedel. TREC 2003 QA at BBN: Answering Definitional Questions. In Proc. of 12th Annual Text Retrieval Conference (TREC-12), NIST, Nov., 2003.
[34] H. Yang, H. Cui, M. Maslennikov, L. Qiu, M-Y. Kan, and TS
. Chua, QUALIFIER in TREC-12 QA Main Task. TREC
2003: 480-488
[35] Intellectual capital management products. Verity,
http://www.verity.com/
[36] IDOL server. Autonomy,
http://www.autonomy.com/content/home/
[37] Fast data search. Fast Search & Transfer,
http://www.fastsearch.com/
[38] Atomz intranet search. Atomz, http://www.atomz.com/
[39] Google Search Appliance. Google,
http://www.google.com/enterprise/
Search Needs;metadata extraction;features;architecture;Experimentation;definition search;INFORMATION DESK;information extraction;expert finding;Algorithms;intranet search;Human Factors;information retrieval;component technologies;Intranet search;types of information
140 | Modeling Node Compromise Spread in Wireless Sensor Networks Using Epidemic Theory | Motivated by recent surfacing viruses that can spread over the air interfaces, in this paper, we investigate the potential disastrous threat of node compromise spreading in wireless sensor networks. Originating from a single infected node, we assume such a compromise can propagate to other sensor nodes via communication and preestablished mutual trust. We focus on the possible epidemic breakout of such propagations where the whole network may fall victim to the attack. Based on epidemic theory, we model and analyze this spreading process and identify key factors determining potential outbreaks. In particular, we perform our study on random graphs precisely constructed according to the parameters of the network, such as distance, key sharing constrained communication and node recovery, thereby reflecting the true characteristics therein. The analytical results provide deep insights in designing potential defense strategies against this threat. Furthermore , through extensive simulations, we validate our model and perform investigations on the system dynamics. Index Terms-- Sensor Networks, Epidemiology, Random Key Predistribution, Random Graph. | Introduction
As wireless sensor networks are unfolding their vast
potential in a plethora of application environments [1],
security still remains one of the most critical challenges
yet to be fully addressed. In particular, a vital problem
in the highly distributed and resource constrained environment
is node compromise, where a sensor node can
be completely captured and manipulated by the adversary.
While extensive work has focused on designing schemes
that can either defend and delay node capture or timely
identify and revoke compromised nodes themselves [5],
little attention has been paid to the node compromise
process itself. Inspired by recently emerged viruses that
can spread over air interfaces, we identify in this paper
the threat of epidemic spreading of node compromises in
large scale wireless sensor networks and present a model
that captures the unique characteristic of wireless sensor
networks in conjunction with pairwise key schemes. In
particular, we identify the key factors determining the
potential epidemic outbreaks that in turn can be employed
to devise corresponding defense strategies.
A. Motivation
Due to its scarce resources and hence low defense capabilities
, node compromises can be expected to be common
phenomena for wireless sensor networks in unattended
and hostile environments. While extensive research efforts,
including those from ourselves [15], have been engineered
toward designing resilient network security mechanisms
[12], [13], the compromise itself and in particular the propagation
of node compromise (possible epidemics) have
attracted little attention.
While node compromise through physical capture and subsequent analysis is naturally constrained by the adversary's capability, software-originated compromises can be much more damaging. Specifically, the recently surfaced virus Cabir, which can spread over the air interface, has
unveiled a disastrous threat for wireless sensor networks.
Inescapably, viruses targeting wireless sensor networks
will emerge. Consequently, node compromise by way of
virus spreading (over the air interface) can effortlessly
devastate the entire network in a short period of time. With
recent advancements on sensor design empowering nodes
such as MICA2 motes with over-the-air programmability,
the network becomes vulnerable to the above described
attack. Even worse, the inherent dense, large scale nature
of sensor networks undoubtedly further facilitates the virus
propagation.
While virus spreading over the internet has been widely
studied, and notably by means of epidemic theory [2], [3],
the distance and pairwise key restricted communication
pattern in wireless sensor networks uniquely distinguish
the phenomena from those on the Internet.
B. Our Contribution
In this paper, we investigate the spreading process of
node compromise in large scale wireless sensor networks.
Starting from a single point of failure, we assume that the
adversary can effectively compromise neighboring nodes
through wireless communication and thus can threaten the
whole network without engaging in full scale physical
attacks. In particular, due to security schemes employed by
the sensor networks, we assume that communication can
only be performed when neighboring nodes can establish
mutual trust by authenticating a common key. Therefore,
node compromise is not only determined by the deployment
of sensor nodes which in turn affects node density,
but also determined by the pairwise key scheme employed
therein. By incorporating these factors of the networks,
we propose an epidemiological model to investigate the
probability of a breakout (compromise of the whole network
) and if not, the sizes of the affected components
(compromised clusters of nodes). Furthermore, we analyze
the effect of node recovery in an active infection scenario
and obtain critical values for these parameters that result
in an outbreak. Through extensive simulations, we show
that our analytical results can closely capture the effects
in a wide range of network setups.
The remainder of the paper is organized as follows. In
Section II we present the preliminaries, including the threat
model, random key pre-distribution, and epidemic theory.
In Section III, we study the compromise propagation
without node recovery and with node recovery, and detail
our analytical results. We perform experimental study in
Section IV. Related work is presented in Section V and
we conclude in Section VI.
Preliminaries
In this section, we present our threat model and briefly
overview pairwise key distribution in wireless sensor networks
and epidemic theory.
A. Threat model
We assume that a compromised node, by directly communicating with a susceptible node, can spread the infection and lead to the compromise of the susceptible node. Communication among sensor nodes is not only constrained by distance, but must also be secured, and is thus determined by the probability of pairwise key sharing. Therefore, the spreading of node compromise is
dependent on the network deployment strategy and the
pairwise key scheme employed therein. We assume that
the "seed" compromise node could be originated by an
adversary through physical capture and analysis of that
node or by other similar means.
The spread of node compromise in a wireless sensor
network, particularly thanks to its dense nature, can lead
to an epidemic effect where the whole network will get
infected. We consider this epidemic effect as the key threat
to the network and hence the investigation target of this
paper.
B. Pairwise Key Pre-distribution
As the pairwise key scheme affects the communication
and hence the propagation of the node compromise, we
provide below, a brief overview of the key distribution
schemes in wireless sensor networks.
Due to the severe resource constraint of wireless sensor
networks and limited networking bandwidth, proposed
pairwise key schemes have commonly adopted the predistribution
approach instead of online key management
schemes with prohibitive resource consumption. The concept
of pre-distribution originated from [11], where the authors propose to assign to each node a number of keys, termed a key ring, randomly drawn from a key pool. If two neighboring nodes share a common key on their key rings, a shared pairwise key exists and a secure communication can be established. Pre-distribution schemes that rely on bivariate polynomials are discussed in [13]. In this scheme, each sensor node is pre-distributed a set of polynomials. Two sensor nodes with the same polynomials can derive the same key.
Regardless of the specific key distribution scheme, a
common parameter capturing the performance is the probability
that two neighbors can directly establish a secure
communication. We denote this probability by
q. As will be revealed later, q plays an important role in the spreading
of node compromise, because direct communication, as
explained in the threat model, can result in propagation of
malicious code.
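As a concrete illustration (not taken from [11] verbatim), for the basic random key pre-distribution scheme q can be computed from the key pool size and the key ring size; the short Python sketch below uses the standard combinatorial expression for the probability that two independently drawn key rings overlap. The numeric parameters are examples only.

```python
from math import comb

def key_share_probability(pool_size, ring_size):
    """Probability q that two nodes, each holding a ring of `ring_size` keys
    drawn uniformly at random from a pool of `pool_size` keys, share at
    least one key (basic random key pre-distribution)."""
    # P(no common key) = C(P - k, k) / C(P, k)
    p_disjoint = comb(pool_size - ring_size, ring_size) / comb(pool_size, ring_size)
    return 1.0 - p_disjoint

# e.g., a pool of 10,000 keys with 75 keys per node gives q of roughly 0.43
print(key_share_probability(10_000, 75))
```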
C. Node Recovery
In the event that a node is compromised, its secrets will
be revealed to the attacker. The network may attempt to
recover the particular node. Recovery might be realized
in several possible ways. For example, the keys of the
nodes might be revoked and the node may be given a
fresh set of secret keys. In this context, key revocation,
which refers to the task of securely removing keys that
are known to be compromised, has been investigated as
part of the key management schemes, for example in
[5]. Moreover, recovery can also be achieved by simply removing the compromised node from the network, for example by announcing a blacklist, or by simply reloading the node's programs. More sophisticated methods may include immunizing a node with an appropriate antivirus patch that might render the node immune to the same virus attack. Regardless, in our analysis, we will study virus spreading under the two cases, depending on whether a node can be recovered or not.
D. Epidemic Theory
Originally, epidemic theory concerns contagious diseases spreading in human society. The key feature of epidemiology [2], [7] is the measurement of infection outcomes in relation to a population at risk. The population at risk basically comprises the set of people who
possess a susceptibility factor with respect to the infection.
This factor is dependent on several parameters including
exposure, spreading rate, previous frequency of occurrence
etc., which define the potential of the disease causing
the infection. Example models characterizing the infection
spreading process include the Susceptible Infected Susceptible
(SIS) Model, Susceptible Infected Recovered (SIR)
Model etc. In the former, a susceptible individual acquires
infection and then after an infectious period, (i.e., the time
the infection persists), the individual becomes susceptible
again. On the other hand, in the latter, the individual
recovers and becomes immune to further infections.
Of particular interest is the phase transition of the
spreading process that is dependent on an epidemic threshold
: if the epidemic parameter is above the threshold, the
infection will spread out and become persistent; on the
contrary, if the parameter is below the threshold, the virus
will die out.
Epidemic theory indeed has been borrowed to the
networking field to investigate virus spreading. In this
paper, we will mainly rely on a random graph model to
characterize the unique connectivity of the sensor network
and perform the epidemic study [8], [10].
III Modelling and Analysis of Compromise Propagation
In this section, we analyze the propagation of node
compromise originating from a single node that has been
affected. Our focus is to study the outbreak point of the
epidemic effect where the whole network will fall victim
to the compromise procedure.
Our key method is to characterize the sensor network
, including its key distribution, by mathematically
formulating it as a random graph whose key parameters
are precisely determined by those of the sensor network.
Therefore, the investigation of epidemic phenomena can
be performed on the random graph instead. Following
this approach, we observe the epidemic process under two
scenarios: without node recovery and with node recovery,
depending on whether infected nodes will be recovered by
external measures like key revocation, immunization, etc.
A. Network Model as Random Graph
Assume that sensor nodes are uniformly deployed in a disc area with radius R. Let ρ = N/(πR²) denote the node density of the network, where N is the total number of nodes. For a sensor node with communication range r, the probability that l nodes are within its communication range is given by
$$p(l) = \binom{n}{l} p^l (1 - p)^{n-l} \qquad (1)$$
where p is defined by
$$p = \frac{\pi r^2}{\pi R^2} = \frac{\pi r^2 \rho}{N}. \qquad (2)$$
Thus p is the probability of a link existing at the physical level, i.e., whether the two nodes fall within their respective communication ranges.
We further assume that the probability that two neighboring nodes share at least one key under the random pairwise key pre-distribution scheme is q. Notice that q is determined by the specific pairwise key scheme employed. For a particular node having l neighboring nodes, the probability that there are k nodes, k ≤ l, sharing at least one key with it is given by
$$p(k|l) = \binom{l}{k} q^k (1 - q)^{l-k} \qquad (3)$$
Therefore, the probability of having k neighboring nodes sharing at least one key is
$$p(k) = \sum_{l=k}^{n} p(l)\, p(k|l) \qquad (4)$$
$$= \sum_{l=k}^{n} \binom{n}{l} p^l (1 - p)^{n-l} \binom{l}{k} q^k (1 - q)^{l-k} \qquad (5)$$
Thus, based on both physical proximity and the probability
of key sharing between neighbors, we get a degree
distribution
p(k). Notice that this degree distribution can
be employed to generate a random graph
G. Since G possesses
the same property in terms of secure communication
pattern as the sensor network of concern, we will next
perform the analysis on
G instead.
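The following Python sketch (illustrative, not from the paper) evaluates the degree distribution p(k) of equations (4)-(5) numerically. As a sanity check, thinning a Binomial(n, p) by an independent probability q yields a Binomial(n, pq), so the double sum collapses to a single binomial with success probability pq.

```python
# Illustrative sketch: numerically evaluate the key-sharing degree
# distribution p(k) for n candidate neighbors, physical link probability p
# and key sharing probability q (equations (4)-(5)).
from math import comb

def degree_distribution(n, p, q):
    pk = []
    for k in range(n + 1):
        total = 0.0
        for l in range(k, n + 1):
            p_l = comb(n, l) * p**l * (1 - p)**(n - l)           # l nodes in range
            p_k_given_l = comb(l, k) * q**k * (1 - q)**(l - k)   # k of them share a key
            total += p_l * p_k_given_l
        pk.append(total)
    return pk

pk = degree_distribution(n=100, p=0.25, q=0.04)
print(sum(pk))                               # ~1.0 (a valid distribution)
print(sum(k * v for k, v in enumerate(pk)))  # mean degree ~ n*p*q = 1.0
```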
B. Compromise Spread Without Node Recovery
Given the random graph construction, we now analyze
the case of compromise spread when no node recovery
is performed. In other words, a compromised sensor node
will remain infectious indefinitely.
Let G₀(x) be the generating function of the degree distribution of a randomly chosen vertex in G, defined by
$$G_0(x) = \sum_{k=0}^{\infty} p(k) x^k \qquad (6)$$
Moreover, with G₁(x) given by
$$G_1(x) = \frac{1}{G_0'(1)}\, G_0'(x) \qquad (7)$$
and with β denoting the infection probability, i.e., the probability of a node being infected by communicating with a compromised node, then following the analysis presented in [8], the average size of the outbreak is derived as
$$\langle s \rangle = 1 + \frac{\beta G_0'(1)}{1 - \beta G_1'(1)}. \qquad (8)$$
The infection probability β essentially captures the spreading capability of the virus that could compromise the network: the larger it is, the stronger the virus. We assume that its value can be obtained by means of measurement or analysis.
Given the above result, we can see that the outbreak point for the network is β = 1/G₁'(1), which marks the onset of an epidemic. For β > 1/G₁'(1) we have an epidemic in the form of a giant component in the random network, and the size S of the epidemic, where S denotes the expected fraction of the network that will be compromised if an outbreak happens, is given by S = 1 - G₀(u). Here u is the root of the self-consistency relation u = G₁(u).
Intuitively, the above conclusion reveals that if β ≤ 1/G₁'(1), the component of compromised nodes is finite in size regardless of the size of the network, and each node's probability of being compromised is zero for large networks. On the contrary, if β > 1/G₁'(1), there always exists a finite probability for a node to be compromised.
Fig. 1. Size of compromised node clusters for key sharing probabilities q = 0.01, 0.02, 0.04, 0.1: (a) non-epidemic cluster size vs. infection probability β, and (b) epidemic size (fraction of the network compromised) vs. infection probability β. Panel (a) depicts the average size of infected clusters when there is no epidemic and (b) shows the epidemic size as the fraction of the entire network. The point where a non-zero value appears indicates the transition from non-epidemic to epidemic.
Fig. 1 depicts this effect for a network with
N = 1000
nodes with different key sharing probabilities
q. The underlying
physical topology, determined by the communication
range and node density, has an average edge probability
of
p = 0.25. Given the physical deployment, we vary the
probability of direct pairwise key sharing (
q) and study
the point of outbreak. As we can see in Fig. 1, while
undoubtedly increasing
q can facilitate communication in
the network, the network also becomes more vulnerable
to virus spreading. Specifically, when q = 0.01, a network-wide breakout is only possible when a compromised node has an infection probability (β) larger than 0.4 to infect a neighbor. We note that in this case, we have an average node degree of 2.5. On the contrary, this probability only needs to be around 0.05 when q = 0.1, which subsequently makes the node degree 25. Fig. 1(b) illustrates the fraction of the network that is ultimately infected as the infection probability is increased beyond the critical point of the onset of outbreak. For instance, we observe that when q = 0.1, the whole network is compromised with a β value of less than 0.2. On the contrary, with q = 0.01, 80% of the network could be compromised only with a high value of β = 0.8.
In summary, Fig. 1 clearly indicates the tradeoff between
key sharing probability among sensor nodes and the
vulnerability of the network to compromise.
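To make the threshold concrete, the short sketch below (illustrative Python, using the binomial form of the degree distribution noted earlier) computes G₀'(1), G₁'(1), the critical infection probability 1/G₁'(1), and the mean outbreak size of equation (8). With n = 1000, p = 0.25 and q = 0.01 it yields a critical value of roughly 0.4, consistent with the discussion above; the parameter choices are otherwise illustrative.

```python
# Illustrative sketch: epidemic threshold and mean outbreak size below the
# threshold, using the fact that the degree distribution is Binomial(n, p*q).
from math import comb

def binomial_pk(n, p, q):
    s = p * q
    return [comb(n, k) * s**k * (1 - s)**(n - k) for k in range(n + 1)]

def g0_prime_1(pk):
    return sum(k * v for k, v in enumerate(pk))          # mean degree

def g1_prime_1(pk):
    return sum(k * (k - 1) * v for k, v in enumerate(pk)) / g0_prime_1(pk)

def mean_outbreak_size(pk, beta):
    g0p, g1p = g0_prime_1(pk), g1_prime_1(pk)
    if beta * g1p >= 1.0:
        return float("inf")                              # epidemic regime
    return 1.0 + beta * g0p / (1.0 - beta * g1p)         # equation (8)

pk = binomial_pk(n=1000, p=0.25, q=0.01)                 # mean degree ~2.5
print("critical infection probability:", 1.0 / g1_prime_1(pk))   # ~0.4
print("mean outbreak size at beta = 0.3:", mean_outbreak_size(pk, 0.3))
```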
C. Compromise Spread With Node Recovery
In this case, we assume that the network has the
capability to recover some of the compromised nodes
by either immunization or removal from the network. To
capture this recovery effect, we assume that an infected node recovers or is removed from the network after an average duration of infectivity τ. In other words, a node in the sensor network remains infective for an average period τ, after which it is immunized. During this infective period, the node transmits the epidemic to its neighbors with the infection rate β, denoting the probability of infection per unit time. Evidently, the parameter τ is critical to the analysis as it measures how soon a compromised node recovers. Naturally, we will perform our analysis following the SIR model in epidemic theory [10], [8].
First, consider a pair of adjacent nodes where one is infected and the other is susceptible. If T denotes the compromise transmission probability, given the above definitions for β and τ, we can say that the probability that the disease will not be transmitted from the infected to the susceptible node is given by
$$1 - T = \lim_{\delta t \to 0} (1 - \beta\,\delta t)^{\tau/\delta t} = e^{-\beta\tau}. \qquad (9)$$
Subsequently, we have the transmission probability
$$T = 1 - e^{-\beta\tau}.$$
In other words, the compromise propagation can be considered as a Poisson process with average βτ. The outcome of this process is the same as bond percolation, and T is basically analogous to the bond occupation probability on the graph representing the key sharing network. Thus, the outbreak size would be precisely the size of the cluster of vertices that can be reached from the initial vertex (infected node) by traversing only occupied edges, which are occupied with probability T. Notice that T explicitly captures node recovery in terms of the parameter τ.
Replacing β with T in Equation 8, and following similar steps, we get the size of the average cluster as
$$\langle s \rangle = 1 + \frac{T G_0'(1)}{1 - T G_1'(1)}, \qquad (10)$$
and the epidemic size is obtained by
$$S = 1 - G_0(u; T), \qquad (11)$$
where u is obtained by
$$u = G_1(u; T), \qquad (12)$$
and G₀(u; T) and G₁(u; T) are given respectively by
$$G_0(u; T) = G_0(1 + (u - 1)T), \qquad (13)$$
and
$$G_1(u; T) = G_1(1 + (u - 1)T). \qquad (14)$$
Fig. 2. Size of compromised node clusters vs. infectivity duration τ, for infection rates β ranging from 0.01 to 0.2: (a) non-epidemic cluster size vs. infectivity duration and (b) epidemic size (fraction of the network compromised) vs. infectivity duration. Panel (a) depicts the average size of infected clusters when there is no epidemic and (b) shows the epidemic size as the fraction of the entire network. The point where a non-zero value appears indicates the transition from non-epidemic to epidemic.
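For the recovery case, the same machinery gives a critical average infectivity duration once the infection rate is fixed, via T = 1 - e^(-βτ) and the outbreak condition T·G₁'(1) = 1. The sketch below is illustrative only; the network parameters are assumptions and are not meant to reproduce the exact curves shown in the figures.

```python
# Illustrative sketch: critical average infectivity duration tau_c for a
# given infection rate beta, using T = 1 - exp(-beta*tau) and the
# percolation threshold T_c = 1/G_1'(1). Parameters are assumed values.
from math import comb, log

n, p, q = 1000, 0.25, 0.01                       # assumed network parameters
s = p * q
pk = [comb(n, k) * s**k * (1 - s)**(n - k) for k in range(n + 1)]
mean_k = sum(k * v for k, v in enumerate(pk))
g1p = sum(k * (k - 1) * v for k, v in enumerate(pk)) / mean_k

T_c = 1.0 / g1p                                  # critical transmissibility
for beta in (0.01, 0.02, 0.04, 0.2):
    if T_c >= 1.0:
        print(f"beta = {beta}: no epidemic possible for any infectivity duration")
    else:
        tau_c = -log(1.0 - T_c) / beta           # solve 1 - exp(-beta*tau) = T_c
        print(f"beta = {beta}: critical infectivity duration ~ {tau_c:.1f} time units")
```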
Fig. 2 summarizes this effect, depicting the epidemic outbreak against the average recovery time τ for the respective infection rates β. The plots are for a sensor network with a typical average degree of 10. In Fig. 2(a), we can identify the average duration that an infected node is allowed to remain infective before an epidemic outbreak occurs. We notice that, when the infection rate is 0.01, infected nodes have to be recovered/removed on the average in less than 100 time units in order to prevent an epidemic. As expected, this time is much lower when the infection rate is 0.2. Fig. 2(b) depicts the epidemic outbreak point for different infection rates β in terms of the average duration of infectivity of a node.
We remark that both the analytical and experimental results have significant implications for security scheme design in terms of revoking/immunizing compromised nodes in wireless sensor networks: they dictate the speed at which the network must react in order to contain/prevent a network-wide epidemic.
Simulation
We employ a discrete event-driven simulation to accurately simulate the propagation of the infection spreading
process. In this section, we first outline our discrete-event
driven simulation model for the gradual progress of the
spread of node compromise. Then we use this model to
capture the time dynamics of the spread of the compromise
in the whole population.
A. Simulation Setup
In our simulation, we assume the number of sensor
nodes in the network to be 1000. The sensor network
is produced by uniformly distributing the sensors in a
1200 × 1200 unit²
area. The communication range of each
node is assumed to be 100 units. Our goal is to make
the physical network fairly connected with an average
node degree of around 20 to 25. We use the key sharing
probability on top of this network to further reduce the
average node degree of the final key sharing network to
typical values of 3 and 10.
We employ the random key pre-distribution scheme
described in [11] to establish the pairwise key among
sensor nodes. By tuning the parameters of the scheme,
we can achieve any specific values for the probability of
any two neighbors to share at least one key.
Our simulation works in two phases. In the first phase,
we form the network where each node identifies its set of
neighbors and entries are made into a neighbor table. The
average degree of the key sharing network is controlled by
changing the value of the key sharing probability between
neighbors. The entry for each node in the neighborhood
table can indicate whether a node is susceptible, infected or
recovered. We use typical values obtained for the average
node degree of the network, namely, 3 and 10.
In the second phase, we simulate actual virus propagation
. Initially, at
t = 0, the number of infected nodes,
denoted by I(0), is set to 1. At any time point t,
the population is divided into the group of susceptible
nodes,
S(t), and the group of infected nodes, I(t). In
the situation where we have nodes that are immunized
and thus recovered, we denote that this set of recovered
nodes by
R(t). The sub-population dynamics is obtained
by observing the population counts after fixed simulation
intervals of 1 time unit. We assume that the time it takes
for an infected node to infect its susceptible neighbor is
negative exponentially distributed with a mean of 1 unit
time.
There are two simulation scenarios corresponding to our
analysis.
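For concreteness, a minimal event-driven SIR sketch in Python is given below. It follows the general two-phase structure described above, but the exact way the infection probability interacts with the exponentially distributed infection delays, and the use of a fixed infectious period, are our own simplifying assumptions rather than the authors' implementation.

```python
# Illustrative event-driven SIR sketch on a distance- and key-constrained
# random graph; not a faithful reproduction of the authors' simulator.
import heapq
import random
from math import hypot, inf

def build_key_sharing_graph(n=1000, side=1200.0, comm_range=100.0, q=0.04, seed=1):
    rng = random.Random(seed)
    pos = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            in_range = hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]) <= comm_range
            if in_range and rng.random() < q:     # neighbors that also share a key
                adj[i].append(j)
                adj[j].append(i)
    return adj

def simulate_sir(adj, beta=0.1, tau=inf, seed=2):
    """Return the fraction of nodes ever compromised; tau=inf disables recovery."""
    rng = random.Random(seed)
    state = ["S"] * len(adj)                      # S, I or R
    events = [(0.0, "infect", rng.randrange(len(adj)))]
    ever_infected = 0
    while events:
        t, kind, u = heapq.heappop(events)
        if kind == "infect" and state[u] == "S":
            state[u] = "I"
            ever_infected += 1
            if tau != inf:
                heapq.heappush(events, (t + tau, "recover", u))
            for v in adj[u]:
                if state[v] == "S" and rng.random() < beta:
                    delay = rng.expovariate(1.0)  # mean 1 time unit
                    if delay <= tau:              # u must still be infective
                        heapq.heappush(events, (t + delay, "infect", v))
        elif kind == "recover":
            state[u] = "R"
    return ever_infected / len(adj)

adj = build_key_sharing_graph()
print("fraction compromised, no recovery:", simulate_sir(adj, beta=0.1))
print("fraction compromised, tau = 10:   ", simulate_sir(adj, beta=0.1, tau=10))
```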
B. Simulation Results and Discussion
1) Simulation Results for No Recovery Case: The simulation results for the case without recovery are shown in Fig. 3. We vary the value of the infection probability β under different network connectivities and study the time dynamics of the infected population. We notice, as expected, that an increase in the average node degree from 5 to 10 has an impact on the rate of compromise of the network. For instance, the curve with the lowest β value (0.05) has compromised the entire network by simulation time 700 when the average node degree is 10. However, with the node degree at 5, a β value of 0.05 could compromise up to 70% of the network by that same simulation time. Thus, we find that in the no-recovery case the two key parameters affecting the network compromise rate are the infection probability β and the average node degree.
Fig. 3. System dynamics without recovery: fraction of compromised nodes over time for infection probabilities β = 0.05, 0.1, 0.2 and 0.3 (N = 1000), with average node degree (a) z = 5 and (b) z = 10.
2) Simulation Results for Recovery Case: Fig. 4 and 5
show the simulation results for the three sub-populations
(infected, immunized, and susceptible) in the situation
where nodes do recover.
In Fig. 4 we see the effects of the infectivity duration τ₀ and the infection rate β on the dynamics of the epidemic. In Fig. 4(c), the highest point is reached very fast because of the high value of β. Thereafter, its recovery also takes less time. However, in Fig. 4(a), β is smaller but τ₀ is higher (i.e., 30), so the infection rises slowly and also falls slowly because of the long recovery time.
In comparison, Fig. 5 corresponds to better connectivity, with an average node degree of 10, which in turn increases the rate of infection significantly. Comparing Fig. 4(c) and Fig. 5(c), we observe that infection penetration is higher in the latter even in the presence of a smaller value of β. Fig. 5(c) shows that even with a low value of β, the infection still rises to above 60%.
Therefore, we observe that network connectivity has a high impact on the infection propagation and on the speed of reaching the maximal point of outbreak. However, thereafter, during the recovery phase, τ₀ strongly affects the time it takes to recover the whole network.
Related Work
The mathematical modeling of epidemics is well documented
[2], [7]. In fact, visualizing the population as a
complex network of interacting individuals has resulted in
the analysis of epidemics from a network or graph theoretic
point of view [8], [9], [10].
Node compromise in sensor networks and the need for
their security has also received immense attention [4]. A
large portion of current research on security in sensor
networks has been focused on protocols and schemes for
securing the communication between nodes [12], [13].
Revocation of keys of compromised nodes has been studied
in [14]. In [4], the authors demonstrate the ease with which
a sensor node can be compromised and all its information
extracted. Unfortunately, little work has been done on the
defense strategies when the compromise of a single node
could be used to compromise other nodes over the air. In
this paper, we take the first step to model this potential
disastrous propagation. In [6], the authors used an epidemic
modeling technique for information dissemination in
a MANET. However, they assumed homogeneous mixing
which is not possible in a static sensor network such as ours.
In our work, we adopted some of the results presented in
[8] where the author proposes a percolation theory based
evaluation of the spread of an epidemic on graphs with
given degree distributions. However, little has been shown
there on the temporal dynamics of the epidemic spread and
the authors only studied the final outcome of an infection
spread.
Conclusion
In this paper, we investigate the potential threat for compromise
propagation in wireless sensor networks. Based
on epidemic theory, we model the process of compromise
spreading from a single node to the whole network. In
particular, we focus on the key network parameters that
determine a potential epidemic outbreak in the network.
Due to the unique distance and key sharing constrained
communication pattern, we resort to a random graph model
which is precisely generated according to the parameters of
the real sensor network and perform the study on the graph.
Furthermore, we introduce the effect of node recovery after
compromise and adapt our model to accommodate this
effect. Our results reveal key network parameters in defending
and containing potential epidemics. In particular,
the result provides a benchmark time period within which the network must recover a node in order to defend against the epidemic spreading. Our extensive simulation results validate our analyses and, moreover, provide insights into the dynamics of the system in terms of its temporal evolution.
Fig. 4. The dynamics of the infected, susceptible and recovered sub-populations with recovery, for average node degree z = 3: (a) β = 0.6, τ₀ = 30; (b) β = 0.7, τ₀ = 20; (c) β = 0.9, τ₀ = 10.
Fig. 5. The dynamics of the infected, susceptible and recovered sub-populations with recovery, for average node degree z = 10: (a) β = 0.15, τ₀ = 30; (b) β = 0.17, τ₀ = 20; (c) β = 0.2, τ₀ = 10.
References
[1] I Akyildiz, W. Su, Y Sankarasubramaniam, and E. Cayirci, "A
Survey on sensor networks," IEEE Communications Magazine, vol.
40, no. 8, 2002.
[2] R. M. Anderson and R. M. May, "Infectious Diseases of Human:
Dynamics and Control" (Oxford Univ. Press, Oxford, 1991).
[3] S. Staniford, V. Paxson, and N. Weaver. "How to Own the Internet
in Your Spare Time". In 11th Usenix Security Symposium, San
Francisco, August, 2002.
[4] C. Hartung, J. Balasalle, and R. Han, "Node Compromise in Sensor
Networks: The Need for Secure Systems", Technical Report CU-CS
-990-05 (2005).
[5] H. Chan, V. D. Gligor, A. Perrig, G. Muralidharan, "On the Distribution
and Revocation of Cryptographic Keys in Sensor Networks",
IEEE Transactions on Dependable and Secure Computing 2005.
[6] A. Khelil, C. Becker, J. Tian, K. Rothermel, "An Epidemic Model
for Information Diffusion in MANETs", MSWiM 2002, pages 54-60
.
[7] N. T. J. Bailey, "The Mathematical Theory of Infectious Diseases
and its Applications". Hafner Press, New York (1975)
[8] M. E. J. Newman, "Spread of epidemic disease on networks", Phys.
Rev. E, 66 (2002), art. no. 016128.
[9] C. Moore and M. E. J. Newman, "Epidemics and percolation in
small- world networks". Phys. Rev. E 61, 5678-5682 (2000)
[10] P. Grassberger, "On the critical behavior of the general epidemic
process and dynamic percolation", Math. Biosc. 63 (1983) 157.
[11] L Eschenauer and V. D. Gligor. "A key-management scheme for
distributed sensor networks", in Proc. of the 9th Computer Communication
Security - CCS '02, pages 4147, Washington D.C.,
USA, November 2002.
[12] H. Chan, A. Perrig, and D. Song, "Random key predistribution
schemes for sensor networks", in Proc. of the IEEE Symposium
on Research in Security and Privacy - SP '03, pages 197215,
Washington D.C., USA, May 2003.
[13] Donggang Liu and Peng Ning, "Establishing pairwise keys in
distributed sensor networks", in Proc. of the 10th ACM Conference
on Computer and Communications Security - CCS '03, pages 52
61, Washington D.C., USA, October 2003.
[14] H. Chan; V.D. Gligor, A. Perrig, G. Muralidharan, "On the Distribution
and Revocation of Cryptographic Keys in Sensor Networks",
IEEE Transactions on Dependable and Secure Computing, Volume
2, Issue 3, July-Sept. 2005
[15] A. Chadha, Y. Liu. and S. Das, "Group key distribution via local
collaboration in wireless sensor networks," in Proceedings of the
IEEE SECON 2005, Santa Clara, CA, Sept. 2005.
Random Key Predistribution;Sensor Networks;Random Graph;Epidemiology
141 | Modelling Adversaries and Security Objectives for Routing Protocols in Wireless Sensor Networks | The literature is very broad considering routing protocols in wireless sensor networks (WSNs). However, security of these routing protocols has fallen beyond the scope so far. Routing is a fundamental functionality in wireless networks, thus hostile interventions aiming to disrupt and degrade the routing service have a serious impact on the overall operation of the entire network. In order to analyze the security of routing protocols in a precise and rigorous way, we propose a formal framework encompassing the definition of an adversary model as well as the "general" definition of secure routing in sensor networks. Both definitions take into account the feasible goals and capabilities of an adversary in sensor environments and the variety of sensor routing protocols. In spirit, our formal model is based on the simulation paradigm that is a successfully used technique to prove the security of various cryptographic protocols. However, we also highlight some differences between our model and other models that have been proposed for wired or wireless networks. Finally, we illustrate the practical usage of our model by presenting the formal description of a simple attack against an authenticated routing protocol, which is based on the well-known TinyOS routing. | INTRODUCTION
Routing is a fundamental function in every network that
is based on multi-hop communications, and wireless sensor
networks are no exceptions.
Consequently, a multitude
of routing protocols have been proposed for sensor
networks in the recent past. However, most of these protocols
have not been designed with security requirements in
mind. This means that they can badly fail in hostile environments
. Paradoxically, research on wireless sensor networks
has been mainly fuelled by their potential applications in military settings where the environment is hostile. The natural question that may arise is why, then, the security of routing protocols for sensor networks has remained out of the scope of research so far.
We believe that one important reason for this situation
is that the design principles of secure routing protocols for
wireless sensor networks are poorly understood today. First
of all, there is no clear definition of what secure routing
should mean in this context. Instead, the usual approach,
exemplified in [10], is to list different types of possible attacks
against routing in wireless sensor networks, and to
define routing security implicitly as resistance to (some of)
these attacks. However, there are several problems with this
approach. For instance, a given protocol may resist a different
set of attacks than another one. How to compare these
protocols? Shall we call them both secure routing protocols
? Or on what grounds should we declare one protocol
more secure than another? Another problem is that it is
quite difficult to carry out a rigorous analysis when only a
list of potential attack types is given. How can we be sure that all possible attacks of a given type have been considered
in the analysis? It is not surprising that when having such
a vague idea about what to achieve, one cannot develop the
necessary design principles. It is possible to come up instead
with some countermeasures, similar to the ones described in
[10], which are potentially useful to thwart some specific
types of attacks, but it remains unclear how to put these
ingredients together in order to obtain a secure and efficient
routing protocol at the end.
In order to remedy this situation, we propose to base the
design of secure routing protocols for wireless sensor networks
on a formal security model.
While the benefit of
formal models is not always clear (indeed, in some cases,
they tend to be overly complicated compared to what they
achieve), we have already demonstrated their advantages
in the context of ad hoc network routing protocols. More
specifically, we developed formal security models in [4, 1, 2],
and we successfully used them to prove the security of some
ad hoc network routing protocols, and to find security holes
in others. The idea here is to use the same approach in the
context of wireless sensor networks. The rationale is that
routing protocols in sensor networks are somewhat similar
to those in ad hoc networks, hence they have similar pitfalls
and they can be modeled in a similar way.
Thus, in this paper, we present a formal model, in which
security of routing is precisely defined, and which can serve
as the basis for rigorous security analysis of routing protocols
proposed for wireless sensor networks. Our model is based
on the simulation paradigm, where security is defined in
terms of indistinguishability between an ideal-world model
of the system (where certain attacks are not possible by
definition) and the real-world model of the system (where
the adversary is not constrained, except that he must run in
polynomial time). This is a standard approach for defining
security; however, it must be adapted carefully to the specific
environment of wireless sensor networks.
Similar to [4], in this paper, we develop an adversary
model that is different from the standard Dolev-Yao model,
where the adversary can control all communications in the
system. In wireless sensor networks, the adversary uses wireless
devices to attack the systems, and it is more reasonable
to assume that the adversary can interfere with communications
only within its power range. In addition, we must
also model the broadcast nature of radio communications.
However, in addition to the model described in [4], here
we take into account that there are some attacks which exploit
the constrained energy supply of sensor nodes (e.g., the
adversary decreases the network lifetime by diverting the
traffic in order to overload, and thus, deplete some sensor
nodes). Hence, we explicitly model the energy consumption
caused by sending a message between each pair of nodes in
the network.
Another difference with respect to the model of [4] lies in
the definition of the outputs of the ideal-world and the real-world
models. It is tempting to consider the state stored
in the routing tables of the nodes as the output, but an
adversary can distort that state in unavoidable ways. This
means that if we based our definition of security on the indistinguishability
of the routing states in the ideal-world and
in the real-world models, then no routing protocol would
satisfy it. Hence, we define the output of the models as a
suitable function of the routing state, which hides the unavoidable
distortions in the states. This function may be different
for different types of routing protocols, but the general approach of comparing the outputs of this function in the ideal-world and in the real-world models remains the same. For instance, this function could be the average length of the shortest paths between the sensor nodes and the base station; then, even if the routing tables of the nodes were not always the same in the ideal-world and in the real-world models, the protocol would still be secure, given that the difference between the distributions of the average length of the shortest paths in the two models is negligibly small.
The rest of the paper is organized as follows: In Section 2,
we present the elements of our formal model, which includes
the presentation of the adversary model adopted to wireless
sensor networks, the description of the ideal-world and the
real-world models, the general definition of the output of
these models, as well as the definition of routing security.
Then, in Section 3, we illustrate the usage of our model by
representing in it a known insecurity of an authenticated
version of the TinyOS routing protocol.
Finally, in Section
4, we report on some related work, and in Section 5, we
conclude the paper.
We must note that the work described in this paper is a
work in progress, and it should be considered as such. In
particular, the reader will not find security proofs in this
paper. There are two reasons for this: first, we are still
developing the proof techniques, and second, we have not yet
identified any routing protocols that would be secure in
our model.
THE MODEL OF WIRELESS SENSOR NETWORKS
2.1 Adversary model
The adversary is represented by adversarial nodes in the
network. An adversarial node can correspond to an ordinary
sensor node, or a more resourced laptop-class device.
In the former case, the adversary may deploy some corrupted
sensor-class devices or may capture some honest sensor
nodes. In the latter case, he has a laptop-class device
with a powerful antenna and unconstrained energy supply.
All of these adversarial nodes may be able to communicate via out-of-band channels (e.g., another frequency channel or a direct wired connection), which may be used to create wormholes.
In general, when capturing honest sensor nodes, the adversary
may be able to compromise their cryptographic secrets
(assuming that such secrets are used in the system).
However, in this paper, we assume that the adversary cannot
compromise cryptographic material. This is certainly
a simplifying assumption, and we intend to relax it in our
future work.
The adversary attacking the routing protocol primarily
intends to shorten the network lifetime, degrade the packet
delivery ratio, increase his control over traffic, and increase
network delay. Some of these goals are highly correlated;
e.g., increasing hostile control over traffic may also cause
the network delay to be increased.
In order to achieve the aforementioned goals, the adversary
is able to perform simple message manipulations: fabricated message injection, message deletion, message modification, and re-ordering of message sequences. In the following, we describe how the adversary can perform message
deletion and injection in a wireless sensor network.
Re-ordering of message sequences is straightforward using
message deletion and insertion, thus, we do not elaborate it
further.
Basically, an adversarial node can affect the communication
of two honest nodes in two cases: In the first case, an adversarial
node relays messages between honest nodes which
are not able to communicate directly with each other. In
the second case, the honest nodes can also reach each other,
and the adversarial node can also hear the nodes' communication
, i.e., he can send and receive messages to/from both
honest nodes. We further assume that communication range
implies interference range, and vice-versa.
In case of adversarial relaying of messages between the
nodes, all of the message manipulations are quite straightforward
. On the contrary, if the honest nodes can also communicate
with each other, message manipulations must be
performed in a very sophisticated way. The adversarial node
can inject messages easily, but deletion and modification require jamming capability. Message deletion may be achieved
by employing various selective jamming techniques against
either the sender node or the receiver node. Message modification
is only feasible, if both the sender and the receiver
nodes are within the communication range of the adversarial
node. Here, we sketch two scenarios for message modification, which are illustrated in Figure 1. By these simple examples, we intend to point out the feasibility of message modification even assuming direct communication between the sender and the receiver node.
Scenario 1: There are two honest nodes X and Y, and node X intends to send a message m to node Y. A_1 and A_2 are adversarial nodes, where A_2 is able to interfere with Y's communication, but not with X's and A_1's communication. Let A_1 be in the communication range of X and Y, whereas A_2 can only communicate with Y. When X transmits m to Y, node A_1 overhears m, meanwhile A_2 performs jamming to cause Y not to be able to receive m. In order to take this action, A_1 and A_2 are connected by an out-of-band channel; thus, A_1 can send a signal to A_2 when A_2 should start jamming Y's communication. It is also feasible that A_2 performs constant jamming for a certain amount of time; afterwards, A_1 can send the modified message m′ to Y.
Scenario 2: In this scenario, there is only one adversarial
node denoted by A. We assume that transmitting a message
from the routing sublayer consists of passing the message to
the data-link layer, which, after processing the message, also
passes it further to the physical layer. The data-link layer
uses CRC in order to provide some protection against faults
in noisy channels; a sender generally appends a frame check
sequence to each frame (e.g., see [7]). The adversary can
exploit this CRC mechanism to modify a message in the
following way (illustrated on Figure 1). When X transmits
message m to Y , node A also overhears m, in particular,
he can see the frame(s) belonging to m. A intends to modify
message m. Here, we must note that most messages
originated from the routing sublayer are composed of only
one frame per message in the data-link layer due to performance
reasons, especially when they are used to discover
routing topology. Upon reception of the frame corresponding
to the message, the adversary can corrupt the frame
check sequence by jamming once the data field of the frame
has been received. This causes node Y to drop the frame
(and the message), since Y detects that the last frame is incorrect
, and waits for retransmission. At this point, if some
acknowledgement mechanism is in use, A should send an acknowledgement
to X so that it does not re-send the original
frame. In addition, A retransmits message m′ in the name of X, where m′ is the modified message.
The feasibility of jamming attacks is studied and demonstrated
in [17]. Although the authors of that paper conclude that the success of jamming attacks mainly depends on the distance between the honest nodes and the jammer node, various jamming techniques have been presented there that can severely interfere with the normal operation of the network.
2.2 Network model
We assume that each honest device has exactly one antenna
in the network. If the adversary uses several antennas
we represent each of them by a distinct node. The network
nodes are considered to be static, and we further assume
that there is a single base station in the network.
Let us denote the honest nodes in the network by v_0, v_1, ..., v_k, where v_0 denotes the base station. Similarly, v_{k+1}, ..., v_{k+m} represent the adversarial nodes. The set of all nodes is denoted by V. Furthermore, n denotes the number of all nodes in the network, i.e., n = |V| = k + m + 1. For each pair of nodes v_i and v_j, we define e_{v_i,v_j} to be the energy level needed to transmit a message from v_i to v_j, where v_i, v_j ∈ V. These values can be ordered in a matrix of size n × n, called the reachability matrix, which is denoted by E. (In this paper, the rows and the columns of all matrices are numbered from zero.) In the rest of the paper, if we intend to emphasize the distinction between the honest and the adversarial nodes in the notation, we denote the adversarial nodes by v*_1, ..., v*_m (where v*_ℓ = v_{k+ℓ}, 1 ≤ ℓ ≤ m).
For the sake of simplicity, we also assume that at least energy e_{v_i,v_j} is needed for node v_i to interfere with node v_j's packet reception. This means that if v_i can reach v_j, then v_i can also interfere with all the communication of v_j.
Let us assume that each node uses a globally unique identifier in the network, and that these identifiers are authenticated in some way (e.g., by symmetric keys). We denote the set of these identifiers by L, and there is a function L : V → L ∪ {undef} that assigns an identifier to each node, where undef ∉ L. According to our adversary model described in Subsection 2.1, we assume that the adversary has no (authenticated) identifier in the network, i.e., L(v*_j) = undef for all 1 ≤ j ≤ m.
We also introduce a cost function C : V → R, which assigns a cost to each node (e.g., the remaining energy in the battery, or constant 1 for each node in order to represent hop-count).
Configuration: A configuration conf is a quadruple (V, L, E, C) that consists of the set of nodes, the labelling function, the reachability matrix, and the cost function of the nodes.
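To make the notation above concrete, the following Python sketch shows one possible in-memory representation of a configuration (V, L, E, C). The field names, the use of None for undef, and float('inf') for unreachable node pairs are our own illustrative choices and are not prescribed by the model.

```python
from dataclasses import dataclass
from typing import List, Optional

UNDEF = None                  # stands for the undef label of adversarial nodes
UNREACHABLE = float("inf")    # e_{v_i,v_j} when v_i cannot reach v_j at any energy level

@dataclass
class Configuration:
    """A configuration conf = (V, L, E, C)."""
    num_honest: int                  # k + 1 honest nodes: v_0 (base station), ..., v_k
    num_adversarial: int             # m adversarial nodes: v_{k+1}, ..., v_{k+m}
    labels: List[Optional[str]]      # L: node index -> identifier, UNDEF for adversarial nodes
    reachability: List[List[float]]  # E: n x n matrix of required transmission energy levels
    cost: List[float]                # C: per-node cost (e.g., remaining battery, or 1 for hop count)

    @property
    def n(self) -> int:
        # n = |V| = k + m + 1
        return self.num_honest + self.num_adversarial

    def can_reach(self, i: int, j: int, energy: float) -> bool:
        # v_i reaches (and, by the simplifying assumption above, can interfere with) v_j
        # whenever the energy spent is at least e_{v_i,v_j}.
        return self.reachability[i][j] <= energy
```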
2.3 Security objective function
Diverse sensor applications entail different requirements
for routing protocols. For instance, remote surveillance applications
may require minimal delay for messages, while
sensor applications performing some statistical measurements
favour routing protocols prolonging network lifetime.
The diversity of routing protocols is caused by these conflicting
requirements: e.g., shortest-path routing algorithms
cannot maximize the network lifetime, since always choosing
the same nodes to forward messages causes these nodes to
run out of their energy supply sooner. Several sensor routing
protocols use a trade-off to satisfy conflicting requirements
[16, 11].
This small argument also points out that one cannot judge
the utility of all routing protocols uniformly. Without a unified
metric of utility we cannot refine our security objectives
for routing protocols. By the above argument, a routing
protocol that is secure against attacks aiming at decreasing
network-lifetime cannot be secure against attacks aiming at
increasing network delay. We model the negatively correlated
requirements of routing, and essentially, our security
objectives in a very general manner. We represent the output
of a routing protocol, which is actually the ensemble of
the routing entries of the honest nodes, with a given configuration conf by a matrix T^conf of size (k + 1) × (k + 1). Here we only consider the result of the protocol with respect to the honest nodes, since the adversarial nodes may not follow the protocol rules faithfully.
Figure 1: Message modification performed by the cooperation of two adversarial nodes A_1 and A_2 (on the right-hand side) in Scenario 1, and employing overhearing, jamming, and relaying with a single adversarial node A (on the left-hand side) in Scenario 2. Honest nodes are labelled by X and Y. Arrows between nodes illustrate the direction of communication; the sequence of message exchanges is also depicted on these arrows. Dashed arrows illustrate failed message delivery caused by jamming.
We set T^conf_{i,j} = 1 if honest node v_i sends every message to the honest node identified by L(v_j) in order to deliver the message to the base station; otherwise, T^conf_{i,j} = 0. In the rest of the paper, we refer to the result of a routing protocol with a given configuration as a routing topology, which can be considered as a directed graph described by the matrix T^conf. In the following, we will omit the index conf of T when the configuration can be unambiguously determined in a given context. In fact, T^conf is a random variable, where the randomness is caused by the sensor readings initiated randomly by the environment, the processing and transmission time of the sensed data, etc.
Let us denote the set of all configurations by G. Furthermore, T denotes the set of the routing topologies of all configurations. The security objective function F : G × T → R assigns a real number to a random routing topology of a configuration. This function intends to distinguish "attacked" topologies from "non-attacked" topologies based on a well-defined security objective. We note that the definition of F is protocol dependent. For example, let us consider routing protocols that build a routing tree, where the root is the base station. We can compare routing trees based on network lifetime by the following security objective function:

F(conf, T^conf) = (1/k) Σ_{i=1}^{k} E(v_i, conf, T^conf)

where E : V × G × T → R assigns the overall energy consumption of the path from a node v_i to v_0 (the base station) in a routing tree of a configuration. Since T^conf is a random variable, the output of F is a random variable too. If the distribution of this output in the presence of an attacker non-negligibly differs from the distribution when there is no attacker, then the protocol is not secure. If we intend to compare routing trees based on network delay, a simple security objective function may be

F(conf, T^conf) = (1/k) Σ_{i=1}^{k} M(v_i, conf, T^conf)

where M : V × G × T → R assigns the length of the path from a node to v_0 in a routing topology of a configuration.
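As an illustration of how such an objective function can be evaluated on a concrete routing topology, the sketch below computes the delay-oriented variant of F, i.e., the average length of the paths induced by the matrix T from the honest nodes to the base station v_0. The penalty used for a node without a usable path and the parameter names are our own assumptions, introduced only to keep the toy function total.

```python
from typing import List

def path_length_to_base(T: List[List[int]], i: int, max_hops: int) -> float:
    """M(v_i, conf, T): number of hops from v_i to v_0 following the parent entries in T."""
    hops, current = 0, i
    while current != 0:
        parents = [j for j in range(len(T)) if T[current][j] == 1]
        if not parents or hops >= max_hops:
            return float(max_hops)   # assumption: penalize nodes with no usable path
        current = parents[0]         # T encodes at most one parent per honest node
        hops += 1
    return float(hops)

def delay_objective(T: List[List[int]]) -> float:
    """F(conf, T) = (1/k) * sum_i M(v_i, conf, T) over the honest nodes v_1, ..., v_k."""
    k = len(T) - 1
    return sum(path_length_to_base(T, i, max_hops=k + 1) for i in range(1, k + 1)) / k

# Example: v_1 and v_2 route directly to the base station v_0, while v_3 routes via v_1.
T = [[0, 0, 0, 0],
     [1, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 1, 0, 0]]
print(delay_objective(T))   # (1 + 1 + 2) / 3 = 1.333...
```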
2.4 Dynamic model
Following the simulation paradigm, we define a real-world
model and an ideal-world model. The real-world model represents
the real operation of the protocol and the ideal-world
model describes how the system should work ideally. Both
models contain an adversary. The real-world adversary is not constrained apart from requiring it to run in polynomial time. This enables us to be concerned with arbitrary feasible attacks. In addition, the ideal-world adversary is constrained in a way that it cannot modify messages and inject extra ones, due to the construction of the ideal-world system. In other words, all attacks that modify or inject any messages are unsuccessful in the ideal-world system. However
, the ideal-world adversary can perform attacks that are
unavoidable or very costly to defend against (e.g., message
deletion).
Once the models are defined, the goal is to prove that
for any real-world adversary, there exists an ideal-world adversary
that can achieve essentially the same effects in the
ideal-world model as those achieved by the real-world adversary
in the real-world model (i.e., the ideal-world adversary
can simulate the real-world adversary).
2.4.1 Real-world model
The real-world model that corresponds to a configuration
conf = (V, L, E, C) and adversary A is denoted by sys^real_{conf,A},
and it is illustrated on Figure 2. We model the operation
of the protocol participants by interactive and probabilistic
Turing machines. Correspondingly, we represent the adversary
, the honest sensor nodes, and the broadcast nature of
the radio communication by machines A, M
i
, and C, respectively
. These machines communicate with each other
via common tapes.
Each machine must be initialized with some input data
(e.g., cryptographic keys, reachability matrix, etc.), which
determines its initial state. Moreover, the machines are also
provided with some random input (the coin flips to be used
during the operation). Once the machines have been initialized
, the computation begins. The machines operate in
a reactive manner, i.e., they need to be activated in order
to perform some computation.
When a machine is activated
, it reads the content of its input tapes, processes the
received data, updates its internal state, writes some output
on its output tapes, and goes back to sleep. The machines
are activated in rounds by a hypothetical scheduler, and each machine is activated only once in each round. The order of
activation is arbitrary with the only restriction that C must
be activated at the end of the rounds.
Now, we present the machines in more detail:
Machine C. This machine is intended to model the radio communication. It has input tapes out_i and out*_j, from which it reads messages written by M_i and A, resp. It also has output tapes in_i and in*_j, on which it writes messages to M_i and A, resp. C is also initialized by matrix E at the beginning of the computation.
Messages on tape out_i can have the format (sndr, cont, e, dest), where sndr ∈ L is the identifier of the sender, cont is the message content, e is the energy level to be used to determine the range of transmission, and dest ∈ L ∪ {∗} is the identifier of the intended destination, where ∗ indicates a broadcast message.
Messages on tape out*_j can have the following formats:
(MSG, sndr, cont, e, dest): an MSG message models a normal broadcast message sent by the adversary to machine C with sender identifier sndr ∈ L, message content cont, energy level e, and identifier of the intended destination dest ∈ L ∪ {∗}.
(JAM, e): a special JAM message, sent by the adversary to machine C, models the jamming capability of the adversary. When machine C receives a JAM message, it performs the requested jamming by deleting all messages in the indicated range e around the jamming node, which means that those deleted messages are not delivered to the nodes (including the jammer node itself) within the jamming range.
(DEL, tar, e): a special DEL message, sent by the adversary to machine C, models the modification capability of the adversary. When receiving a DEL message with identifier tar ∈ L, machine C does not deliver any messages sent by the node v ∈ V with L(v) = tar, if v is within the indicated range e, except to the adversarial node itself, which will receive the deleted messages. This models the sophisticated jamming technique that we described in Subsection 2.1.
In a more formal way, when reading a message msg_in = (MSG, sndr, cont, e, dest) from out*_j, C determines the nodes which receive the message by calculating the set of nodes V_e ⊆ V such that for all v ∈ V_e: e_{v*_j,v} ≤ e. Finally, C processes msg_in as follows.
1. if dest ∈ L ∪ {∗}, then C writes
msg_out = (sndr, cont, dest) to the input tapes of machines corresponding to honest nodes in V_e
msg_out = (MSG, sndr, cont, dest) to the input tapes of machines corresponding to adversarial nodes in V_e \ {v*_j}
2. otherwise C discards msg_in
When reading a message msg_in = (JAM, e) from out*_j, C determines the set of nodes which receive the message by calculating V_e ⊆ V such that for all v ∈ V_e: e_{v*_j,v} ≤ e. Afterwards, C does not write any messages within the same round to the input tapes of the machines corresponding to V_e.
When reading a message msg_in = (DEL, tar, e) from out*_j, C determines the set of nodes which receive the message by calculating V_e ⊆ V such that for all v ∈ V_e: e_{v*_j,v} ≤ e. Finally, C processes msg_in as follows.
1. if there exists v_x ∈ V_e (1 ≤ x ≤ k) such that L(v_x) = tar, then C does not write any messages within the same round from tape out_x to the input tapes of machines corresponding to V_e \ {v*_j}
2. otherwise C discards msg_in
When reading a message msg_in = (sndr, cont, e, dest) from out_i, C determines the set of nodes which receive the message by calculating V_e ⊆ V such that for all v ∈ V_e: e_{v_i,v} ≤ e. Finally, C processes msg_in as follows.
1. if dest ∈ L ∪ {∗}, then C writes
msg_out = (sndr, cont, dest) to the input tapes of machines corresponding to honest nodes in V_e \ {v_i}
msg_out = (MSG, sndr, cont, dest) to the input tapes of machines corresponding to adversarial nodes in V_e
2. otherwise C discards msg_in
(The handling of the adversarial message types described above is sketched in code after the machine descriptions below.)
Machine M_i. This machine models the operation of the honest sensor nodes, and it corresponds to node v_i. It has input tape in_i and output tape out_i, which are shared with machine C. The format of input messages must be (sndr, cont, dest), where dest ∈ L ∪ {∗}. The format of output messages must be (sndr, cont, e, dest), where sndr must be L(v_i), dest ∈ L ∪ {∗}, and e indicates the transmission range of the message for C. When this machine reaches one of its final states or there is a time-out during the computation process, it outputs its routing table.
Machine A. This machine models the adversary logic. Encapsulating each adversarial node into a single machine allows us to model wormholes inside A. One can imagine that the adversary deploys several antennas in the network field, which are connected to a central adversary logic. In this convention, node v*_j corresponds to an adversarial antenna, which is modelled by input tape in*_j and output tape out*_j. These tapes are shared with machine C.
The format of input messages must be msg_in = (MSG, sndr, cont, e, dest), where dest ∈ L ∪ {∗}. The format of output messages msg_out can be
(MSG, sndr, cont, e, dest), where dest ∈ L ∪ {∗} and e indicates the transmission range of the message;
(JAM, e), where e indicates the range of jamming;
(DEL, tar, e), where e indicates the range of selective jamming, and tar ∈ L.
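The following Python sketch summarizes how machine C dispatches the three message types it can read from an adversarial output tape out*_j. The per-round bookkeeping structures (deliveries, jammed, suppressed) and the parameter names are our own simplifications and are not part of the formal model.

```python
from typing import List, Optional, Set, Tuple

BROADCAST = "*"   # the broadcast destination symbol

def nodes_in_range(E: List[List[float]], sender: int, energy: float) -> Set[int]:
    """V_e: all nodes v with e_{sender,v} <= energy."""
    return {v for v in range(len(E)) if E[sender][v] <= energy}

def machine_c_step(E: List[List[float]], labels: List[Optional[str]], identifiers: Set[str],
                   adv_node: int, msg: Tuple,
                   deliveries: List[Tuple[int, Tuple]], jammed: Set[int],
                   suppressed: Set[Tuple[int, int]]) -> None:
    """One step of machine C for a message read from tape out*_j of adversarial node adv_node.

    deliveries: (recipient, payload) pairs produced in this round
    jammed:     nodes that receive nothing at all in this round (JAM)
    suppressed: (victim_sender, recipient) pairs whose delivery is blocked this round (DEL)
    """
    kind = msg[0]
    if kind == "MSG":
        _, sndr, cont, energy, dest = msg
        if dest not in identifiers and dest != BROADCAST:
            return   # malformed destination: C discards the message
        for v in nodes_in_range(E, adv_node, energy):
            if labels[v] is not None:        # honest node
                deliveries.append((v, (sndr, cont, dest)))
            elif v != adv_node:              # other adversarial antennas
                deliveries.append((v, ("MSG", sndr, cont, dest)))
    elif kind == "JAM":
        _, energy = msg
        jammed |= nodes_in_range(E, adv_node, energy)   # includes the jammer itself
    elif kind == "DEL":
        _, tar, energy = msg
        in_range = nodes_in_range(E, adv_node, energy)
        for victim in (v for v in in_range if labels[v] == tar):   # honest node labelled tar
            for recipient in in_range - {adv_node}:
                # the DEL-issuing adversarial node still receives the victim's messages
                suppressed.add((victim, recipient))
```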
The computation ends when all machines M_i reach their final states, or there is a time-out. The output of sys^real_{conf,A} is the value of the security objective function F applied to the resulting routing topology defined in Subsection 2.3 and configuration conf. The routing topology is represented by the ensemble of the routing entries of machines M_i.
Figure 2: The real-world model (on the left-hand side) and the ideal-world model (on the right-hand side).
We denote the output by Out^{real,F}_{conf,A}(r), where r is the random input of the model. In addition, Out^{real,F}_{conf,A} will denote the random variable describing Out^{real,F}_{conf,A}(r) when r is chosen uniformly at random.
2.4.2 Ideal-world model
The ideal-world model (illustrated in Figure 2) that corresponds to a configuration conf = (V, L, E, C) and adversary A′ is denoted by sys^ideal_{conf,A′}. The ideal-world model is identical to the real-world model with the exception that the ideal-world adversary cannot modify and inject extra messages. However, he is allowed to simply drop any messages or perform jamming, since these attacks are unavoidable, or at least, they are too costly to defend against. Our model is considered to be ideal in this sense. Compared to the real-world model, we replace machine C with machine C′ and machine A with machine A′ in order to implement our restricted ideal-world adversary. Hence, we only detail the operation of C′ and A′ here, since the machines M_i are the same as in the real-world model.
Receiving an MSG message from machines M_i, C′ internally stores that message with a unique message identifier in its internal store. When delivering any MSG message to A′, C′ also includes the message identifier in the message. A′ can send an MSG message to C′ with a different format; it only contains an identifier id and an energy level e. Upon the reception of such a message, C′ searches for the original message associated with identifier id in its internal store, and delivers this stored message using the energy level e. Although A′ also receives the original message with its associated identifier from C′, he is not able to modify it, since C′ only accepts a message identifier issued by C′ itself and an energy level from A′. In other words, A′ can only delete messages, since A′ can also send special DEL and JAM messages to C′. We elaborate the operation of C′ and A′ in a more formal way as follows. A′ and C′ communicate via tapes in*_j and out*_j.
Machine C′. It has input tapes out_i and out*_j, from which it reads messages written by M_i and A′, resp. It also has output tapes in_i and in*_j, on which it writes messages to M_i and A′, resp. C′ is also initialized by matrix E. In addition, it sets its internal variable id_C′ to 1 at the beginning of the computation.
C′ interacts with machines M_i in a similar way as C does in the real-world model; when reading a message msg_in = (sndr, cont, e, dest) from out_i, C′ processes msg_in identically to C in the real-world model with only one exception: before writing msg_in = (MSG, id_C′, sndr, cont, dest) to the output tapes in*_j, C′ internally stores msg_in in the set S. After writing msg_in to the output tapes in*_j, C′ increments id_C′ by one. Therefore, C′ knows what messages are passed to A′ from M_i.
Messages on out*_j can have the following formats:
(MSG, id, e): an MSG message models a normal broadcast message sent by the ideal-world adversary to machine C′, where e indicates the transmission range of the message identified by id.
(JAM, e): a special JAM message, sent by the adversary to machine C′, models the jamming capability of the ideal-world adversary, where e indicates the range of jamming.
(DEL, tar, e): a special DEL message, sent by the adversary to machine C′, models the modification capability of the ideal-world adversary, where e indicates the range of selective jamming, and tar ∈ L.
When reading a message msg_in = (MSG, id, e) from out*_j, machine C′ operates differently from C. C′ determines the set of nodes which receive the message by calculating V_e ⊆ V such that for all v ∈ V_e: e_{v*_j,v} ≤ e. Finally, C′ processes msg_in as follows.
1. if 1 ≤ id ≤ id_C′, then C′ searches for the message msg = (MSG, id′, sndr, cont, dest) in S such that id′ equals id, and C′ writes
msg_out = (sndr, cont, dest) to the input tapes of machines corresponding to honest nodes in V_e
msg_out = (MSG, id′, sndr, cont, dest) to the input tapes of machines corresponding to adversarial nodes in V_e \ {v*_j}
2. otherwise C′ discards msg_in
When reading a message msg_in = (JAM, e) or msg_in = (DEL, tar, e) from out*_j, machine C′ operates the same way as C does in the case of the corresponding message formats. (A minimal sketch of this identifier-based redelivery appears in code after the description of machine A′ below.)
Machine A′. It has output tapes out*_j and input tapes in*_j. The format of messages on input tape in*_j must be msg_in = (MSG, id, sndr, cont, e, dest), where dest ∈ L ∪ {∗}. The format of output messages msg_out can be
(MSG, id, e), where id is a message identifier and e indicates the transmission range of the message identified by id;
(JAM, e), where e indicates the range of jamming;
(DEL, tar, e), where e indicates the range of selective jamming, and tar ∈ L.
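The essential restriction of the ideal-world channel, namely that A′ can only ask C′ to re-deliver messages that C′ has already recorded, can be captured in a few lines. The class below is our own simplification of the set S and the counter id_C′.

```python
from typing import Dict, Optional, Tuple

class IdealChannelStore:
    """Minimal sketch of C''s internal store: the adversary refers to messages only by id."""

    def __init__(self) -> None:
        self.next_id = 1                      # id_C' starts at 1
        self.store: Dict[int, Tuple] = {}     # the set S, indexed by message identifier

    def record(self, sndr, cont, dest) -> int:
        """Called when C' forwards an honest message to A': store it and hand out its id."""
        msg_id = self.next_id
        self.store[msg_id] = (sndr, cont, dest)
        self.next_id += 1
        return msg_id

    def redeliver(self, msg_id: int) -> Optional[Tuple]:
        """Called on (MSG, id, e) from A': only previously stored messages can be re-sent,
        so the ideal-world adversary cannot modify contents or inject new messages."""
        if 1 <= msg_id < self.next_id:
            return self.store[msg_id]
        return None   # C' discards the request
```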
The computation ends when all machines M_i reach their final states, or there is a time-out. Similar to the real-world model, the output of sys^ideal_{conf,A′} is the value of the security objective function F applied to the resulting routing topology and configuration conf. The routing topology is represented by the ensemble of the routing entries of machines M_i. We denote the output by Out^{ideal,F}_{conf,A′}(r), where r is the random input of the model. Moreover, Out^{ideal,F}_{conf,A′} will denote the random variable describing Out^{ideal,F}_{conf,A′}(r) when r is chosen uniformly at random.
2.5 Definition of secure routing
Let us denote the security parameter of the model by κ (e.g., κ is the key length of the cryptographic primitive employed in the routing protocol, such as a digital signature, MAC, etc.). Based on the model described in the previous subsections, we define routing security as follows:
Definition 1 (Statistical security). A routing protocol is statistically secure with security objective function F, if for any configuration conf and any real-world adversary A, there exists an ideal-world adversary A′, such that Out^{real,F}_{conf,A} is statistically indistinguishable from Out^{ideal,F}_{conf,A′}. Two random variables are statistically indistinguishable if the L_1 distance of their distributions is a negligible function of the security parameter κ.
Intuitively, if a routing protocol is statistically secure, then any system using this routing protocol fails to satisfy its security objective, represented by function F, only with a probability that is a negligible function of κ. This negligible probability is related to the fact that the adversary can always forge the cryptographic primitives (e.g., generate a valid digital signature) with a very small probability depending on the value of κ.
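In an experimental setting, the definition suggests comparing the empirical distributions of Out^{real,F} and Out^{ideal,F} over many random inputs r. The sketch below computes the L_1 distance between two such empirical distributions; the sampling-based estimation and the placeholder names run_real, run_ideal, and samples in the usage comment are our own illustration, not part of the definition.

```python
from collections import Counter
from typing import Hashable, Iterable

def l1_distance(real_outputs: Iterable[Hashable], ideal_outputs: Iterable[Hashable]) -> float:
    """Empirical L_1 distance between the output distributions of the two models."""
    real, ideal = Counter(real_outputs), Counter(ideal_outputs)
    n_real, n_ideal = sum(real.values()), sum(ideal.values())
    support = set(real) | set(ideal)
    return sum(abs(real[x] / n_real - ideal[x] / n_ideal) for x in support)

# A routing protocol is suspect if this distance stays non-negligible as the security
# parameter grows, e.g. (run_real, run_ideal and samples are placeholders):
# l1_distance([F(run_real(r)) for r in samples], [F(run_ideal(r)) for r in samples])
```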
INSECURITY OF TINYOS ROUTING
In this section, we present an authenticated routing mechanism
based on the well-known TinyOS routing, and we
show that it is not secure in our model for a given security
objective function representing a very minimal security
requirement.
3.1 Operation of an authenticated routing protocol
Originally, the authors of TinyOS implemented a very
simple routing protocol, where each node uses a globally
unique identifier. The base station periodically initiates a routing topology discovery by flooding the network with a beacon message. Upon reception of the first beacon within a single beaconing interval, each sensor node stores the identifier of the node from which it received the beacon as its parent (aka. next-hop towards the base station), and then re-broadcasts the beacon after changing the sender identifier to its own identifier. As only one parent is stored for each node, the resulting routing topology is a tree. Every sensor node receiving a data packet forwards it towards the base station by sending the packet to its parent.
A
lightweight cryptographic extension is employed in [14] in
order to authenticate the beacon by the base station. This
authenticated variant of TinyOS routing uses the TESLA scheme to provide integrity for the beacon; each key is disclosed by the next beacon in the subsequent beaconing interval. We remark that this protocol has only been defined informally; it inspired us to present a new protocol, which provides the "same" security as the authenticated routing protocol in [14], but which, due to its simplicity, is better suited to demonstrating the usage of our model. Consequently, the presented attack
against this new protocol also works against the protocol in
[14]. We must note again that this protocol is only intended
to present the usefulness of our model rather than to be
considered as a proposal of a new sensor routing protocol.
We assume that the base station B has a public-private
key pair, where the public key is denoted by K_pub. Furthermore, it is assumed that each sensor node is also deployed with K_pub, and that the nodes are capable of performing digital signature verification with K_pub as well as of storing some beacons in their internal memory. We note that B never relays messages between sensor nodes.
Initially, B creates a beacon that contains a constant message identifier BEACON, a randomly generated number rnd, the identifier of the base station Id_B, and a digital signature sig_B generated on the previous elements except Id_B. Afterwards, the base station floods the network by broadcasting this beacon:

B → ∗ : msg_1 = (BEACON, rnd, Id_B, sig_B)
Each sensor node X receiving msg_1 checks whether it has already received a beacon with the same rnd in conjunction with a correct signature before. If so, the node discards msg_1; otherwise it verifies sig_B. If the verification is successful, then X sets Id_B as its parent, stores msg_1 in its internal memory, and re-broadcasts the beacon after changing the sender identifier Id_B to its own identifier Id_X:

X → ∗ : msg_2 = (BEACON, rnd, Id_X, sig_B)
If the signature verification is unsuccessful, then X discards msg_1. Every sensor node receiving msg_2 performs the same steps that X performed before.
Optionally, B can initiate this topology construction periodically
by broadcasting a new beacon with a different rnd. In the rest of the paper, we refer to this protocol as ABEM
(Authenticated Beaconing Mechanism).
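A compact executable sketch of a sensor node's beacon handling in ABEM is given below. The signature scheme is left abstract as a verify callback, and the class and field names are our own; the sketch is only meant to make the protocol steps above concrete.

```python
from typing import Callable, Optional, Set, Tuple

Beacon = Tuple[str, int, str, bytes]   # (BEACON, rnd, sender_id, sig_B)

class AbemNode:
    """Sketch of a sensor node's beacon handling in ABEM.

    verify(rnd, sig) should check sig_B with K_pub over the signed elements
    (BEACON, rnd); the concrete signature scheme is left abstract here.
    """

    def __init__(self, my_id: str, verify: Callable[[int, bytes], bool]) -> None:
        self.my_id = my_id
        self.verify = verify
        self.parent: Optional[str] = None
        self.seen_rnd: Set[int] = set()      # beacons already accepted and stored in memory

    def on_beacon(self, beacon: Beacon) -> Optional[Beacon]:
        """Process an incoming beacon; return the beacon to re-broadcast, if any."""
        tag, rnd, sender, sig = beacon
        if tag != "BEACON" or rnd in self.seen_rnd:
            return None                      # duplicate within this beaconing interval: discard
        if not self.verify(rnd, sig):
            return None                      # invalid signature: discard
        self.parent = sender                 # first authentic beacon wins
        self.seen_rnd.add(rnd)
        # re-broadcast with our own identifier as the sender, signature unchanged
        return ("BEACON", rnd, self.my_id, sig)
```

Note that sig_B covers only the elements other than the sender identifier, so the sender field can be replaced without invalidating the signature; this is exactly what is exploited in the attack formalized next.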
3.2 Formalization of a simple attack
A simple security objective is to guarantee the correctness of all routing entries in the network. Namely, it is desirable that a sender node v_i is always able to reach node v_j, if v_i set L(v_j) as its parent identifier earlier. This means that if node v_i sets node L(v_j) as its parent identifier, then E_{i,j} should contain a finite value, or v_i as well as v_j should have an adversarial neighboring node, v*_{ℓ_1} and v*_{ℓ_2} respectively, such that E_{i,k+ℓ_1} and E_{k+ℓ_2,j} are finite values, where 1 ≤ ℓ_1, ℓ_2 ≤ m and ℓ_1 = ℓ_2 may hold.
In order to formalize this minimal security requirement, we introduce the following security objective function:

F_ABEM(conf, T) = 1, if for all i, j: T_{i,j} · E'_{i,j} · ( Π_{ℓ=1}^{m} E'_{i,k+ℓ} + Π_{ℓ=1}^{m} E'_{k+ℓ,j} ) = 0, and
F_ABEM(conf, T) = 0, otherwise,

where we derive the matrix E' of size n × n from E such that E'_{i,j} = 1 if E_{i,j} = ∞, and E'_{i,j} = 0 otherwise. In other words, E'_{i,j} = 1 if v_i cannot send a message directly to v_j, and E'_{i,j} = 0 otherwise.
We will show that ABEM is not secure in our model for security objective function F_ABEM. In particular, we present a configuration conf and an adversary A for which there does not exist any ideal-world adversary A′ such that Out^{real,F_ABEM}_{conf,A} is statistically indistinguishable from Out^{ideal,F_ABEM}_{conf,A′}. Equivalently, we show that for a real-world adversary A, F_ABEM(conf, T) = 0 with a probability that is a non-negligible function of κ in the real-world model, while F_ABEM(conf, T′) = 0 with probability zero for every ideal-world adversary A′ in the ideal-world model, where T′ describes the routing topology in the ideal-world model. Moreover, the success probability of the real-world adversary A described below is independent of κ.
Figure 3: A simple attack against ABEM. v_0, v_1, and v_2 are honest nodes with identifiers L(v_0) = B, L(v_1) = X, and L(v_2) = Y, whereas v*_1 (= v_3) is an adversarial node. E_{1,0}, E_{3,0}, and E_{2,3} are finite values, and E_{3,1} = E_{2,0} = E_{2,1} = ∞. Links are assumed to be symmetric, i.e., E_{i,j} = E_{j,i}. The configuration is illustrated on the left-hand side, where a dashed line denotes a direct link. In the routing topology of the real-world model, on the right-hand side, v_2 sets X as its parent identifier; however, E_{2,1} = ∞ and E_{3,1} = ∞.
The configuration conf and the result of the attack are depicted in Figure 3. We assume that the base station broadcasts only a single beacon during the computational process, i.e., only a single beaconing interval is analyzed in our model. At the beginning, the base station B floods the network with a beacon:

B → ∗ : msg_1 = (BEACON, rnd, B, sig_B)
Both the adversarial node v*_1 and the honest node X receive this beacon, and X sets B as its parent, since the verification of the signature is successful. X modifies the beacon by replacing the sender identifier B with X, and broadcasts the resulting beacon:

X → ∗ : msg_2 = (BEACON, rnd, X, sig_B)

In parallel, v*_1 modifies the beacon by replacing the sender identifier B with X, and broadcasts the resulting beacon:

v*_1 → ∗ : msg_2 = (BEACON, rnd, X, sig_B)
Upon the reception of msg_2, node Y sets X as its parent, since sig_B is correct.
In the real-world model, these actions result in T_{2,1} = 1, which implies that F_ABEM(conf, T) = 0. On the contrary, F_ABEM(conf, T′) never equals 0, where T′ represents the routing topology in the ideal-world model. Let us assume that F_ABEM(conf, T′) = 0, which means that T′_{1,2} = 1 or T′_{2,1} = 1. T′_{1,2} = 1 is only possible if X receives

msg_3 = (BEACON, rnd, Y, sig_B)

However, this yields a contradiction, since E_{3,1} = E_{2,1} = ∞, and B never broadcasts msg_3. Similarly, if T′_{2,1} = 1 then Y must receive msg_2, which means that v*_1 must broadcast msg_2. Conversely, B never broadcasts msg_2, and E_{3,1} = ∞. Therefore, v*_1 can only broadcast msg_2 if he successfully modifies msg_1 or forges msg_2. However, this also contradicts our assumption that the ideal-world adversary cannot modify and inject messages in the ideal-world model.
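The claim that the real-world routing topology of Figure 3 violates the security objective can also be checked mechanically. The sketch below implements F_ABEM for this configuration (k = 2 honest nodes plus the base station, m = 1 adversarial node), with the reachability values taken from the caption of Figure 3; using 1.0 to stand for "any finite energy level" is our own simplification.

```python
from itertools import product

INF = float("inf")
k, m = 2, 1                  # honest nodes v_1 (X), v_2 (Y); adversarial node v_3 = v*_1
n = k + m + 1

# Reachability matrix E from Figure 3 (symmetric links); 1.0 stands for any finite energy level.
E = [[0.0, 1.0, INF, 1.0],
     [1.0, 0.0, INF, INF],
     [INF, INF, 0.0, 1.0],
     [1.0, INF, 1.0, 0.0]]

Eb = [[1 if E[i][j] == INF else 0 for j in range(n)] for i in range(n)]   # the derived matrix E'

def f_abem(T):
    """F_ABEM(conf, T): 1 iff every routing entry T_{i,j} = 1 is explainable by a direct
    link or by adversarial neighbours on both sides, 0 otherwise."""
    for i, j in product(range(k + 1), repeat=2):
        prod_i, prod_j = 1, 1
        for ell in range(1, m + 1):
            prod_i *= Eb[i][k + ell]
            prod_j *= Eb[k + ell][j]
        if T[i][j] * Eb[i][j] * (prod_i + prod_j) != 0:
            return 0
    return 1

# Real-world topology after the attack: Y (= v_2) sets X (= v_1) as its parent.
T_real = [[0, 0, 0],
          [1, 0, 0],
          [0, 1, 0]]
print(f_abem(T_real))   # 0: the entry T_{2,1} = 1 cannot be explained, so the objective fails
```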
RELATED WORK
In [10], the authors map some adversary capabilities and
some feasible attacks against routing in wireless sensor networks
, and they define routing security implicitly as resistance
to (some of) these attacks.
Hence, the security of
sensor routing is only defined informally, and the countermeasures
are only related to specific attacks. In this way, we
cannot even compare sensor routing protocols in terms of
security. Another problem with this approach is the lack of a
formal model, where the security of sensor routing can be described
in a precise and rigorous way. While secure messaging
and key-exchange protocols are classical and well-studied
problems in traditional networks [3, 15], formal modelling of
secure routing in sensor networks has not been considered
so far.
The adversarial nodes are also classified into the
groups of sensor-class and laptop-class nodes in [10], but
the capabilities of an adversarial node regarding message
manipulations are not discussed.
The simulation paradigm is described in [15, 5]. These
models were mainly proposed with wired networks in mind, typically implemented on the well-known Internet architecture, and they do not focus on the wireless context. In our opinion, the multi-hop nature of communications is an inherent characteristic of wireless sensor networks; therefore, it should be explicitly modelled. In particular, the broadcast nature of communication enables a party to overhear the transmission of a message that was not destined to him; however, this transmission can be received only in a
certain range of the sender. The size of this range is determined
by the power at which the sender sent the message.
Another deviation from [15] is the usage of the security objective
function in the definition of security. In [15], the
indistinguishability is defined on the view of the honest parties
(on their input, states, and output) in the ideal-world
and in the real-world models. However, an adversary can
distort the states of the honest parties in unavoidable ways,
and hence, the classical definition would be too strong and
no routing protocol would satisfy it. On the other hand,
our model is compliant with [15] considering high-level connections
between nodes. In [15], the standard cryptographic
system allows us to define each high-level connection as secure
(private and authentic), authenticated (only authentic
), and insecure (neither private nor authentic). In this
taxonomy, the communication channel between two honest
nodes can be either insecure or secure in our model. If an
adversarial node is placed in the communication range of
one of the communicating nodes, then it is considered to
be an insecure channel. If the adversary can reach none of
the communicating nodes, the channel between those nodes
is hidden from the adversary, and thus, it is considered to
be secure.
Although some prior works [18, 12] also used formal techniques
to model the security of multi-hop routing protocols
, these were mainly proposed for ad hoc routing.
Moreover, the model proposed in [12] is based on CPAL-ES,
and the model in [18] is similar to the strand spaces model.
Both of these formal techniques differ from the simulation
paradigm.
Our work is primarily based on [4, 1]. Here, the authors
also use the simulation paradigm to prove the security of
routing protocols in wireless ad-hoc networks. However, our
model differs from the models in [4, 1] in two ways:
Adversary model: The adversary in [4] and [1] is assumed
to have the same resources and communication
capabilities as an ordinary node in the network.
Therefore, that adversary model deviates from the so-called
Dolev-Yao model in [6]. In our work, the adversary
also uses wireless devices to attack the systems,
and it is reasonable to assume that the adversary can
interfere with communications only within its power
range. The adversarial nodes belonging to the sensor-class
nodes have the same resources and communication
capabilities as an ordinary sensor node, but a more resourced
adversarial node (e.g., laptops) may affect the
overall communication of an entire part of the network
depending on the power range of the resourced adversarial
device. Such resourced devices also enable the adversary to perform more sophisticated message
manipulations.
Modelling security objectives: In ad hoc networks,
nodes construct routes between a source and a destination
[13, 8], whereas sensor nodes should build a complete
routing topology for the entire network. In case
of sensor networks, the only destination for all nodes is
the base station [9]. In addition, sensor nodes are resource
constrained, which implies that we also need to
model the energy consumption of sensor nodes, since
several attacks impact the network lifetime. These differences from ad hoc networks have yielded a wide range of sensor applications, and thus, sensor routing protocols [9] are much more diverse than ad hoc routing
protocols.
Hence, the security objectives cannot be
modelled uniformly for sensor routing protocols.
CONCLUSION
In this paper, we proposed a formal security model for
routing protocols in wireless sensor networks. Our model is
based on the well-known simulation paradigm, but it differs
from previously proposed models in several important aspects
. First of all, the adversary model is carefully adapted
to the specific characteristics of wireless sensor networks. In
our model, the adversary is not all-powerful, but it can only
interfere with communications within its own radio range. A
second important contribution is that we defined the output
of the dynamic models that represent the ideal and the real
operations of the system as a suitable function of the routing
state of the honest nodes, instead of just using the routing
state itself as the output. We expect that this will allow
us to model different types of routing protocols in a common
framework. In addition, this approach hides the unavoidable
distortions caused by the adversary in the routing
state, and in this way, it makes our definition of routing security
satisfiable. As an illustrative example, we considered
an authenticated version of the TinyOS beaconing routing
protocol, and we showed how an attack against this protocol
can be represented in our formal model.
As we mentioned in the Introduction, this paper is a work-in-progress paper.
In particular, we have presented neither
a new secure routing protocol designed with the help
of our formal model, nor a detailed security proof carried
out within our model. These are left for future study. We
must note, however, that the generality of the simulation
paradigm and the fact that we could represent a known
attack against the authenticated TinyOS protocol in our
model make us confident that we are on the right track.
ACKNOWLEDGEMENTS
The work described in this paper is based on results of IST FP6 STREP UbiSec&Sens (http://www.ist-ubisecsens.org). UbiSec&Sens receives research funding from the European Community's Sixth Framework Programme. Apart from this, the European Commission has no responsibility for the content of this paper. The information in this document is provided as is and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. The work presented in this paper has also been partially supported by the Hungarian Scientific Research Fund (contract number T046664). The first author has been further supported by the HSN Lab. The second author has been supported by the Hungarian Ministry of Education (BÖ2003/70).
REFERENCES
[1] G. Ács, L. Buttyán, and I. Vajda. Provable Security of
On-Demand Distance Vector Routing in Wireless Ad
Hoc Networks. In Proceedings of the Second European
Workshop on Security and Privacy in Ad Hoc and
Sensor Networks (ESAS 2005), July 2005.
[2] G. Ács, L. Buttyán, and I. Vajda. Provably Secure
On-demand Source Routing in Mobile Ad Hoc
Networks. To appear in IEEE Transactions on Mobile
Computing.
[3] M. Bellare, R. Canetti, and H. Krawczyk. A modular
approach to the design and analysis of authentication
and key exchange protocols. In Proceedings of the
ACM Symposium on the Theory of Computing, 1998.
[4] L. Buttyán and I. Vajda. Towards provable security
for ad hoc routing protocols. In Proceedings of the
ACM Workshop on Security in Ad Hoc and Sensor
Networks (SASN), October 2004.
[5] R. Canetti. Universally composable security: A new
paradigm for cryptographic protocols. In Proceedings
of the 42nd IEEE Symposium on Foundations of
Computer Science (FOCS), 2001.
[6] D. Dolev and A. C. Yao. On the Security of Public
Key Protocols. In IEEE Transactions on Information
Theory 29 (2), 1983.
[7] IEEE Standard for Information
technology--Telecommunications and information
exchange between systems--Local and metropolitan
area networks--Specific requirements. Part 15.4:
Wireless Medium Access Control (MAC) and Physical
Layer (PHY) Specifications for Low-Rate Wireless
Personal Area Networks (LR-WPANs), 2003.
[8] D. Johnson and D. Maltz. Dynamic source routing in
ad hoc wireless networks. In Mobile Computing, edited
by Tomasz Imielinski and Hank Korth, Chapter 5,
pages 153-181. Kluwer Academic Publisher, 1996.
[9] J. N. Al-Karaki and A. E. Kamal. Routing techniques
in wireless sensor networks: a survey. In IEEE
Wireless Communications, Volume 11, pp. 6-28, 2004.
[10] C. Karlof, D. Wagner. Secure routing in wireless
sensor networks: attacks and countermeasures. In Ad
Hoc Networks, Volume 1, 2003.
[11] Q. Li, J. Aslam, and D. Rus. Hierarchical
Power-aware Routing in Sensor Networks. In
Proceedings of the DIMACS Workshop on Pervasive
Networking, May, 2001.
[12] J. Marshall. An Analysis of the Secure Routing
Protocol for mobile ad hoc network route discovery:
using intuitive reasoning and formal verification to
identify flaws. MSc thesis, Department of Computer
Science, Florida State University, April 2003.
[13] C. Perkins and E. Royer. Ad hoc on-demand distance
vector routing. In Proceedings of the IEEE Workshop
on Mobile Computing Systems and Applications, pp.
90-100, February 1999.
[14] A. Perrig, R. Szewczyk, V. Wen, D. Culler, J. D.
Tygar. SPINS: Security Protocols for Sensor
Networks. In Wireless Networks Journal (WINE),
Volume 8, September 2002.
[15] B. Pfitzmann and M. Waidner. A model for
asynchronous reactive systems and its application to
secure message transmission. In Proceedings of the
22nd IEEE Symposium on Security & Privacy, 2001.
[16] S. Singh, M. Woo, and C. Raghavendra. Power-Aware
Routing in Mobile Ad Hoc Networks. In Proceedings
of the Fourth Annual ACM/IEEE International
Conference on Mobile Computing and Networking
(MobiCom '98), Oct. 1998.
[17] W. Xu, W. Trappe, Y. Zhang and T. Wood. The
Feasibility of Launching and Detecting Jamming
Attacks in Wireless Networks. In Proceedings of the
6th ACM International Symposium on Mobile Ad Hoc
Networking and Computing (MobiHoc'05), May 2005.
[18] S. Yang and J. Baras. Modeling vulnerabilities of ad
hoc routing protocols. In Proceedings of the ACM
Workshop on Security of Ad Hoc and Sensor
Networks, October 2003.
| Simulatability;Adversary Model;Routing Protocols;Sensor Networks;Provable Security |
142 | Obfuscated Databases and Group Privacy | We investigate whether it is possible to encrypt a database and then give it away in such a form that users can still access it, but only in a restricted way. In contrast to conventional privacy mechanisms that aim to prevent any access to individual records, we aim to restrict the set of queries that can be feasibly evaluated on the encrypted database. We start with a simple form of database obfuscation which makes database records indistinguishable from lookup functions . The only feasible operation on an obfuscated record is to look up some attribute Y by supplying the value of another attribute X that appears in the same record (i.e., someone who does not know X cannot feasibly retrieve Y ). We then (i) generalize our construction to conjunctions of equality tests on any attributes of the database, and (ii) achieve a new property we call group privacy. This property ensures that it is easy to retrieve individual records or small subsets of records from the encrypted database by identifying them precisely, but "mass harvesting" queries matching a large number of records are computationally infeasible. Our constructions are non-interactive. The database is transformed in such a way that all queries except those ex-plicitly allowed by the privacy policy become computationally infeasible, i.e., our solutions do not rely on any access-control software or hardware. | INTRODUCTION
Conventional privacy mechanisms usually provide all-or-nothing
privacy. For example, secure multi-party computation
schemes enable two or more parties to compute some
joint function while revealing no information about their respective
inputs except what is leaked by the result of the
computation. Privacy-preserving data mining aims to com-pletely
hide individual records while computing global statistical
properties of the database. Search on encrypted data
and private information retrieval enable the user to retrieve
data from an untrusted server without revealing the query.
In this paper, we investigate a different concept of privacy.
Consider a data owner who wants to distribute a database to
potential users. Instead of hiding individual data entries, he
wants to obfuscate the database so that only certain queries
can be evaluated on it, i.e., the goal is to ensure that the
database, after it has been given out to users, can be accessed
only in the ways permitted by the privacy policy.
Note that there is no interaction between the data owner and
the user when the latter accesses the obfuscated database.
Our constructions show how to obfuscate the database
before distributing it to users so that only the queries permitted
by the policy are computationally feasible. This concept
of privacy is incomparable to conventional definitions
because, depending on the policy, a permitted query may or
even should reveal individual data entries.
For example, a college alumni directory may be obfuscated
in such a way that someone who already knows a person's
name and year of graduation is able to look up that person's
email address, yet spammers cannot indiscriminately
harvest addresses listed in the directory. Employees of a
credit bureau need to have access to customers' records so
that they can respond to reports of fraudulent transactions,
yet one may want to restrict the bureau's ability to compile
a list of customers' addresses and sell it to a third party.
We develop provably secure obfuscation techniques for
several types of queries. We do not assume that users of the
obfuscated database access it through a trusted third party,
nor that they use trusted or "tamper-proof" access-control
software or hardware (in practice, such schemes are vulnerable
to circumvention and reverse-engineering, while trusted
third parties are scarce and often impractical). Our constructions
are cryptographically strong, i.e., they assume an
adversary who is limited only by his computational power.
We prove security in the standard "virtual black-box"
model for obfuscation proposed by Barak et al. [2]. Intuitively
, a database is securely obfuscated if the view of any
efficient adversary with access to the obfuscation can be efficiently
simulated by a simulator who has access only to
the ideal functionality, which is secure by definition. The
ideal functionality can be thought of as the desired privacy
policy for the database. One of our contributions is coming
up with several ideal functionalities that capture interesting
privacy policies for databases.
Directed-access databases.
Our "warm-up" construction
is a directed-access database. Some attributes of the
database are designated as query attributes, and the rest
as data attributes. The database is securely obfuscated if,
for any record, it is infeasible to retrieve the values of the
data attributes without supplying the values of the query
attributes, yet a user who knows the query attributes can
easily retrieve the corresponding data attributes.
To illustrate by example, a directed-access obfuscation of
a telephone directory has the property that it is easy to
look up the phone number corresponding to a particular
name-address pair, but queries such as "retrieve all phone
numbers stored in the directory" or "retrieve all names"
are computationally infeasible. Such a directory is secure
against abusive harvesting, but still provides useful functionality
. Note that it may be possible to efficiently enumerate
all name-address pairs because these fields have less
entropy than regular cryptographic keys, and thus learn the
entire database through the permitted queries. Because the
database is accessed only in permitted ways, this does not
violate the standard definition of obfuscation. Below, we
give some examples where it is not feasible to enumerate all
possible values for query attributes.
The directed-access property of a single database record
can be modeled as a point function, i.e., a function from {0, 1}^n to {0, 1} that returns 1 on exactly one input x (in our case, query attributes are the arguments of the point function). Directed-access obfuscation guarantees that the
adversary's view of any obfuscated record can be efficiently
simulated with access only to this point function. Therefore
, for this "warm-up" problem, we can use obfuscation
techniques for point functions such as [22]. Informally, we
encrypt the data attributes with a key derived from hashed
query attributes. The only computationally feasible way to
retrieve the data attributes is to supply the corresponding
query attributes. If the retriever does not know the right
query attributes, no information can be extracted at all.
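As a toy illustration of the informal idea just described (and not the provably secure point-function obfuscation of [22]), the following sketch derives a per-record key by hashing the query attributes and uses it to mask the data attributes. The salt, the SHA-256 counter-mode keystream, and the "OK" marker used to recognize a successful lookup are all our own simplifications.

```python
import hashlib, os
from typing import Optional, Tuple

MAGIC = b"OK"   # lets the retriever recognize that the supplied query attributes were right

def _keystream(key: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def obfuscate_record(query_attrs: Tuple[str, ...], data_attrs: bytes, salt: bytes) -> bytes:
    """Encrypt the data attributes under a key derived from the hashed query attributes."""
    key = hashlib.sha256(salt + "|".join(query_attrs).encode()).digest()
    plaintext = MAGIC + data_attrs
    return salt + bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

def lookup(obfuscated: bytes, query_attrs: Tuple[str, ...]) -> Optional[bytes]:
    """Retrieval succeeds only if the right query attributes are supplied."""
    salt, body = obfuscated[:16], obfuscated[16:]
    key = hashlib.sha256(salt + "|".join(query_attrs).encode()).digest()
    plaintext = bytes(a ^ b for a, b in zip(body, _keystream(key, len(body))))
    return plaintext[len(MAGIC):] if plaintext.startswith(MAGIC) else None

record = obfuscate_record(("Alice Smith", "1998"), b"alice@example.org", os.urandom(16))
print(lookup(record, ("Alice Smith", "1998")))   # b'alice@example.org'
print(lookup(record, ("Bob Jones", "2001")))     # None (almost surely)
```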
Group-exponential databases.
We then consider a more
interesting privacy policy, which requires that computational
cost of access be exponential in the number of database
records retrieved. We refer to this new concept of privacy
as group privacy. It ensures that users of the obfuscated
database can retrieve individual records or small subsets of
records by identifying them precisely, i.e., by submitting
queries which are satisfied only by these records. Queries
matching a large number of records are infeasible.
We generalize the idea of directed access to queries consisting
of conjunctions of equality tests on query attributes,
and then to any boolean circuit over attribute equalities.
The user can evaluate any query of the form attribute_{j_1} = value_1 ∧ . . . ∧ attribute_{j_t} = value_t, as long as it is satisfied by
a small number of records. Our construction is significantly
more general than simple keyword search on encrypted data
because the value of any query attribute or a conjunction
thereof can be used as the "keyword" for searching the obfuscated
database, and the obfuscator does not need to know
what queries will be evaluated on the database.
To distinguish between "small" and "large" queries, suppose
the query predicate is satisfied by n records.
Our
construction uses a form of secret sharing that forces the
retriever to guess n bits before he can access the data attributes
in any matching record. (If n=1, i.e., the record is
unique, the retriever still has to guess 1 bit, but this simply
means that with probability 1/2 he has to repeat the query.)
The policy that requires the retriever to uniquely identify
a single record, i.e., forbids any query that is satisfied by
multiple records, can also be easily implemented using our
techniques. Our construction can be viewed as the noninteractive
analog of hash-reversal "client puzzles" used to
prevent denial of service in network security [21], but, unlike
client puzzles, it comes with a rigorous proof of security.
For example, consider an airline passenger database in
which every record contains the passenger's name, flight
number, date, and ticket purchase details. In our construction
, if the retriever knows the name and date that uniquely
identify a particular record (e.g., because this information
was supplied in a court-issued warrant), he (almost) immediately
learns the key that encrypts the purchase details in
the obfuscated record. If the passenger traveled on k flights
on that date, the retriever learns the key except for k bits.
Since k is small, guessing k bits is still feasible. If, however,
the retriever only knows the date and the flight number, he
learns the key except for m bits, where m is the number of
passengers on the flight, and retrieval of these passengers'
purchase details is infeasible.
A database obfuscated using our method has the group
privacy property in the following sense. It can be accessed
only via queries permitted by the privacy policy. The probability
of successfully evaluating a permitted query is inversely
exponential in the number of records that satisfy the
query predicate. In particular, to extract a large number of
records from the database, the retriever must know a priori
specific information that uniquely identifies each of the
records, or small subsets thereof. The obfuscated database
itself does not help him obtain this information.
In obfuscated databases with group privacy, computational
cost of access depends on the amount of information
retrieved. Therefore, group privacy can be thought of as a
step towards a formal cryptographic model for "economics
of privacy." It is complementary to the existing concepts of
privacy, and appears to be a good fit for applications such
as public directories and customer relationship management
(CRM) databases, where the database user may need to access
an individual record for a legitimate business purpose,
but should be prevented from extracting large subsets of
records for resale and abusive marketing.
While our constructions for group privacy are provably
secure in the "virtual black-box" sense of [2], the cost of
this rigorous security is a quadratic blowup in the size of the
obfuscated database, rendering the technique impractical for
large datasets. We also present some heuristic techniques to
decrease the size of the obfuscated database, and believe
that further progress in this area is possible.
Alternative privacy policies.
Defining rigorous privacy
policies that capture intuitive "database privacy" is an important
challenge, and we hope that this work will serve as
a starting point in the discussion. For example, the group
privacy policy that we use in our constructions permits the
retriever to learn whether a given attribute of a database
record is equal to a particular value. While this leaks more
information than may be desirable, we conjecture that the
privacy policy without this oracle is unrealizable.
We also consider privacy policies that permit any query
rather than just boolean circuits of equality tests on attributes
. We show that this policy is vacuous: regardless
of the database contents, any user can efficiently extract
the entire database by policy-compliant queries. Therefore,
even if the obfuscation satisfies the virtual black-box property
, it serves no useful purpose. Of course, there are many
types of queries that are more general than boolean circuits
of equality tests on attributes. Exact characterization of
non-vacuous, yet realizable privacy policies is a challenging
task, and a topic of future research.
Organization of the paper.
We discuss related work in
section 2. The ideas are illustrated with a "warm-up" construction
in section 3. In section 4, we explain group privacy
and the corresponding obfuscation technique. In section 5,
we generalize the class of queries to boolean circuits over attribute
equalities. In section 6, we show that policies which
permit arbitrary queries are vacuous, and give an informal
argument that a policy that does not allow the retriever to
verify his guesses of individual attribute values cannot be realized
. Conclusions are in section 7. All proofs will appear
in the full version of the paper.
RELATED WORK
This paper uses the "virtual black-box" model of obfuscation
due to Barak et al. [2]. In addition to the impossibility
result for general-purpose obfuscation, [2] demonstrates several
classes of circuits that cannot be obfuscated. We focus
on a different class of circuits.
To the best of our knowledge, the first provably secure
constructions for "virtual black-box" obfuscation were proposed
by Canetti et al. [5, 6] in the context of "perfectly
one-way" hash functions, which can be viewed as obfuscators
for point functions (a.k.a. oracle indicators or delta
functions). Dodis and Smith [15] recently showed how to
construct noise-tolerant "perfectly one-way" hash functions, which they used to obfuscate proximity queries with "entropic security." It is not clear how to apply techniques
of [15] in our setting. In section 6, we present strong evidence
that our privacy definitions may not be realizable if
queries other than equality tests are permitted.
Lynn et al. [22] construct obfuscators for point functions
(and simple extensions, such as public regular expressions
with point functions as symbols) in the random oracle model.
The main advantage of [22] is that it allows the adversary
partial information about the preimage of the hash function,
i.e., secrets do not need to have high entropy. This feature
is essential in our constructions, too, thus we also use the
random oracle model. Wee [27] proposed a construction for
a weaker notion of point function obfuscation, along with
the impossibility result for uniformly black-box obfuscation.
This impossibility result suggests that the use of random
oracles in our proofs (in particular, the simulator's ability
to choose the random oracle) is essential.
Many ad-hoc obfuscation schemes have been proposed in
the literature [1, 10, 9, 12, 13, 11]. Typically, these schemes
contain neither a cryptographic definition of security, nor
proofs, except for theoretical work on software protection
with hardware restrictions on the adversary [19, 20].
Forcing the adversary to pay some computational cost for
accessing a resource is a well-known technique for preventing
malicious resource exhaustion (a.k.a. denial of service
attacks). This approach, usually in the form of presenting
a puzzle to the adversary and forcing him to solve it, has
been proposed for combating junk email [16], website metering
[17], prevention of TCP SYN flooding attacks [21],
protecting Web protocols [14], and many other applications.
Puzzles based on hash reversal, where the adversary must
discover the preimage of a given hash value where he already
knows some of the bits, are an especially popular technique
[21, 14, 26], albeit without any proof of security. Our
techniques are similar, but our task is substantially harder
in the context of non-interactive obfuscation.
The obfuscation problem is superficially similar to the
problem of private information retrieval [3, 8, 18] and keyword
search on encrypted data [25, 4]. These techniques are
concerned, however, with retrieving data from an untrusted
server, whereas we are concerned with encrypting the data
and then giving them away, while preserving some control
over what users can do with them.
A recent paper by Chawla et al. [7] also considers database
privacy in a non-interactive setting, but their objective is
complementary to ours. Their definitions aim to capture privacy
of data, while ours aim to make access to the database
indistinguishable from access to a certain ideal functionality.
DIRECTED-ACCESS DATABASES
As a warm-up example, we show how to construct directed-access
databases in which every record is indistinguishable
from a lookup function. The constructions and theorems in
this section are mainly intended to illustrate the ideas.
Let X be a set of tuples x, Y a set of tuples y, and Y_⊥ = Y ∪ {⊥}. Let D ⊆ X × Y_⊥ be the database. We want to obfuscate each record of D so that the only operation that a user can perform on it is to retrieve y if he knows x.
We use the standard approach in secure multi-party computation
, and formally define this privacy policy in terms of
an ideal functionality. The ideal functionality is an (imaginary
) trusted third party that permits only policy-compliant
database accesses. An obfuscation algorithm is secure if any
access to the obfuscated database can be efficiently simulated
with access only to the ideal functionality. This means
that the user can extract no more information from the obfuscated
database than he would be able to extract had all
of his accesses been filtered by the trusted third party.
Definition 1. (Directed-access privacy policy) For database D, define the corresponding directed-access functionality DA_D as the function that, for any input x ∈ X such that ⟨x, y_1⟩, . . . , ⟨x, y_m⟩ ∈ D, outputs {y_1, . . . , y_m}.
Intuitively, a directed-access database is indistinguishable
from a lookup function. Given the query attributes of an
individual record (x), it is easy to learn the data attributes (y), but the database cannot be feasibly accessed in any other way. In particular, it is not feasible to discover the value of y without first discovering a corresponding x. Moreover, it is not feasible to harvest all y values from the database without first discovering all values of x.
This definition does not say that, if set X is small, it is
infeasible to efficiently enumerate all possible values of x and stage a dictionary attack on the obfuscated database.
It does guarantee that even for this attack, the attacker is
unable to evaluate any query forbidden by the privacy policy.
In applications where X cannot be efficiently enumerated
(e.g., X is a set of secret keywords known only to some
users of the obfuscated database), nothing can be retrieved
from the obfuscated database by users who don't know the
keywords. Observe that x can contain multiple attributes, and thus multiple keywords may be required for access to y in the obfuscated database.
Directed-access databases are easy to construct in the random
oracle model, since lookup functionality is essentially
a point function on query attributes, and random oracles
naturally provide an obfuscation for point functions [22].
The obfuscation algorithm OB_da takes D and replaces every record ⟨x_i, y_i⟩ ∈ D with
⟨hash(r_i1 || x_i), hash(r_i2 || x_i) ⊕ y_i, r_i1, r_i2⟩
where r_i1 and r_i2 are random numbers, || is concatenation, and hash is a hash function implementing the random oracle.
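To make the construction concrete, here is a minimal Python sketch of the obfuscation step, assuming SHA-256 stands in for the random oracle and the data attributes are encrypted by XORing with a keystream expanded from the hash output; the names and record layout are illustrative, not part of the paper's specification.

    import hashlib, os

    def _expand(key: bytes, length: int) -> bytes:
        # expand a short hash output into a keystream (counter-mode hashing)
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:length]

    def obfuscate_record(x: bytes, y: bytes):
        # x = concatenated query attributes, y = data attributes; r1, r2 are fresh random values
        r1, r2 = os.urandom(16), os.urandom(16)
        tag = hashlib.sha256(r1 + x).digest()        # hash(r1 || x): lets the retriever verify x
        key = hashlib.sha256(r2 + x).digest()        # hash(r2 || x): key for the data attributes
        ct = bytes(a ^ b for a, b in zip(_expand(key, len(y)), y))
        return (tag, ct, r1, r2)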
Theorem 1. (Directed-access obfuscation is "virtual
black-box") Let OB
da
be as described above. For any
probabilistic polynomial-time adversarial algorithm A, there
exists a probabilistic polynomial-time simulator algorithm S
and a negligible function of the security parameter k such
that for any database D:
|P(A(OB
da
(D)) = 1) - P(S
DA
D
(1
|D|
) = 1)| (k)
where probability P is taken over random oracles (implemented
as hash functions), as well as the the randomness of
A
and S. Intuitively, this theorem holds because retrieving
y
i
requires finding the (partial) pre-image of hash(r
i
2
,
x
i
).
The standard definition of obfuscation in [2] also requires
that there exist an efficient retrieval algorithm that, given
some x, extracts the corresponding y from the obfuscation OB_da(D). Clearly, our construction has this property. Someone who knows x simply finds the record(s) in which the first value is equal to hash(r_i1 || x), computes hash(r_i2 || x) and uses it as the key to decrypt y.
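The matching retrieval step, under the same assumptions (and reusing the imports and the _expand helper from the sketch above), simply recomputes both hashes from the supplied query attributes:

    def retrieve(obfuscated_db, x: bytes):
        # returns the data attributes of every record whose query attributes equal x
        results = []
        for tag, ct, r1, r2 in obfuscated_db:
            if hashlib.sha256(r1 + x).digest() == tag:
                key = hashlib.sha256(r2 + x).digest()
                results.append(bytes(a ^ b for a, b in zip(_expand(key, len(ct)), ct)))
        return results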
GROUP-EXPONENTIAL DATABASES
For the purposes of this section, we restrict our attention
to queries P that are conjunctions of equality tests over
attributes (in section 5, we show how this extends to arbitrary
boolean circuits over equality tests). For this class of
queries, we show how to obfuscate the database so that evaluation
of the query is exponential in the size of the answer
to the query. Intuitively, this means that only precise query
predicates, i.e., those that are satisfied by a small number
of records, can be efficiently computed. "Mass harvesting"
queries, i.e., predicates that are satisfied by a large number
of records, are computationally infeasible.
Recall that our goal is to restrict how the database can
be accessed. For some databases, it may be possible to efficiently
enumerate all possible combinations of query attributes
and learn the entire database by querying it on every
combination. For databases where the values of query
attributes are drawn from a large set, our construction prevents
the retriever from extracting any records that he cannot
describe precisely. In either case, we guarantee that the
database can be accessed only through the interface permitted
by the privacy policy, without any trust assumptions
about the retriever's computing environment.
In our construction, each data attribute is encrypted with
a key derived from a randomly generated secret. We use a
different secret for each record. The secret itself is split into
several (unequal) shares, one per each query attribute. Each
share is then encrypted itself, using a key derived from the
output of the hash function on the value of the corresponding
query attribute. If the retriever knows the correct value only
for some query attribute a, he must guess the missing shares.
The size of the revealed share in bits is inversely related to
the number of other records in the database that have the
same value of attribute a. This provides protection against
queries on frequently occurring attribute values.
4.1 Group privacy policy
We define the privacy policy in terms of an ideal functionality
, which consists of two parts. When given an index
of a particular query attribute and a candidate value, it responds
whether the guess is correct, i.e., whether this value
indeed appears in the corresponding attribute of the original
database. When given a predicate, it evaluates this predicate
on every record in the database. For each record on
which the predicate is true, it returns this record's data attributes with probability 2^{-q}, where q is the total number of records in the database that satisfy the predicate. No more information can be extracted from this ideal functionality.
Definition 2. (Group privacy policy) Let X be a set and Y a set of tuples. Let D be the database ⟨ρ_1, ρ_2, . . . , ρ_N⟩, where ρ_i = {x_i1, x_i2, . . . , x_im, y_i} ∈ X^m × Y. Let P : X^m → {0, 1} be a predicate of the form X_{j_1} = x_{j_1} ∧ X_{j_2} = x_{j_2} ∧ . . . ∧ X_{j_t} = x_{j_t}. Let D[P] = {ρ_i ∈ D | P(x_i1, x_i2, . . . , x_im) = 1} be the subset of records on which P is true.
The group-exponential functionality GP_D consists of two functions:
- C_D(x, i, j) is 1 if x = x_ij and 0 otherwise, where 1 ≤ i ≤ N, 1 ≤ j ≤ m.
- R_D(P) = ∪_{1≤i≤N} {⟨i, ψ_i⟩}, where
  ψ_i = y_i with probability 2^{-|D[P]|} if P(ρ_i)
      = ⊥ with probability 1 - 2^{-|D[P]|} if P(ρ_i)
      = ⊥ if ¬P(ρ_i)
Probability is taken over the internal coin tosses of GP_D.
Informally, function C answers whether the jth attribute of
the ith record is equal to x, while function R returns all
records that satisfy some predicate P, but only with probability
inversely exponential in the number of such records.
It may appear that function C is unnecessary. Moreover,
it leaks additional information, making our privacy policy
weaker than it might have been. In section 6, we argue
that it cannot be simply eliminated, because the resulting
functionality would be unrealizable. Of course, there may
exist policies that permit some function C′ which leaks less information than C, but it is unclear what C′ might be. We
discuss several alternatives to our definition in section 6.
We note that data attributes are drawn from a set of tuples
Y because there may be multiple data attributes that
need to be obfuscated. Also observe that we have no restrictions
on the values of query attributes, i.e., the same m-tuple of query attributes may appear in multiple records,
with different or identical data attributes.
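As a reading aid, the two functions of this ideal functionality can be rendered in a few lines of Python. This is the trusted-party behaviour itself, not the obfuscation; the record and predicate representations are illustrative assumptions.

    import random

    def C(db, x, i, j):
        # db[i] = (query_attrs, data_attrs); 1 iff the guess x for attribute j of record i is correct
        return 1 if db[i][0][j] == x else 0

    def R(db, predicate):
        # predicate maps a tuple of query attributes to True/False
        matching = sum(1 for attrs, _ in db if predicate(attrs))
        out = {}
        for i, (attrs, data) in enumerate(db):
            if predicate(attrs) and random.random() < 2 ** (-matching):
                out[i] = data
            else:
                out[i] = None    # stands for the bottom symbol
        return out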
4.2 Obfuscating the database
We now present the algorithm OB_gp, which, given any database D, produces its obfuscation. For notational convenience, we use a set of random hash functions H_ν : {0, 1}* → {0, 1}^k. Given any hash function H, these can be implemented simply as H(ν||x). To convert the k-bit hash function output into a key as long as the plaintext to which it is applied, we use a set of pseudo-random number generators prg_ν : {0, 1}^k → {0, 1}* (this implements random oracles with unbounded output length).
Let N be the number of records in the database. For
each row ρ_i, 1 ≤ i ≤ N, generate a random N-bit secret r_i = r_i1 || r_i2 || . . . || r_iN, where r_ij ∈_R {0, 1}. This secret will be used to protect the data attribute y_i of this record. Note that there is 1 bit in r_i for each record of the database.
Next, split r_i into m shares corresponding to query attributes. If the retriever can supply the correct value of the jth attribute, he will learn the jth share (1 ≤ j ≤ m). Denote the share corresponding to the jth attribute as s_ij. Shares are also N bits long, i.e., s_ij = s_ij1 || . . . || s_ijN.
Each of the N bits of s_ij has a corresponding bit in r_i, which in its turn corresponds to one of the N records in the database. For each p s.t. 1 ≤ p ≤ N, set s_ijp = r_ip if x_ij ≠ x_pj, and set s_ijp = 0 otherwise. In other words, the jth share s_ij consists of all bits of r_i except those corresponding to the records where the value of the jth attribute is the same. An example can be found in section 4.4.
The result of this construction is that shares corresponding to commonly occurring attribute values will be missing many bits of r_i, while a share corresponding to an attribute that uniquely identifies one record will contain all bits of r_i except one. Intuitively, this guarantees group privacy. If the retriever can supply query attribute values that uniquely identify a single record or a small subset of records, he will learn the shares that reveal all bits of the secret r_i except for a few, which he can easily guess. If the retriever cannot describe precisely what he is looking for and supplies attribute values that are common in the database, many of the bits of r_i will be missing in the shares that he learns, and guessing all of them will be computationally infeasible.
Shares corresponding to different query attributes may
overlap. For example, suppose that we are obfuscating a
database in which two records have the same value of attribute X_1 if and only if they have the same value of attribute X_2. In this case, for any record in the database, the share revealed if the retriever supplies the correct value of X_1 will be exactly the same as the share revealed if the retriever supplies the value of X_2. The retriever gains nothing by supplying X_2 in conjunction with X_1 because this does not help him narrow the set of records that match his query.
To construct the obfuscated database, we encrypt each
share with a pseudo-random key derived from the value of
the corresponding query attribute, and encrypt the data attribute
with a key derived from the secret r_i. More precisely, we replace each record ρ_i = ⟨x_i1, . . . , x_im, y_i⟩ of the original database with the obfuscated record
⟨v_i1, w_i1, v_i2, w_i2, . . . , v_im, w_im, u_i, z_i⟩
where
- v_ij = H_{1,i,j}(x_ij). This enables the retriever to verify that he supplied the correct value for the jth query attribute.
- w_ij = prg_{1,i,j}(H_{2,i,j}(x_ij)) ⊕ s_ij. This is the jth share of the secret r_i, encrypted with a key derived from the value of the jth query attribute.
- u_i = H_{3,i}(r_i). This enables the retriever to verify that he computed the correct secret r_i.
- z_i = prg_{2,i}(H_{4,i}(r_i)) ⊕ y_i. This is the data attribute y_i, encrypted with a key derived from the secret r_i.
Clearly, algorithm OB_gp runs in time polynomial in N (the size of the database). The size of the resulting obfuscation is N^2 m. Even though it is within a polynomial factor of N (and thus satisfies the definition of [2]), quadratic blowup
means that our technique is impractical for large databases.
This issue is discussed further in section 4.5.
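For concreteness, the following Python sketch follows the construction of this section under simplifying assumptions: labeled SHA-256 stands in for the indexed random oracles H, counter-mode hash expansion stands in for prg, and the N-bit secret r_i is stored one bit per byte. It is an illustration of the share layout, not a production implementation.

    import hashlib, os

    def H(label: str, data: bytes) -> bytes:
        # one hash function per label, implemented as H(label || data)
        return hashlib.sha256(label.encode() + b"|" + data).digest()

    def prg(seed: bytes, n: int) -> bytes:
        # expand a k-bit seed into an n-byte keystream (counter-mode hashing)
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def obfuscate(db):
        # db[i] = (query_attrs: tuple of str, data: bytes)
        N, out = len(db), []
        for i, (attrs, data) in enumerate(db):
            r = [os.urandom(1)[0] & 1 for _ in range(N)]             # N-bit secret, one bit per record
            r_bytes = bytes(r)
            rec = []
            for j, x in enumerate(attrs):
                # share s_ij: bit p of r_i is revealed only if record p differs on attribute j
                share = bytes(r[p] if db[p][0][j] != x else 0 for p in range(N))
                v = H(f"1,{i},{j}", x.encode())                      # lets the retriever verify his guess
                w = xor(prg(H(f"2,{i},{j}", x.encode()), N), share)  # share encrypted under the attribute value
                rec += [v, w]
            u = H(f"3,{i}", r_bytes)                                 # lets the retriever verify the secret
            z = xor(prg(H(f"4,{i}", r_bytes), len(data)), data)      # data attribute encrypted under r_i
            out.append((rec, u, z))
        return out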
We claim that OB_gp produces a secure obfuscation of D, i.e., it is not feasible to extract any more information from OB_gp(D) than permitted by the privacy policy GP_D.
Theorem 2. (Group-exponential obfuscation is "virtual black-box") For any probabilistic polynomial-time (adversarial) algorithm A, there exists a probabilistic polynomial-time simulator algorithm S and a negligible function ε of the security parameter k s.t. for any database D:
|P(A(OB_gp(D)) = 1) - P(S^{GP_D}(1^{|D|}) = 1)| ≤ ε(k)
Remark.
An improper implementation of the random oracles
in the above construction could violate privacy under
composition of obfuscation, i.e., when more than one
database is obfuscated and published. For instance, if the
hash of some attribute is the same in two databases, then
the adversary learns that the attributes are equal without
having to guess their value. To prevent this, the same hash
function may not be used more than once. One way to
achieve this is to pick H
i
(.) = H(r
i
||.) where r
i
R
{0, 1}
k
,
and publish r
i
along with the obfuscation. This is an example
of the pitfalls inherent in the random oracle model.
4.3 Accessing the obfuscated database
We now explain how the retriever can efficiently evaluate
queries on the obfuscated database. Recall that the privacy
policy restricts the retriever to queries consisting of conjunctions
of equality tests on query attributes, i.e., every query
predicate P has the form X_{j_1} = x_{j_1} ∧ . . . ∧ X_{j_t} = x_{j_t}, where j_1, . . . , j_t are some indices between 1 and m.
The retriever processes the obfuscated database record by
record. The ith record of the obfuscated database (1 ≤ i ≤ N) has the form ⟨v_i1, w_i1, v_i2, w_i2, . . . , v_im, w_im, u_i, z_i⟩. The retriever's goal is to compute the N-bit secret r_i so that he can decrypt the ciphertext z_i and recover the value of y_i.
First, the retriever recovers as many shares as he can from
the ith record. Recall from the construction of section 4.2
that each w_ij is a ciphertext of some share, but the only way to decrypt it is to supply the corresponding query attribute value x_ij. Let λ range over the indices of attributes supplied by the retriever as part of the query, i.e., λ ∈ {j_1, . . . , j_t}. For each λ, if H_{1,i,λ}(x_λ) = v_iλ, then the retriever extracts the corresponding share s_iλ = prg_{1,i,λ}(H_{2,i,λ}(x_λ)) ⊕ w_iλ. If H_{1,i,λ}(x_λ) ≠ v_iλ, this means that the retriever supplied the wrong attribute value, and he learns nothing about the corresponding share. Let S be the set of recovered shares.
Each recovered share s_λ ∈ S reveals only some bits of r_i, and, as mentioned before, bits revealed by different shares may overlap. For each p s.t. 1 ≤ p ≤ N, the retriever sets the corresponding bit r_ip of the candidate secret r_i as follows:
  r_ip = s_λp if ∃ s_λ ∈ S s.t. v_pλ ≠ H_{1,p,λ}(x_λ)
       = random otherwise
Informally, if at least one of the recovered shares s_λ contains the pth bit of r_i (this can be verified by checking that the value of the λth attribute is not the same in the pth record of the database -- see construction in section 4.2), then this bit is indeed equal to the pth bit of the secret r_i. Otherwise, the
retriever must guess the pth bit randomly.
Once a candidate r_i is constructed, the retriever checks whether H_{3,i}(r_i) = u_i. If not, the missing bits must have been guessed incorrectly, and the retriever has to try another choice for these bits. If H_{3,i}(r_i) = u_i, then the retriever decrypts the data attribute y_i = prg_{2,i}(H_{4,i}(r_i)) ⊕ z_i.
The obfuscation algorithm of section 4.2 guarantees that
the number of missing bits is exactly equal to the number of
records satisfied by the query P. This provides the desired
group privacy property. If the retriever supplies a query
which is satisfied by a single record, then he will only have
to guess one bit to decrypt the data attributes. If a query is
satisfied by two records, then two bits must be guessed, and
so on. For queries satisfied by a large number of records,
the number of bits to guess will be infeasibly large.
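The corresponding retrieval step can be sketched as follows, reusing H, prg and xor from the sketch above; instead of "guessing" the missing bits it simply brute-forces them, which is exactly the 2^(number of matching records) cost just described. The query is assumed to be given as a map from attribute index to candidate value.

    from itertools import product

    def retrieve(obf_db, query):
        # query: dict {attribute index j: candidate value}
        N, results = len(obf_db), []
        for i, (rec, u, z) in enumerate(obf_db):
            known = [None] * N                                    # candidate bits of r_i (None = must guess)
            for j, x in query.items():
                v, w = rec[2 * j], rec[2 * j + 1]
                if H(f"1,{i},{j}", x.encode()) != v:
                    continue                                      # wrong attribute value: the share stays hidden
                share = xor(prg(H(f"2,{i},{j}", x.encode()), N), w)
                for p in range(N):
                    # bit p is a true bit of r_i iff record p differs from the supplied value on attribute j
                    if H(f"1,{p},{j}", x.encode()) != obf_db[p][0][2 * j]:
                        known[p] = share[p]
            missing = [p for p in range(N) if known[p] is None]
            for guess in product([0, 1], repeat=len(missing)):    # cost 2^(number of matching records)
                cand = list(known)
                for p, b in zip(missing, guess):
                    cand[p] = b
                r_bytes = bytes(cand)
                if H(f"3,{i}", r_bytes) == u:
                    results.append((i, xor(prg(H(f"4,{i}", r_bytes), len(z)), z)))
                    break
        return results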
4.4 Example
Consider a toy airline passenger database with 4 records,
where the query attributes are "Last name" and "Flight,"
and the data attribute (in bold) is "Purchase details."
Last name   Flight   Purchase details
Smith       88       Acme Travel, Visa 4390XXXX
Brown       500      Airline counter, cash
Jones       88       Nonrevenue
Smith       1492     Travel.com, AmEx 3735XXXX
Because N = 4, we need to create a 4-bit secret to protect
each data attribute. (4-bit secrets can be easily guessed,
of course. We assume that in real examples N would be
sufficiently large, and use 4 records in this example only to
simplify the explanations.) Let α = α_1α_2α_3α_4 be the secret for the first data attribute, and β, γ, δ the secrets for the other data attributes, respectively.
For simplicity, we use a special symbol "?" to indicate
the missing bits that the retriever must guess. In the actual
construction, each of these bits is equal to 0, but the retriever
knows that he must guess the ith bit of the jth share if the
value of the jth attribute in the current record is equal to
the value of the jth attribute in the ith record.
Consider the first record. Each of the two query attributes,
"Last name" and "Flight," encrypts a 4-bit share. The share
encrypted with the value of the "Last name" attribute (i.e.,
"Smith") is missing the 1st and 4th bits because the 1st and
4th records in the database have the same value of this attribute
. (Obviously, all shares associated with the ith record have
the ith bit missing). The share encrypted with the value of
the "Flight" attribute is missing the 1st and 3rd bits.
⟨H_{1,1,1}("Smith"), prg_{1,1,1}(H_{2,1,1}("Smith")) ⊕ (? α_2 α_3 ?),
 H_{1,1,2}("88"), prg_{1,1,2}(H_{2,1,2}("88")) ⊕ (? α_2 ? α_4),
 H_{3,1}(α_1α_2α_3α_4), prg_{2,1}(H_{4,1}(α_1α_2α_3α_4)) ⊕ ("Acme. . . ")⟩

⟨H_{1,2,1}("Brown"), prg_{1,2,1}(H_{2,2,1}("Brown")) ⊕ (β_1 ? β_3 β_4),
 H_{1,2,2}("500"), prg_{1,2,2}(H_{2,2,2}("500")) ⊕ (β_1 ? β_3 β_4),
 H_{3,2}(β_1β_2β_3β_4), prg_{2,2}(H_{4,2}(β_1β_2β_3β_4)) ⊕ ("Airline. . . ")⟩

⟨H_{1,3,1}("Jones"), prg_{1,3,1}(H_{2,3,1}("Jones")) ⊕ (γ_1 γ_2 ? γ_4),
 H_{1,3,2}("88"), prg_{1,3,2}(H_{2,3,2}("88")) ⊕ (? γ_2 ? γ_4),
 H_{3,3}(γ_1γ_2γ_3γ_4), prg_{2,3}(H_{4,3}(γ_1γ_2γ_3γ_4)) ⊕ ("Nonrev. . . ")⟩
⟨H_{1,4,1}("Smith"), prg_{1,4,1}(H_{2,4,1}("Smith")) ⊕ (? δ_2 δ_3 ?),
 H_{1,4,2}("1492"), prg_{1,4,2}(H_{2,4,2}("1492")) ⊕ (δ_1 δ_2 δ_3 ?),
 H_{3,4}(δ_1δ_2δ_3δ_4), prg_{2,4}(H_{4,4}(δ_1δ_2δ_3δ_4)) ⊕ ("Travel.com. . . ")⟩
Suppose the retriever knows only that the flight number is
88. There are 2 records in the database that match this predicate
. From the first obfuscated record, he recovers ?α_2?α_4 and from the third obfuscated record, ?γ_2?γ_4. The retriever learns which bits he must guess by computing H_{1,i,2}("88") for 1 ≤ i ≤ 4, and checking whether the result is equal to v_i2 from the ith obfuscated record. In both cases, the retriever learns that he must guess 2 bits (1st and 3rd) in order to reconstruct the secret and decrypt the data attribute.
Now suppose the retriever knows that the flight number is 88 and the last name is Smith. There is only 1 record in the database that satisfies this predicate. From the first part of the first obfuscated record, the retriever can recover ?α_2α_3?, and from the second part ?α_2?α_4 (note how the shares overlap). Combining them, he learns ?α_2α_3α_4, so he
needs to guess only 1 bit to decrypt the data attribute.
It may appear that this toy example is potentially vulnerable
to a dictionary attack, since it is conceivable that all combinations
of last names and flight numbers can be efficiently
enumerated with enough computing power. Note, however,
that this "attack" does not violate the definition of secure
obfuscation because the retriever must supply the name-flight
pair before he can recover the purchase details. Therefore
, the obfuscated database is only accessed via queries
permitted by the privacy policy. In databases where values
are drawn from a large set, even this "attack" is infeasible.
4.5 Efficiency
The algorithm of section 4.2 produces obfuscations which are a factor of Θ(N) larger than original databases. Thus,
while our results establish feasibility of database obfuscation
and group privacy, they are not directly applicable to real-world
databases. This appears to be a recurring problem in
the field of database privacy: the cryptography community
has very strict definitions of security but loose notions of
efficiency (typically polynomial time and space), whereas the
database community has very strict efficiency requirements
but loose security (typically heuristic or statistical). As a
result, many proposed schemes are either too inefficient, or
too insecure for practical use.
A possible compromise might be to start with a provably
secure but inefficient construction and employ heuristic techniques
to improve its efficiency. In this spirit, we now propose
some modifications to reduce the size of the obfuscated
database without providing a security proof. The presentation
is informal due to lack of space; see the full version of
the paper for a more rigorous version.
The obfuscation algorithm is modified as follows. For each record i, we split r_i into N/k "blocks" of k bits each, padding the last block if necessary (k is the security parameter). Instead of generating the bits randomly, we create a binary tree of depth log(N/k). A key of length k is associated with each node of the tree, with the property that the two "children" keys are derived from the "parent" key (e.g., using a size-doubling pseudo-random generator). This is similar to a Merkle tree in which keys are derived in the reverse direction. The edge of the tree (minus the padding of the last block) is r_i.
Let us denote the jth attribute of the ith record by ⟨i, j⟩. Say that ⟨i, j⟩ is entitled to the secret bit r_{ii′} if x_ij ≠ x_{i′j}, and ⟨i, j⟩ is entitled to an entire block B if it is entitled to each secret bit r_{ii′} in that block. Intuitively, if an entire block is entitled, then we encode it efficiently using the "reverse Merkle" tree described above; if it is partially entitled, then we fall back on our original construction. Thus, let N_ij be the minimal set of nodes in the tree which are sufficient for reconstructing all entitled blocks (i.e., every entitled block has among its parents an element of N_ij), and only these blocks. Then the share s_ij consists of (a suitable encoding of) N_ij together with the remaining bits r_{ii′} to which ⟨i, j⟩ is entitled. These are the entitled bits from any block which also includes non-entitled bits.
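A minimal sketch of this key tree, assuming SHA-256 with distinct suffixes as the length-doubling generator: each parent key yields two child keys, the leaf keys (minus padding) play the role of the blocks of r_i, and a small helper computes a covering node set for the entitled blocks, i.e., the role played by N_ij above. Names are illustrative.

    import hashlib

    def children(key: bytes):
        # length-doubling generator: one k-bit parent key yields two k-bit child keys
        return hashlib.sha256(key + b"L").digest(), hashlib.sha256(key + b"R").digest()

    def leaf_keys(root: bytes, depth: int):
        # the leaf keys (minus padding) stand in for the blocks of r_i
        level = [root]
        for _ in range(depth):
            level = [c for key in level for c in children(key)]
        return level

    def cover(node: bytes, lo: int, hi: int, entitled):
        # minimal set of node keys whose subtrees contain only entitled blocks;
        # assumes the number of blocks hi - lo is a power of two
        if all(entitled[lo:hi]):
            return [node]
        if not any(entitled[lo:hi]) or hi - lo == 1:
            return []
        mid = (lo + hi) // 2
        left, right = children(node)
        return cover(left, lo, mid, entitled) + cover(right, mid, hi, entitled)

A fully entitled subtree then costs a single k-bit key, which is what makes the encoding compact for attributes with few collisions.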
In the worst case, this algorithm does not decrease the
blowup in the size of the obfuscated database. This occurs
when for every query attribute j of every record i, there are
Θ(N) records i′ for which the value of the query attribute is the same, i.e., x_ij = x_{i′j}. If we assume a less pathological database, however, we can get a better upper bound. If there is a threshold t such that for any (i, j) there are at most t records i′ for which x_ij = x_{i′j}, then the size of the key material (which causes the blowup in the size of the obfuscated database) is O(mNt(k log(N/k))) bits (recall that m is the number of attributes). This bound is tight only for small values of t, and the new algorithm does no worse than the original one even when t = Θ(N). When we consider
that each of the mN entries of the original database is several
bits long, the size of the obfuscated database could be
acceptably small for practical use.
It must be noted that this obfuscation reveals the size of
the share, and thus, for a given attribute of a given record, it
leaks information about the number of other records whose
attribute value is the same (but not which records they are).
This opens two research questions:
- Is there a provably secure database obfuscation algorithm
that produces obfuscations that are smaller than O(N^2)?
- Can the heuristic described in this section be improved to
obtain acceptable lower bounds in the worst case?
ARBITRARY PREDICATES OVER EQUALITIES ON ATTRIBUTES
We now consider queries formed by taking an arbitrary
predicate P over m boolean variables b_1, b_2, . . . , b_m, and substituting (X_j = x_j) for b_j, where X_j is a query attribute, and x_j ∈ X ∪ {⊥} is a candidate value for this attribute, drawn from the domain X of query attribute values. The special value ⊥ denotes that the value of the X_j attribute is ignored when evaluating the predicate. The class of queries considered in section 4 is a special case of this definition, where P = ∧_{1≤j≤m} b_j. The group-exponential property is similar to definition 2 except for the domain of P.
Let C be a boolean circuit computing P. We assume that C
is a monotone circuit, i.e., a poly-size directed acyclic graph
where each node is an AND, OR or FANOUT gate. AND
and OR gates have two inputs and one output each, while
FANOUT gates have one input and two outputs. Circuit
C has m inputs, one per each query attribute. Below, we
show how to generalize our obfuscation technique to non-monotone
circuits.
Obfuscation algorithm.
The algorithm is similar to the
one in section 4, and consists of generating a random secret
to encrypt each data attribute, splitting this secret into
(unequal) shares, and encrypting these shares under the keys
derived from the values of query attributes.
As before, let H_ν : {0, 1}* → {0, 1}^k be a set of random hash functions and prg_ν : {0, 1}^k → {0, 1}* a set of pseudo-random generators.
For each record ρ_i in the database, do the following:
- Generate a block of uniformly random bits {r_{ilEt}}, where 1 ≤ l ≤ N, E ranges over all edges of the circuit C, and 1 ≤ t ≤ k, where k is the length of the hash functions' output. Denote
  r_{iEt} = r_{i1Et} || r_{i2Et} || . . . || r_{iNEt}
  r̄_{ilE} = r_{ilE1} || r_{ilE2} || . . . || r_{ilEk}
- Then, for each query attribute X_j:
  - Output v_ij = H_{1,i,j}(x_ij).
  - Let E_j be the input edge in the circuit C whose input is the X_j = x_j test. Define the bits of the corresponding share s_{iljt} = r_{ilE_j t} if x_ij ≠ x_lj, and 0 otherwise. Encrypt the resulting share using a key derived from x_ij, i.e., output
    w_ij = prg_{1,i,j}(H_{2,i,j}(x_ij)) ⊕ (s̄_{i1j} || s̄_{i2j} || . . . || s̄_{iNj}).
- Let E_out be the output edge in the circuit C. Output u_i = H_{3,i}(r_{iE_out 0}).
- Output z_i = prg_{2,i}(H_{4,i}(r_{iE_out 0})) ⊕ y_i.
- The previous procedure obfuscated only the output edge of C. Repeat the following step recursively for each gate G ∈ C whose output edge (or both of whose output edges, for a FANOUT gate) have been obfuscated. Stop when all edges have been obfuscated:
  - If G is an AND gate, let E_0 and E_1 be the input edges and E the output edge. For each l, set r̄_{ilE_0} = r̄_{ilE_1} = r̄_{ilE}.
  - If G is an OR gate, then, for each l, generate random r̄_{ilE_0} ∈_R {0, 1}^k and set r̄_{ilE_1} = r̄_{ilE_0} ⊕ r̄_{ilE}.
  - If G is a FANOUT gate, let E_0 and E_1 be the output edges and E the input edge. For each l, generate random r̄_{ilE_0}, r̄_{ilE_1} ∈_R {0, 1}^k and output
    c_{ilE_0} = H_{5,i,l,E_0}(r̄_{ilE}) ⊕ r̄_{ilE_0} and c_{ilE_1} = H_{5,i,l,E_1}(r̄_{ilE}) ⊕ r̄_{ilE_1}.
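The per-gate propagation of the k-bit edge labels can be sketched as follows. This is one way to read the AND/OR/FANOUT rules above: in particular, the FANOUT case here assigns a fresh label to the gate's input edge and publishes one hash-encrypted link per output edge, so that knowing the input label yields each output label, matching the retrieval step below. Gate and edge representations are illustrative assumptions.

    import hashlib, os

    K = 16  # edge-label length in bytes (stands in for the k-bit strings above)

    def H5(edge_id: str, label: bytes) -> bytes:
        return hashlib.sha256(b"5|" + edge_id.encode() + b"|" + label).digest()[:K]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def share_gate(gate, labels, links):
        # labels: dict edge -> K-byte label already assigned on the output side of this gate
        # links:  dict edge -> published ciphertext for FANOUT output edges
        kind, e_in, e_out = gate["kind"], gate["inputs"], gate["outputs"]
        if kind == "AND":       # either input label alone reveals the output label
            labels[e_in[0]] = labels[e_in[1]] = labels[e_out[0]]
        elif kind == "OR":      # both input labels are needed: their XOR is the output label
            labels[e_in[0]] = os.urandom(K)
            labels[e_in[1]] = xor(labels[e_in[0]], labels[e_out[0]])
        elif kind == "FANOUT":  # fresh input label; each output label is recoverable from it via a link
            labels[e_in[0]] = os.urandom(K)
            for e in e_out:
                links[e] = xor(H5(e, labels[e_in[0]]), labels[e])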
Retrieval algorithm.
Let Q be the query predicate in which specific values of x_j or ⊥ have been plugged into all X_j = x_j expressions in the leaves of the circuit C.
The retrieval algorithm consists of two functions: C_ob(OB_gp(D), x, i, j), which enables the retriever to check whether the jth query attribute of the ith record is equal to x, and R_ob(OB_gp(D), Q, i), which attempts to retrieve the value of the obfuscated data attribute in the ith record.
Define C_ob(OB_gp(D), x, i, j) = 1 if H_{1,i,j}(x) = v_ij and 0 otherwise.
- Evaluate Q(ρ_i) using C_ob. If ¬Q(ρ_i), then R_ob(OB_gp(D), Q, i) = ⊥.
- For each l and each circuit edge E, set r̄_{ilE} = ?? . . . ? (i.e., none of the bits of the secret are initially known).
- For each query attribute j, let E_j be the input edge of the circuit associated with the equality test for this attribute. If Q contains this test, i.e., if Q contains X_j = x_j for some candidate value x_j (rather than X_j = ⊥), then set (s̄_{i1j} || . . . || s̄_{iNj}) = w_ij ⊕ prg_{1,i,j}(H_{2,i,j}(x_ij)), i.e., decrypt the secret bits with the key derived from the value of the jth attribute.
  For each l, if C_ob(x_ij, l, j) = 0, then set r̄_{ilE_j} = s̄_{ilj}, i.e., use only those of the decrypted bits that are true bits of the secret r̄_{ilE_j}.
- So far, only the input gates of the circuit have been visited. Find a gate all of whose input edges have been visited, and repeat the following step for every gate until the output edge E_out has been visited:
  - If E is the output of an AND gate with inputs E_0 and E_1, then, for each l, if r̄_{ilE_0} ≠ ?, set r̄_{ilE} = r̄_{ilE_0}; if r̄_{ilE_1} ≠ ?, set r̄_{ilE} = r̄_{ilE_1}.
  - If E is the output of an OR gate with inputs E_0 and E_1, then, for each l, if r̄_{ilE_0} ≠ ? and r̄_{ilE_1} ≠ ?, set r̄_{ilE} = r̄_{ilE_0} ⊕ r̄_{ilE_1}.
  - If E is the output of a FANOUT gate with input E_0, then, for each l, if r̄_{ilE_0} ≠ ?, set r̄_{ilE} = c_{ilE} ⊕ H_{5,i,l,E}(r̄_{ilE_0}).
- For each l, if r_{ilE_out 0} = ?, this means that the corresponding secret bit must be guessed. Choose random r_{ilE_out 0} ∈_R {0, 1}.
- If H_{3,i}(r_{iE_out 0}) = u_i, this means that the retriever successfully reconstructed the secret. In this case, define R_ob(OB_gp(D), Q, i) = prg_{2,i}(H_{4,i}(r_{iE_out 0})) ⊕ z_i. Otherwise, define R_ob(OB_gp(D), Q, i) = ⊥.
Theorem 3. The obfuscation algorithm for arbitrary
predicates over equalities on attributes satisfies the virtual
black-box property.
5.1 Obfuscating non-monotone circuits
Given a non-monotone circuit C, let C̄ be the monotone circuit whose leaves are literals and negated literals formed by "pushing down" all the NOT gates. Observe that C̄ has at most twice as many gates as C. Also, C̄ can be considered a monotone circuit over the 2m predicates X_1 = x_1, X_2 = x_2, . . . , X_m = x_m, X_1 ≠ x_1, X_2 ≠ x_2, . . . , X_m ≠ x_m. Observe that a predicate of the form X_j ≠ x_j is meaningful only when x_j = x_ij for some record i. This is because if x_j ≠ x_ij for any record i, then X_j ≠ x_j matches all the records. Hence there exists a circuit C′ (obtained by setting the leaf in C̄ corresponding to the predicate X_j ≠ x_j to true) that evaluates to the same value as C̄ for every record in the database.
Given that x_j = x_ij for some record i, the predicate X_j ≠ x_j is equivalent to the predicate X_j ≠ x_ij for some value of i. C̄ can thus be viewed as a monotone circuit over the m + mN attribute equality predicates X_1 = x_1, X_2 = x_2, . . . , X_m = x_m, and X_j ≠ x_ij for each i and j. It follows that a database D with N records and m columns can be transformed into a database D′ with N records and m + mN columns such that obfuscating D over the circuit C is equivalent to obfuscating D′ over the monotone circuit C̄.
ALTERNATIVE PRIVACY POLICIES
In general, a privacy policy can be any computable, possibly
randomized, joint function of the database and the
query. Clearly, it may be useful to consider generalizations
of our privacy policies in several directions.
First, we discuss alternatives to definition 2 that may
be used to model the requirement that accessing individual
records should be easy, but mass harvesting of records
should be hard. To motivate this discussion, let us consider
a small database with, say, 10 or 20 records. For such
a database, the group-exponential property is meaningless.
Even if all records match the adversary's query, he can easily
try all 2^10 or 2^20 possibilities for the random bits r_ik
because database accesses are noninteractive.
This does not in any way violate our definition of privacy.
Exactly the same attack is possible against the ideal functionality
, therefore, the simulation argument goes through,
showing that the obfuscated database leaks no more information
than the ideal functionality. It is thus natural to seek
an alternative privacy definition that will make the above attack
infeasible when N is small (especially when N < k, the
security parameter).
Our construction can be easily modified to support a wide
variety of (monotonically decreasing) functions capturing
the dependence between the probability of the ideal functionality
returning the protected attributes and the number
of records matching the query. For example, the following
threshold ideal functionality can be implemented using
a threshold (n-t)-out-of-n secret sharing scheme [24].
- C_D(x, i, j) is 1 if x = x_ij and 0 otherwise, where 1 ≤ i ≤ N, 1 ≤ j ≤ m.
- R_D(P) = ∪_{1≤i≤N} {⟨i, ψ_i⟩}, where
  ψ_i = y_i if P(ρ_i) and |D[P]| ≤ t
      = ⊥ if P(ρ_i) and |D[P]| > t
      = ⊥ if ¬P(ρ_i)
The adversary can evaluate the query if there are at most t
matching records, but learns nothing otherwise. The details
of the construction are deferred to the full version.
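Since the threshold construction itself is deferred, the following is only a generic illustration of the (n−t)-out-of-n Shamir secret sharing [24] that it builds on, over a large prime field; it is the cited building block, not the authors' construction, and all names are illustrative.

    import random

    P = 2**127 - 1  # a Mersenne prime large enough for the secrets used here

    def make_shares(secret: int, needed: int, total: int):
        # Shamir sharing: any `needed` of the `total` shares reconstruct the secret
        coeffs = [secret] + [random.randrange(P) for _ in range(needed - 1)]
        def f(x):
            return sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, total + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

Splitting a key with make_shares(key, n - t, n) makes it reconstructible only when at most t of the n shares are withheld, which mirrors the threshold behaviour of the functionality above.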
We may also consider which query language should be permitted
by the privacy policy. We demonstrated how to obfuscate
databases in accordance with any privacy policy that
permits evaluation of some predicate consisting of equality
tests over database attributes. Such queries can be considered
a generalization of "partial match" searches [23], which
is a common query model in the database literature. Also,
our algorithms can be easily modified to support policies
that forbid some attributes from having as a legal value,
i.e., policies that require the retriever to supply the correct
value for one or more designated attributes before he can
extract anything from the obfuscated database.
It is worth asking if we can allow predicates over primitives
looser than exact attribute equality (e.g., proximity queries
of [15] are an interesting class). We present strong evidence
that this is impossible with our privacy definitions. In fact,
even using ideal functionalities (IF) that are more restrictive
than the one we have used does not seem to help. Recall that
the IF considered in section 4 consists of two functions: C_D (it tells the retriever whether his guess of a particular query attribute value is correct) and R_D (it evaluates the query
with the inverse-exponential probability). We will call this
IF the permissive IF.
We define two more IFs. The strict IF is like the permissive
IF except that it doesn't have the function C. The
semi-permissive IF
falls in between the two.
It, too,
doesn't have the function C, but its retrieval function R
leaks slightly more information. Instead of the same symbol ⊥, function R of the semi-permissive IF gives different responses depending on whether it failed to evaluate the query because it matches no records (no-matches) or because it matches too many records, and the probability came out to the retriever's disadvantage (too-many-matches).
Define R_D(P) as ∪_{1≤i≤N} R′(P, i), where R′ is as follows:
- If ¬P(ρ_i), then R′(P, i) = no-matches.
- If P(ρ_i), then R′(P, i) = y_i with probability 2^{-|D[P]|} and too-many-matches with probability 1 - 2^{-|D[P]|}.
Observe that, for any privacy policy allowing single-attribute equality tests, i.e., if all queries of the form X_j = x_j are permitted, then the semi-permissive IF can simulate the permissive IF. Of course, the permissive IF can always simulate the semi-permissive IF.
We say that a privacy policy leaks query attributes if all x_ij can be computed (with overwhelming probability) simply by accessing the corresponding ideal functionality I_D, i.e., there exists a probabilistic poly-time oracle algorithm A s.t., for any database D, P(A^{I_D,O}(i, j) = x_ij) ≥ 1 - ε(k).
Note that the order of quantifiers has been changed: the
algorithm A is now independent of the database. This captures
the idea that A has no knowledge of the specific query
attributes, yet successfully retrieves them with access only
to the ideal functionality. Such a policy, even if securely
realized, provides no meaningful privacy.
We have the following results (proofs omitted):
If X = {1, 2, . . . , M} and queries consisting of conjunctions over inequalities are allowed, then the semi-permissive IF leaks query attributes. Each of the x_ij can be separately computed by binary search using queries of the form X_j ≥ x_low ∧ X_j ≤ x_high.
If arbitrary PPT-computable queries are allowed, then
even the strict IF leaks query attributes.
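To make the binary-search leak concrete, here is a small sketch that assumes only an oracle matches(lo, hi) reporting whether record i satisfies X_j ≥ lo ∧ X_j ≤ hi, which is (at least) what the semi-permissive IF's distinct no-matches and too-many-matches responses reveal:

    def leak_attribute(matches, lo: int, hi: int) -> int:
        # matches(lo, hi) -> True iff the hidden value x_ij lies in [lo, hi],
        # read off from record i's response to the query X_j >= lo AND X_j <= hi
        while lo < hi:
            mid = (lo + hi) // 2
            if matches(lo, mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

Each call halves the candidate interval, so about log2(M) permitted queries recover x_ij exactly.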
Note that a policy does not have to leak all query attributes
to be intuitively useless or vacuous. For instance,
a policy which allows the retriever to evaluate conjunctions
of inequalities on the first m - 1 query attributes, and allows
no queries involving the last attribute, is vacuous for
the semi-permissive IF. Therefore, we give a stronger criterion
for vacuousness, which formalizes the notion that "all
information contained in the IF can be extracted without
knowing anything about the query attributes". Note that
the definition below applies to arbitrary privacy policies, for
it makes no reference to query or data attributes.
Definition 3. (Vacuous privacy policy) We say that
an ideal functionality I_D is vacuous if there exists an efficient extractor Ext such that for any PPT algorithm A there exists a simulator S so that for any database D:
|P(A^{I_D}(1^k) = 1) - P(S(Ext^{I_D}(1^k)) = 1)| = ε(k)
In other words, we first extract all useful information from
I_D without any specific knowledge of the database, throw away I_D, and use the extracted information to simulate I_D against an arbitrary adversary. As a special case, if Ext can recover the entire database D from I_D
, then the functionality
can be simulated, because the privacy policy is required
to be computable and the simulator is not required to be
computationally bounded (if we consider only privacy policies
which are computable in probabilistic polynomial time,
then we can define vacuousness with a PPT simulator as
well). At the other extreme, the ideal functionality that
permits no queries is also simulatable: Ext simply outputs
nothing. The reader may verify that the IF in the all-but-one
-query-attribute example above is also vacuous.
Theorem 4. The strict ideal functionality that permits
arbitrary queries is vacuous.
Finally, we consider what happens if we use the strict IF
but don't increase the power of the query language. We
conjecture the existence of very simple languages, including
a language that contains only conjunctions of equality tests
on attributes, which are unrealizable even for single-record
databases in the sense that there is no efficient obfuscation
algorithm that would make the database indistinguishable
from the corresponding IF. This can be seen as justification
for the choice of the permissive, rather than strict IF for our
constructions.
Conjecture 1. The strict IF for the following query language cannot be realized even for single-record databases:
∨_{i=1}^{2k} (X_{2i-1} = x_{2i-1} ∧ X_{2i} = x_{2i}), where ∀i, x_i ∈ {0, 1}.
Note that the only constraint on the database is that its
size should be polynomial in the security parameter k, and
therefore we are allowed to have 2k query attributes.
We expect that a proof of this conjecture will also yield a
proof of the following conjecture:
Conjecture 2. The strict IF for a query language consisting
of conjunction of equality tests on k query attributes
is unrealizable even for single-record databases.
These conjectures are interesting from another perspective
. They can be interpreted as statements about the impossibility
of circuit obfuscation in the random oracle model.
They also motivate the question: given a query language, is it possible to achieve the group-exponential property with
the strict IF provided there exists an obfuscation algorithm
for this query language on a single record? In other words,
given a class of predicates over single records and an efficient
obfuscator for the corresponding circuit class, does
there exist an obfuscator for the entire database that realizes
the group-exponential ideal functionality for that query
language? We discuss this question in the full version of the
paper.
CONCLUSIONS
We introduced a new concept of database privacy, which
is based on permitted queries rather than secrecy of individual
records, and realized it using provably secure obfuscation
techniques. This is but a first step in investigating
the connection between obfuscation and database privacy.
While our constructions are secure in the "virtual black-box"
model for obfuscation, the blowup in the size of the
obfuscated database may render our techniques impractical
for large databases. Our query language permits any predicate
over equalities on database attributes, but other query
languages may also be realizable. We define group privacy in
terms of a particular ideal functionality, but there may be
other functionalities that better capture intuitive security
against "mass harvesting" queries. In general, investigating
which ideal functionalities for database privacy can be
securely realized is an important topic of future research.
Finally, all proofs in this paper are carried out in the random
oracle model. Whether privacy-via-obfuscation can be
achieved in the plain model is another research challenge.
REFERENCES
[1] D. Aucsmith. Tamper resistant software: an implementation. In Proc. 1st International Workshop on Information Hiding, volume 1174 of LNCS, pages 317–333. Springer, 1996.
[2] B. Barak, O. Goldreich, R. Impagliazzo, S. Rudich, A. Sahai, S. Vadhan, and K. Yang. On the (im)possibility of obfuscating programs. In Proc. Advances in Cryptology - CRYPTO 2001, volume 2139 of LNCS, pages 1–18. Springer, 2001.
[3] D. Beaver, J. Feigenbaum, J. Kilian, and P. Rogaway. Locally random reductions: improvements and applications. J. Cryptology, 10:17–36, 1997.
[4] D. Boneh, G. Di Crescenzo, R. Ostrovsky, and G. Persiano. Public key encryption with keyword search. In Proc. Advances in Cryptology - EUROCRYPT 2004, volume 3027 of LNCS, pages 506–522. Springer, 2004.
[5] R. Canetti. Towards realizing random oracles: hash functions that hide all partial information. In Proc. Advances in Cryptology - CRYPTO 1997, volume 1294 of LNCS, pages 455–469. Springer, 1997.
[6] R. Canetti, D. Micciancio, and O. Reingold. Perfectly one-way probabilistic hash functions. In Proc. 30th Annual ACM Symposium on Theory of Computing (STOC), pages 131–140. ACM, 1998.
[7] S. Chawla, C. Dwork, F. McSherry, A. Smith, and H. Wee. Towards privacy in public databases. In Proc. 2nd Theory of Cryptography Conference (TCC), volume 3378 of LNCS, pages 363–385. Springer, 2005.
[8] B. Chor, E. Kushilevitz, O. Goldreich, and M. Sudan. Private information retrieval. J. ACM, 45(6):965–981, 1998.
[9] S. Chow, P. Eisen, H. Johnson, and P. van Oorschot. White-box cryptography and an AES implementation. In 9th Annual International Workshop on Selected Areas in Cryptography (SAC), volume 2595 of LNCS, pages 250–270. Springer, 2003.
[10] S. Chow, P. Eisen, H. Johnson, and P. van Oorschot. A white-box DES implementation for DRM applications. In ACM Digital Rights Management Workshop, volume 2696 of LNCS, pages 1–15. Springer, 2003.
[11] C. Collberg and C. Thomborson. Watermarking, tamper-proofing, and obfuscation - tools for software protection. IEEE Transactions on Software Engineering, 28(8):735–746, 2002.
[12] C. Collberg, C. Thomborson, and D. Low. A taxonomy of obfuscating transformations. Technical Report 148, Department of Computer Sciences, The University of Auckland, July 1997.
[13] C. Collberg, C. Thomborson, and D. Low. Manufacturing cheap, resilient, and stealthy opaque constructs. In Proc. 25th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), pages 184–196. ACM, 1998.
[14] D. Dean and A. Stubblefield. Using client puzzles to protect TLS. In Proc. 10th USENIX Security Symposium, pages 1–8. USENIX, 2001.
[15] Y. Dodis and A. Smith. Correcting errors without leaking partial information. In Proc. 37th Annual ACM Symposium on Theory of Computing (STOC), pages 654–663. ACM, 2005.
[16] C. Dwork and M. Naor. Pricing via processing or combatting junk mail. In Proc. Advances in Cryptology - CRYPTO 1992, volume 740 of LNCS, pages 139–147. Springer, 1993.
[17] M. Franklin and D. Malkhi. Auditable metering with lightweight security. J. Computer Security, 6(4):237–255, 1998.
[18] Y. Gertner, Y. Ishai, E. Kushilevitz, and T. Malkin. Protecting data privacy in private information retrieval schemes. In Proc. 30th Annual ACM Symposium on Theory of Computing (STOC), pages 151–160. ACM, 1998.
[19] O. Goldreich and R. Ostrovsky. Software protection and simulation on oblivious RAMs. J. ACM, 43(3):431–473, 1996.
[20] Y. Ishai, A. Sahai, and D. Wagner. Private circuits: securing hardware against probing attacks. In Proc. Advances in Cryptology - CRYPTO 2003, volume 2729 of LNCS, pages 463–481. Springer, 2003.
[21] A. Juels and J. Brainard. Client puzzles: a cryptographic defense against connection depletion. In Proc. Network and Distributed System Security Symposium (NDSS), pages 151–165. The Internet Society, 1999.
[22] B. Lynn, M. Prabhakaran, and A. Sahai. Positive results and techniques for obfuscation. In Proc. Advances in Cryptology - EUROCRYPT 2004, volume 3027 of LNCS, pages 20–39. Springer, 2004.
[23] R. Rivest. Partial-match retrieval algorithms. SIAM Journal of Computing, 5(1):19–50, 1976.
[24] A. Shamir. How to share a secret. Communications of the ACM, 22(11):612–613, 1979.
[25] D. Song, D. Wagner, and A. Perrig. Practical techniques for searches on encrypted data. In Proc. IEEE Symposium on Security and Privacy, pages 44–55. IEEE Computer Society, 2000.
[26] X. Wang and M. Reiter. Defending against denial-of-service attacks with puzzle auctions. In Proc. IEEE Symposium on Security and Privacy, pages 78–92. IEEE Computer Society, 2003.
[27] H. Wee. On obfuscating point functions. In Proc. 37th Annual ACM Symposium on Theory of Computing (STOC), pages 523–532. ACM, 2005.
| Database privacy;Obfuscation
143 | On the Complexity of Computing Peer Agreements for Consistent Query Answering in Peer-to-Peer Data Integration Systems | Peer-to-Peer (P2P ) data integration systems have recently attracted significant attention for their ability to manage and share data dispersed over different peer sources. While integrating data for answering user queries, it often happens that inconsistencies arise, because some integrity constraints specified on peers' global schemas may be violated. In these cases, we may give semantics to the inconsistent system by suitably "repairing" the retrieved data, as typically done in the context of traditional data integration systems. However , some specific features of P2P systems, such as peer autonomy and peer preferences (e.g., different source trusting ), should be properly addressed to make the whole approach effective. In this paper, we face these issues that were only marginally considered in the literature. We first present a formal framework for reasoning about autonomous peers that exploit individual preference criteria in repairing the data. The idea is that queries should be answered over the best possible database repairs with respect to the preferences of all peers, i.e., the states on which they are able to find an agreement. Then, we investigate the computational complexity of dealing with peer agreements and of answering queries in P2P data integration systems. It turns out that considering peer preferences makes these problems only mildly harder than in traditional data integration systems. | INTRODUCTION
Peer-to-Peer (P2P) data integration systems are networks of autonomous peers that have recently emerged as an effective architecture for decentralized data sharing, integration, and querying. Indeed, P2P systems offer transparent access to the data stored at (the sources of) each peer p, by means of the global schema with which p is equipped for modeling its domain of interest; moreover, pairs of peers with related domains of interest are connected by mapping rules, so that a query posed to one peer can be answered by the system, which is in charge of accessing each peer containing relevant data separately, and of combining local results into a global answer by suitably exploiting the mapping rules.
P2P systems can be considered the natural evolution of
traditional data integration systems, which have received
considerable attention in the last few years, and which have
already become a key technology for managing enormous
amounts of information dispersed over many data sources.
In fact, P2P systems have attracted significant attention
recently, both in the development of efficient distributed algorithms
for the retrieval of relevant information and for
answering user queries (see, e.g., [9, 21, 12, 13]), and in the
investigation of its theoretical underpinnings (see, e.g., [16,
3, 20, 11, 9, 5]).
In this paper, we continue along this latter line of research,
by investigating some important theoretical issues. In particular
, we consider an expressive framework where integrity
constraints are specified on peer schemas in order to enhance
their expressiveness, so that each peer can be in fact considered
a completely specified data integration system. In
this scenario, it may happen that data at different peers are
mutually inconsistent, i.e., some integrity constraints are violated
after the integration is carried out; then, a "repair"
for the P2P system has to be computed [5, 17]. Roughly
speaking, repairs may be viewed as insertions or deletions
of tuples at the peers that are able to lead the system to a
consistent state.
Our aim is to deal with data integration in P2P systems,
by extending some of the ideas described in previous studies
on merging mutually inconsistent databases into a single
consistent theory [2, 14] and on repairing individual data
integration systems [8, 6, 4, 10].
Indeed, in order to be effective in this framework, the repair
approach should consider the peculiarities of P2P systems
and, specifically, the following two issues:
In practical applications, peers often have a-priori knowledge about the reliability of the sources that, in turn, determines their criteria for computing repairs.
That is, peers will rarely delete tuples coming from
highly reliable sources, and will try to solve conflicts
by updating the less reliable sources only.
Peers are autonomous and not benevolent: they rarely
disregard their individual preferences in order to find
an agreement with other peers on the way the repair
should be carried out. Therefore, the presence of possibly
contrasting interests of selfish peers should be
accounted for, when answering user queries.
Despite the wide interest in this field, none of the approaches
in the literature considered the issue of modeling the autonomy
of the peers in providing a semantics for the system,
and therefore they implicitly assume that all the peers act
cooperatively in the network. Moreover, the possibility of
modeling peer preferences has been rarely considered in previous
studies, even though it has been widely recognized to
be a central issue for the design of quality-aware integration
systems (cf. [17]). Indeed, the first and almost isolated
attempt is in [5], where the authors considered trust relationships
among peers in a simplified setting in which the
system does not transitively propagate information through
peers. Actually, an extension to the case of transitive propagations is also argued, but peer autonomy is not considered, and query answering is undecidable in the presence of loops.
In this paper, we face the above issues by introducing
a formal framework for reasoning about autonomous peers
that exploit individual preference criteria in repairing data.
In summary, our contributions are the following:
We preliminarily introduce a framework for P2P data integration systems, where each peer is equipped with
integrity constraints on its global schema. The model
is simple yet very expressive, since each peer is assumed
to be in turn a data integration system. The
semantics of a P2P system is defined in terms of suitable
databases for the peers, called models. We show
that checking whether a system has a model can be
done efficiently.
We propose an approach to the repair of inconsistent
P2P systems that focuses on data stored at the
sources, rather than on the global schema (following
the approach described by [15] for the standard data
integration setting).
This is particularly suited for
dealing with peers, as their preferences are typically
expressed over the sources. Indeed, if repairs were considered
on the global schema, suitable reformulations
and translation of the preferences would be required.
We investigate the effect of considering individual preferences
on the semantics of P2P database integration
systems. The idea is that queries should be answered
over the best possible database repairs with respect
to the preferences of all peers, i.e., over the states on
which they are able to find an agreement. Unfortunately, but not surprisingly, it turns out that considering
autonomous peers gives rise to scenarios where
they are not able to find any agreement on the way
the integration should be done.
The above result motivates the subsequent study of the complexity of dealing with peer agreements and of answering queries in such P2P data integration systems. We show that checking whether a given database is an agreed repair is a difficult task, since it is complete for the class co-NP. Moreover, the complexity of computing an agreement turns out to be complete for the functional class FP^NP. Finally, we study the complexity of computing consistent answers and show that this problem is Δ^p_2-complete. It follows that our approach for handling preferences in P2P systems is just mildly harder than the basic data integration framework, where in fact query answering lies at the first level of the polynomial hierarchy [8], as well.
The rest of the paper is organized as follows.
In Section
2, we briefly present some preliminaries on relational
databases. In Section 3, we introduce a simple formalization
of P2P data integration systems and in the subsequent
section we enrich it to take care of peers' preferences. The
computational complexity of the concept of agreement in
query answering is studied in Section 5. Finally, in Section 6
we draw our conclusions.
PRELIMINARIES ON RELATIONAL DATABASES
We recall the basic notions of the relational model with
integrity constraints. For further background on relational
database theory, we refer the reader to [1].
We assume a (possibly infinite) fixed database domain Γ whose elements can be referenced by constants c_1, ..., c_n under the unique name assumption, i.e., different constants denote different objects. These elements are assumed to be shared by all the peers and are, in fact, the constants that can appear in the P2P system.
A relational schema (or simply schema) RS is a pair ⟨Ψ, Σ⟩, where Ψ is a set of relation symbols, each with an associated arity that indicates the number of its attributes, and Σ is a set of integrity constraints, i.e., (first-order) assertions that have to be satisfied by each database instance.
We deal with quantified constraints, i.e., first-order formulas of the form:

∀x⃗ ( A_1 ∧ ... ∧ A_l → ∃y⃗ ( B_1 ∧ ... ∧ B_m ∧ φ_1 ∧ ... ∧ φ_n ) )   (1)

where l + m > 0, n ≥ 0, A_1, ..., A_l and B_1, ..., B_m are positive literals, φ_1, ..., φ_n are built-in literals, and x⃗ and y⃗ are lists of distinct variables.
Actually, to keep things simple, we shall assume throughout the paper that y⃗ is empty, thereby dealing with universally
quantified constraints. We recall here that this kind of
constraint covers most of the classical constraints issued on
a relational schema, such as keys, functional dependencies,
and exclusion dependencies. A brief discussion on how to
generalize the results in the paper to other classes of constraints
is reported in Section 6.
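For instance, a key constraint stating that the first attribute of a binary relation r functionally determines the second can be written in the form (1) as r(X, Y) ∧ r(X, Z) → Y = Z, with no existentially quantified variables and a single built-in literal in the consequent; this is the shape of the constraints used in the examples and reductions later in the paper.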
A database instance (or simply database) DB for a schema RS = ⟨Ψ, Σ⟩ is a set of facts of the form r(t), where r is a relation of arity n in Ψ and t is an n-tuple of constants from Γ. We denote by r^DB the set {t | r(t) ∈ DB}.
A database DB for a schema RS is said to be consistent with RS if it satisfies (in the first-order logic sense) all constraints expressed on RS.
Figure 1: The P2P system P_r in Example 1.
A relational query (or simply query) over RS is a formula that is intended to extract tuples of elements from the underlying domain of constants Γ. We assume that queries over RS = ⟨Ψ, Σ⟩ are Unions of Conjunctive Queries (UCQs), i.e., formulas of the form {x⃗ | ∃y⃗_1. conj_1(x⃗, y⃗_1) ∨ ... ∨ ∃y⃗_m. conj_m(x⃗, y⃗_m)}, where, for each i ∈ {1, ..., m}, conj_i(x⃗, y⃗_i) is a conjunction of atoms whose predicate symbols are in Ψ, and involve x⃗ = X_1, ..., X_n and y⃗_i = Y_{i,1}, ..., Y_{i,n_i}, where n is the arity of the query, and each X_k and each Y_{i,l} is either a variable or a constant in Γ.
Given a database DB for RS, the answer to a UCQ Q over DB, denoted Q^DB, is the set of n-tuples of constants c_1, ..., c_n such that, when substituting each X_i with c_i, the formula ∃y⃗_1. conj_1(x⃗, y⃗_1) ∨ ... ∨ ∃y⃗_m. conj_m(x⃗, y⃗_m) evaluates to true on DB.
DATA INTEGRATION IN P2P SYSTEMS
In this section, we introduce a simple framework for dealing
with P2P systems. The model is not meant to be a novel
comprehensive formalization, since our aim here is to face
the problem of finding agreement among peers rather than
to investigate new syntactic modeling features.
Therefore, our approach takes basically the same perspective
as [9, 11, 5, 17].
3.1 Basic Framework
A P2P system P is a tuple ⟨P, I, N, map⟩, where P is a non-empty set of distinct peers and I, N and map are functions whose meaning will be explained below. First, each peer p ∈ P is equipped with its own data integration system I(p), which is formalized as a triple ⟨G_p, S_p, M_p⟩. Basically, S_p is meant to denote the set of sources to which p is allowed to access, and is in fact modeled as a relational schema with an empty set of integrity constraints, i.e., there are no integrity constraints on the sources. The structure of the global schema is, instead, represented by means of the schema G_p = ⟨Ψ_p, Σ_p⟩, whereas the relationships between the sources and the global schema are specified by M_p, which is a set of local mapping assertions between G_p and S_p. We assume that each assertion is of the form Q_{S_p} ⇝ Q_{G_p}, where Q_{S_p} and Q_{G_p} are two conjunctive queries of the same arity over the source schema S_p and the peer schema G_p, respectively.
Example 1 Let us introduce three peers, namely p_1, p_2, and p_3, that constitute the P2P scenario that will be used as a running example throughout this paper to illustrate technical definitions.
The global schema G_{p_1} of peer p_1 consists of the relation predicate secretary(Employee, Manager) (without constraints), the source schema S_{p_1} consists of the relation symbol s_1, and the set M_{p_1} of the local mapping assertions is {X, Y | s_1(X, Y)} ⇝ {X, Y | secretary(X, Y)}.
As for peer p_2, the schema G_{p_2} consists of the relation financial(Employee, Manager) (without constraints), the source schema consists of the relation symbol s_2, and M_{p_2} = {X, Y | s_2(X, Y)} ⇝ {X, Y | financial(X, Y)}.
The schema G_{p_3} of peer p_3 consists of the relations employee(Name, Dept) and boss(Employee, Manager), whose set of constraints contains the assertions (quantifiers are omitted) employee(X, Y) ∧ boss(X_1, Y_1) → X ≠ Y_1 and boss(X, Y) ∧ boss(X_1, Y_1) → Y_1 ≠ X, stating that managers are never employees; the source schema S_{p_3} comprises the relation symbol s_3; and the set of the local mapping assertions is {X, Y | s_3(X, Y)} ⇝ {X, Y | employee(X, Y)}. □
Each peer p ∈ P in a P2P system P = ⟨P, I, N, map⟩ is also equipped with the neighborhood function N, providing a set of peers N(p) ⊆ P − {p} containing the peers (called neighbors) who potentially have some information of interest to p. Intuitively, the neighborhood relation determines the structure of a P2P system P. Such a structure is better described by the dependency graph G(P) of P, i.e., by a directed graph having P as its set of vertices and {(p, q) | q ∈ P ∧ p ∈ N(q)} as its set of edges.
In particular, a peer q is in N(p) iff p is interested in the data exported by q by means of its global schema, i.e., some of the global relations of p can be populated by means of the data coming from q besides the data coming from the sources of p itself. To this aim, map(p) defines the set of peer mapping assertions of p.
Each assertion is an expression of the form Q_q ⇝ Q_p, where the peer q ∈ N(p) is a neighbor of p, and Q_q and Q_p are two conjunctive queries of the same arity over schemas G_q and G_p, respectively.
Example 1 (contd.) Let P_r = ⟨P_r, I_r, N_r, map_r⟩ be a P2P system, where P_r consists of three peers p_1, p_2 and p_3, such that N_r(p_1) = N_r(p_2) = ∅ and N_r(p_3) = {p_1, p_2}.
Figure 1 summarizes the structure of the system P_r by showing, for each peer, its global schema, its source schema, and its local and peer mapping assertions. In particular, notice that the mapping assertions are such that map(p_1) = map(p_2) = ∅, and map(p_3) = { {X, Y | financial(X, Y)} ⇝ {X, Y | boss(X, Y)}, {X, Y | secretary(X, Y)} ⇝ {X, Y | boss(X, Y)} }. □
A source database for a P2P system P is a function D assigning to each peer p ∈ P such that I(p) = ⟨G_p, S_p, M_p⟩ a database instance D(p) for S_p.
A global database for P is a function B assigning to each peer p a database instance B(p) for G_p. Usually, we are interested in global databases that can be "retrieved" from a given source, as formalized below.
Given a source database D for P, a retrieved global database for D is a global database B that satisfies the mapping assertions M_p of each peer p, i.e., B is such that: for each p ∈ P and each (Q_{S_p} ⇝ Q_{G_p}) ∈ M_p, it is the case that Q_{S_p}^{D(p)} ⊆ Q_{G_p}^{B(p)}.
We denote by ret(P, D) the set of all the retrieved global databases for D in the system P.
Notice that in the definition above we are considering sound mappings: data retrieved from the sources by the mapping views are assumed to be a subset of the data that satisfy the corresponding global relation. This is a classical assumption in data integration, where sources in general do not provide all the intended extensions of the global schema, hence extracted data are to be considered sound but not necessarily complete.
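As an illustration, the containment condition above can be checked directly once databases and mapping queries are given in some executable form; the following Python sketch assumes, purely for illustration (these representations are not part of the formal framework), that a database is a dict mapping each peer to a set of (relation, tuple) facts and that each local mapping assertion is given as a pair of functions evaluating the two conjunctive queries.

def is_retrieved(global_db, source_db, local_mappings):
    # B is a retrieved global database for D if, for every peer p and every
    # local assertion Q_Sp ~> Q_Gp, the answers of Q_Sp over D(p) are contained
    # in the answers of Q_Gp over B(p).
    for peer, assertions in local_mappings.items():
        for q_src, q_glob in assertions:
            if not q_src(source_db[peer]) <= q_glob(global_db[peer]):
                return False
    return True

# Running example: the local mapping of p1 copies s1 into secretary.
D = {"p1": {("s1", ("Albert", "Bill"))}}
B = {"p1": {("secretary", ("Albert", "Bill"))}}
q_src = lambda facts: {t for (r, t) in facts if r == "s1"}
q_glob = lambda facts: {t for (r, t) in facts if r == "secretary"}
print(is_retrieved(B, D, {"p1": [(q_src, q_glob)]}))  # True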
Example 1 (contd.) Let D_r be a source database for the P2P system P_r such that D_r(p_1) is {s_1(Albert, Bill)}, D_r(p_2) consists of {s_2(John, Mary), s_2(Mary, Tom)}, and D_r(p_3) = {s_3(Mary, D1)}. Consider also the global database B_r such that B_r(p_1) = {secretary(Albert, Bill)}, B_r(p_2) = {financial(John, Mary), financial(Mary, Tom)} and B_r(p_3) = {employee(Mary, D1)}. Then, it is easy to see that B_r is a retrieved database for D_r in P_r, i.e., B_r ∈ ret(P_r, D_r).
Note that a global database B whose database B(p), for some peer p ∈ {p_1, p_2, p_3}, is a superset of B_r(p) is in ret(P_r, D_r) as well; we simply say that B is a superset of B_r. □
3.2 Models of Peer-to-Peer Systems
Given a source database D, it is particularly important to investigate whether it is possible to retrieve from D a database which satisfies the semantics of the network. Therefore, we next define a suitable notion of model for a P2P system. The approach has been inspired by the autoepistemic approach of [9]; in particular, we assume that peers propagate through mapping assertions only the values they really trust.
Definition 2 Let P = ⟨P, I, N, map⟩ be a P2P system, p ∈ P a peer with I(p) = ⟨G_p, S_p, M_p⟩ and G_p = ⟨Ψ_p, Σ_p⟩, and D a source instance for P. Then, a p-model for P w.r.t. D is a maximal nonempty set of global databases M ⊆ ret(P, D) such that:
1. for each B ∈ M, B(p) satisfies the constraints in Σ_p, and
2. for each assertion Q_q ⇝ Q_p ∈ map(p), it holds: ⋂_{B∈M} Q_q^{B(q)} ⊆ ⋂_{B∈M} Q_p^{B(p)}. □
Thus, according to Condition 1, any database in the p-model satisfies all the integrity constraints issued over the global schema of p; moreover, Condition 2 guarantees that peers communicate only those values that belong to all models, i.e., a cautious approach to the propagation is pursued. Finally, we point out that, as for local mapping assertions, peer mapping assertions are assumed to be sound.
Now, given that each peer singles out its models, a notion
of model for the whole system can be easily stated.
Definition 3 Let P = ⟨P, I, N, map⟩ be a P2P system. A model for P w.r.t. D is a maximal nonempty set M ⊆ ret(P, D) of global databases such that, for each p ∈ P, M is a p-model. If a model for P w.r.t. D exists, we say that D satisfies P, denoted by D |= P. □
For instance, in our running example, D_r does not satisfy P_r; indeed, the peer mapping assertions constrain the schema of p_3 to contain, in every global database (retrieved from D_r), the tuples boss(Albert, Bill), boss(John, Mary), boss(Mary, Tom), and employee(Mary, D1), which violate the integrity constraints over p_3, since Mary turns out to be both an employee and a manager. Therefore, retrieving data from D_r leads to an inconsistent scenario.
We conclude by noticing that deciding whether a P2P
system admits a model can be done efficiently. The result
can be proven by modifying the techniques in [9], in order
to first evaluate all the mappings in the network and then
check for the satisfaction of the integrity constraints over
peer schemas.
Theorem 4 Let P = ⟨P, I, N, map⟩ be a P2P system, and D be a database instance for P. Then, deciding whether there is a model for P w.r.t. D, i.e., D |= P, is feasible in polynomial time.
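The two-phase procedure sketched above (first evaluate all the mappings, then check the constraints) can be rendered roughly as follows. This is only an illustrative sketch, not the construction of [9]: it assumes mapping queries given as Python functions that do not introduce new constants, constraints given as predicates over a peer's set of facts, and it ignores the cautious-propagation subtleties of Definition 2.

def has_model(source_db, local_maps, peer_maps, constraints):
    # Phase 1: build the minimal retrieved global database from the sources.
    glob = {p: set(m(source_db[p])) for p, m in local_maps.items()}
    # Propagate peer mapping assertions until a fixpoint is reached
    # (the loop also handles cycles in the dependency graph).
    changed = True
    while changed:
        changed = False
        for target, rules in peer_maps.items():
            for source, m in rules:
                new = m(glob[source]) - glob[target]
                if new:
                    glob[target] |= new
                    changed = True
    # Phase 2: check the integrity constraints of every peer.
    return all(constraints[p](facts) for p, facts in glob.items())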
DEALING WITH AUTONOMOUS PEERS
As shown in our running example, in general data stored
in local and autonomous sources are not required to satisfy
constraints expressed on the global schema (for example
when a key dependency on
G is violated by data retrieved
from the sources). Thus, a P2P system may be unsatisfiable
w.r.t. a source database
D. In this section, we face the problem
of solving inconsistencies in P2P systems. Specifically,
we introduce a semantics for "repairing" a P2P system. To
this aim, we first provide a model for peer preferences, and
then show the impact of these individual preferences on the
cost of reaching a global agreed repair.
4.1 Peer Preferences and Repairs
Let P = ⟨P, I, N, map⟩ be a P2P system, and D be a source database instance for P. Next, we define a repair weighting function w_p^{(P,D)} for each peer p, encoding its preferences on candidate repairs of D. Formally, w_p^{(P,D)} is a polynomially-computable function assigning, to each source database instance D', a natural number that is a measure of the preference of p on having D' as a repair for D (the lower the number, the more preferred the repair).
As a quite simple, yet natural example of weighting function, we can consider the evaluation of the number of deletions performed to the peer's sources. In this case, we have that w_p^{(P,D)}(D') = |D'(p) △ D(p)|, which in fact corresponds to the size of the difference between D' and D restricted to the tuples of peer p. This weighting function is called cardinality-based in the following.
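Read this way, the cardinality-based function is immediate to compute; a minimal sketch, assuming databases are represented as dicts from peers to sets of facts:

def cardinality_weight(peer, original_db, candidate_db):
    # Number of tuples of `peer` on which the candidate differs from the original.
    return len(original_db[peer] ^ candidate_db[peer])

D_r  = {"p2": {("s2", ("John", "Mary")), ("s2", ("Mary", "Tom"))}}
D_r1 = {"p2": {("s2", ("John", "Mary"))}}
print(cardinality_weight("p2", D_r, D_r1))  # 1, as in the running example below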
Example 1 (contd.) Consider the source databases D_r^1, D_r^2, and D_r^3 such that: D_r^1(p_1) = D_r^2(p_1) = D_r^3(p_1) = D_r(p_1), D_r^1(p_2) = {s_2(John, Mary)}, D_r^2(p_2) = {s_2(Mary, Tom)}, D_r^3(p_2) = {}, D_r^1(p_3) = {}, D_r^2(p_3) = {s_3(Mary, D1)}, and D_r^3(p_3) = {s_3(Mary, D1)}.
Assume that, for each peer p, w_p^{(P_r,D_r)}(D) = |D(p) △ D_r(p)|, i.e., she prefers source repairs where the minimum number of tuples is deleted from D_r(p). Then,
w_{p_1}^{(P_r,D_r)}(D_r^1) = w_{p_1}^{(P_r,D_r)}(D_r^2) = w_{p_1}^{(P_r,D_r)}(D_r^3) = 0;
w_{p_2}^{(P_r,D_r)}(D_r^1) = w_{p_2}^{(P_r,D_r)}(D_r^2) = 1; w_{p_2}^{(P_r,D_r)}(D_r^3) = 2;
w_{p_3}^{(P_r,D_r)}(D_r^1) = 1; w_{p_3}^{(P_r,D_r)}(D_r^2) = w_{p_3}^{(P_r,D_r)}(D_r^3) = 0. □
The problem of solving inconsistency in "classical" data
integration systems has been traditionally faced by providing
a semantics in terms of the repairs of the global databases that the mapping forces to be in the semantics of the system [4, 7, 6]. Repairs are obtained by means of
addition and deletion of tuples according to some minimality
criterion.
We next propose a generalization of these approaches to
the P2P framework, which takes into account peers preferences
. To this aim, we focus on finding the proper set of facts
at the sources that imply as a consequence a global database
satisfying all integrity constraints. Basically, such a way of
proceeding allows us to easily take into account information
on preferences when trying to solve inconsistency, since repairing
is performed by directly focusing on those sources,
whose integration has caused inconsistency.
Definition 5 (Repair) Let P be a P2P system, p a peer, and D and D' two source databases. We say that D' is p-minimal if D' |= P, and there exists no source database D'' such that w_p^{(P,D)}(D'') < w_p^{(P,D)}(D') and D'' |= P. Then, D' is a repair for P w.r.t. D if D' is p-minimal for each peer p. □
Example 1 (contd.) It is easy to see that D_r^1, D_r^2, and D_r^3 satisfy P_r and they are all p_1-minimal. Indeed, peer p_1 has no preferences among the three databases, since w_{p_1}^{(P_r,D_r)}(D_r^1) = w_{p_1}^{(P_r,D_r)}(D_r^2) = w_{p_1}^{(P_r,D_r)}(D_r^3) = 0. Moreover, D_r^1 and D_r^2 are equally preferred by p_2, whereas D_r^2 and D_r^3 are equally preferred by p_3. Therefore, all peers agree on D_r^2, which is thus a repair for D_r w.r.t. P_r. However, neither D_r^3 is p_2-minimal, nor D_r^1 is p_3-minimal, and thus they are not repairs. □
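The selection made in the example can be reproduced by a brute-force sketch over an explicitly enumerated set of candidate databases (a toy setting; real instances require the machinery studied in Section 5): a candidate is a repair exactly when it attains every peer's minimum weight among the candidates.

def agreed_repairs(candidates, peers, weight):
    # A candidate is p-minimal iff no other candidate has a strictly smaller
    # weight for p; a repair must be p-minimal for every peer p.
    minima = {p: min(weight(p, d) for d in candidates) for p in peers}
    return [d for d in candidates if all(weight(p, d) == minima[p] for p in peers)]

# Weights of D_r^1, D_r^2, D_r^3 from the running example.
w = {("p1", 1): 0, ("p1", 2): 0, ("p1", 3): 0,
     ("p2", 1): 1, ("p2", 2): 1, ("p2", 3): 2,
     ("p3", 1): 1, ("p3", 2): 0, ("p3", 3): 0}
print(agreed_repairs([1, 2, 3], ["p1", "p2", "p3"], lambda p, d: w[(p, d)]))  # [2]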
We next define the semantics of a P2P system, in terms of models for those sources on which all the peers agree.
Definition 6 (Agreement) Let P = ⟨P, I, N, map⟩ be a P2P system, and D be an instance for P. The agreement for P w.r.t. D is the set of all of its models w.r.t. some repair, and will be denoted by Agr(P, D). □
Example 1 (contd.) D_r^2 is p-minimal, for each peer p, and it is easy to see that the set Agr(P_r, D_r) contains all databases belonging to some model for P_r w.r.t. D_r^2. In particular, it contains the supersets (satisfying the constraints) of the database B_r^2 such that B_r^2(p_1) = {secretary(Albert, Bill)}, B_r^2(p_2) = {financial(Mary, Tom)} and B_r^2(p_3) = {boss(Albert, Bill), boss(Mary, Tom), employee(Mary, D1)}. Moreover, no other global database is in Agr(P_r, D_r). □
We can finally characterize the answer to a user query in terms of the repairs for the system.
Definition 7 Let P = ⟨P, I, N, map⟩ be a P2P system, let D be a source database for it, and let Q be a query over the schema of a peer p. Then, the answer to Q is the evaluation of the query over all the possible agreed databases: ans(Q, p, P, D) = ⋂_{B ∈ Agr(P,D)} Q_p^{B(p)}. □
For instance, in our running example, the answer to the user query {X | boss(X, Y)} posed over peer p_3, which asks for all employees that have a boss, is {⟨Albert⟩, ⟨Mary⟩}, since this query is evaluated over the supersets of the database B_r^2 retrieved from D_r^2 only.
We conclude the section by noticing that Agr(P, D) is just a formal characterization of the semantics of a P2P system. Usually, we are not interested in computing such a set; and, in fact, for practical applications, suitable techniques and optimization algorithms should be investigated to handle inconsistency at query time (in the spirit of, e.g., [10]).
4.2 The Price of Autonomy
Given the framework presented so far, we are in the position of studying the effects of having autonomous peers repairing their source databases according to their own preferences. We next show that, in some cases, peers might not find an agreement on the way the repair has to be carried out. This is a somewhat expected consequence of having self-interested peers in the absence of global coordination.
Proposition 8 There exists a P2P system P' and a source database D such that there is no agreement, i.e., Agr(P', D) is empty.
Proof [Sketch]. Consider the P2P system P' = ⟨P', I', N', map'⟩, where P' consists of the peers challenger (short: c) and duplicator (short: d), that are mutually connected, i.e., N'(c) = {d} and N'(d) = {c}.
Peer c is such that I'(c) = ⟨G_c, S_c, M_c⟩, where the schema G_c consists of the predicates r_c(X) and mr_d(X) with constraints r_c(X) ∧ r_c(Y) → X = Y and r_c(X) ∧ mr_d(Y) → X ≠ Y; the source schema consists of the relation symbol s_c; and M_c contains only the assertion {X | s_c(X)} ⇝ {X | r_c(X)}.
Peer d is such that I'(d) = ⟨G_d, S_d, M_d⟩, where the schema G_d consists of the predicates r_d(X) and mr_c(X) with constraints r_d(X) ∧ r_d(Y) → X = Y and r_d(X) ∧ mr_c(Y) → X = Y; the source schema consists of the relation symbol s_d; and M_d contains only the assertion {X | s_d(X)} ⇝ {X | r_d(X)}.
Finally, map(c) contains the assertion {X | r_c(X)} ⇝ {X | mr_c(X)}, while map(d) contains the assertion {X | r_d(X)} ⇝ {X | mr_d(X)}.
Let D be a source database for P' such that D(c) = {s_c(0), s_c(1)} and D(d) = {s_d(0), s_d(1)}. We build four source databases, say D_1, D_2, D_3 and D_4, that satisfy P'. They are such that: D_1(c) = {}, D_1(d) = {s_d(0)}; D_2(c) = {}, D_2(d) = {s_d(1)}; D_3(c) = {s_c(0)}, D_3(d) = {}; D_4(c) = {s_c(1)}, D_4(d) = {}. Notice that all the other databases satisfying P' are proper subsets of these ones.
Then, by assuming that each peer wants to minimize the number of deletions in D, there exists no source database satisfying P' that is both c-minimal and d-minimal.
THE COMPLEXITY OF QUERY ANSWERING
In the light of Proposition 8, it is particularly relevant to investigate the complexity of dealing with peer agreements and query answering in such P2P data integration systems.
In this section, we first present some basic problems arising
in the proposed framework, and subsequently analyze their
computational complexity. This analysis is a fundamental
premise to devise effective and optimized implementations.
5.1 Problems
Given a P2P system P and a source database D for P, we consider the following problems:
RepairChecking: given a source instance D', is D' a repair for P w.r.t. D?
AgreementExistence: is Agr(P, D) ≠ ∅?
AnyAgreementComputation: compute a database B in the agreement Agr(P, D), if any.
QueryOutputTuple: given a query Q over a peer schema G_p and a tuple t, is t ∈ ans(Q, p, P, D)?
Intuitively, RepairChecking is the very basic problem of assessing whether a source instance at hand satisfies the data integration system. Then, AgreementExistence (and its corresponding computational version AnyAgreementComputation) asks for singling out scenarios where some agreement can in fact be computed. Finally, QueryOutputTuple represents the problem characterizing the intrinsic complexity of query answering in the proposed framework; indeed, it is the problem of deciding the membership of a given tuple in the result of query evaluation.
5.2 Results
Our first result is that checking whether all the peers are
satisfied by a given source database is a difficult task that
is unlikely to be feasible in polynomial time.
Theorem 9
RepairChecking is co-NP-complete. Hardness
holds even for cardinality-based weighting functions.
Proof [Sketch]. Membership. Consider the complementary problem of deciding whether there exists a peer p such that D' is not p-minimal. This problem is feasible in NP by guessing a source database D'' and checking in polynomial time that 1. D'' |= P, and 2. there exists a peer p such that w_p^{(P,D)}(D'') < w_p^{(P,D)}(D'). In particular, 1. is feasible in polynomial time because of Theorem 4, and 2. is feasible in polynomial time because our weighting functions are polynomially computable.
Hardness. Recall that deciding whether a Boolean formula in conjunctive normal form Φ = C_1 ∧ ... ∧ C_m over the variables X_1, ..., X_n is not satisfiable, i.e., deciding whether there exists no truth assignment to the variables making each clause C_j true, is a co-NP-hard problem.
We build a P2P system P_Φ such that: P_Φ contains a peer x_i for each variable X_i, a peer c_j for each clause C_j, and the distinguished peer e. The source schema of x_i (resp. c_j) consists of the unary relation s_{x_i} (resp. s_{c_j}), whereas the global schema consists of the unary relation r_{x_i} (resp. r_{c_j}). The source schema of e consists of the unary relations s_e and s_a, whereas its global schema consists of the unary relations r_e and r_a. For each source relation, say s_l, P_Φ contains a local mapping assertion of the form {X | s_l(X)} ⇝ {X | r_l(X)}. Each global relation of the form r_{x_i} is equipped with the constraint r_{x_i}(X_1) ∧ r_{x_i}(X_2) → X_1 = X_2, stating that each relation must contain one atom at most. Each global relation of the form r_{c_j} is equipped with the constraint r_{c_j}(tx_i) ∧ r_{c_j}(fx_i) → ⊥, where ⊥ is the empty disjunction, stating that for each variable x_i, r_{c_j} cannot contain both tx_i and fx_i at the same time. Moreover, peer e has also the constraint r_e(X_1) ∧ r_a(X_2) → X_1 = X_2.
Consider the source database D_Φ for P_Φ such that: D_Φ(x_i) = {s_{x_i}(tx_i), s_{x_i}(fx_i)}; for each x_i occurring in c_j, D_Φ(c_j) = {s_{c_j}(tx_i), s_{c_j}(fx_i)}; and D_Φ(e) = {s_e(t), s_e(f), s_a(t)}. Notice that, due to the constraints issued over peer schemas, any source database D', with D' |= P_Φ, is such that |D'(x_i)| ≤ 1, for each x_i. Therefore, the restriction of D' to the peers of the form x_i is in one-to-one correspondence with a truth-value assignment for Φ, denoted by σ(D'). Intuitively, the atom s_{x_i}(tx_i) (resp. s_{x_i}(fx_i)) means that variable X_i is set to true (resp. false), whereas the atom s_{c_j}(tx_i) means that the clause C_j is true, witnessed by the assignment for the variable X_i occurring in c_j.
Finally, the peer mapping assertions in P_Φ are defined as follows. For each variable X_i occurring positively (resp. negatively) in the clause C_j there are exactly two mappings of the form {r_{x_i}(tx_i)} ⇝ {r_{c_j}(tx_i)} and {r_{x_i}(fx_i)} ⇝ {r_{c_j}(fx_i)} (resp. {r_{x_i}(fx_i)} ⇝ {r_{c_j}(tx_i)} and {r_{x_i}(tx_i)} ⇝ {r_{c_j}(fx_i)}); moreover, for each clause C_j containing variables X_{j_1}, ..., X_{j_k}, there exists a mapping {r_{c_j}(fx_{j_1}) ∧ ... ∧ r_{c_j}(fx_{j_k})} ⇝ {r_e(f)}.
Figure 2 shows on the upper part the dependency graph G(P_Φ) for the formula Φ = (X_1 ∨ X_2) ∧ (X_3) ∧ (X_1 ∨ X_3 ∨ X_4) ∧ (X_4) ∧ (X_5 ∨ X_6 ∨ X_7) ∧ (X_4 ∨ X_6 ∨ X_8).
Assume that each peer wants to minimize the number of deletions in D_Φ. Then, given a source database D' minimal w.r.t. each peer in P_Φ but e, we can show that the above mappings encode an evaluation of the assignment σ(D'). In particular, it is easy to see that σ(D') is a satisfying assignment for Φ if and only if D'(e) contains the facts {s_e(t), s_a(t)}, i.e., one fact is deleted from the source of e only. Assume, now, that D' is such that D'(e) = {s_e(f)}, i.e., two facts are deleted from the source of e. Then, D' is also e-minimal if and only if Φ is not satisfiable. □
Given the above complexity result, one can easily see that AnyAgreementComputation is feasible in the functional version of Σ^p_2. Indeed, we can guess in NP a source instance D, build in polynomial time a model B for P w.r.t. D (by construction in Theorem 4), and check in co-NP that D is minimal for each peer.
Actually, we can do much better. In fact, we next show that the problem is complete for the polynomial time closure of NP, and thus remains at the first level of the polynomial hierarchy.
Theorem 10 AnyAgreementComputation is FP^NP-complete. Hardness holds even for cardinality-based weighting functions.
Proof [Sketch]. Membership. The problem can be solved
by processing peers in a sequential manner. For each peer in
P, we can find the minimum value of the associated preference
function by means of a binary search, in which at each
step we guess in NP a database instance and verify that
such a preference holds. After having collected the minimum
values for all peers, we conclude with a final guess to
get a repair
D, and a subsequent check that actually each
peer gets its minimum possible value for
P w.r.t. D.
Figure 2: Constructions in Proofs of Complexity Results.
Finally, a model for P w.r.t. D can be built in polynomial time (again, by construction in Theorem 4).
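The binary-search step of the membership argument can be pictured as follows; exists_db_with_weight_at_most is a hypothetical placeholder standing for the NP guess-and-check oracle, not an actual procedure of the paper.

def min_preference_value(peer, upper_bound, exists_db_with_weight_at_most):
    # Smallest w such that some source database satisfying the system has
    # weight at most w for `peer`; each test is one call to the NP oracle.
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_db_with_weight_at_most(peer, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo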
Hardness. Let Φ be a Boolean formula in conjunctive normal form Φ = C_1 ∧ ... ∧ C_m over the variables X_1, ..., X_n. Assume that each clause, say C_j, is equipped with a weight w_j (a natural number). Let σ be an assignment for the variables in Φ. Its weight is the sum of the weights of all the clauses satisfied in σ. The problem of computing the maximum weight over any truth assignment, called MAX-WEIGHT-SAT, is FP^NP-complete.
Consider again the construction in Theorem 9, and modify P_Φ as follows. The source schema of peer e consists of the relation s_w, whereas its global schema consists of the relations r_w and r_v, and of the constraint r_v(X) ∧ r_w(X, Y) → ⊥. The local mapping of e is {X, Y | s_w(X, Y)} ⇝ {X, Y | r_w(X, Y)}. Moreover, for each clause c_j over variables X_{j_1}, ..., X_{j_k}, map(e) contains the assertion {r_{c_j}(fx_{j_1}) ∧ ... ∧ r_{c_j}(fx_{j_k})} ⇝ {r_v(fc_j)}. Let P'_Φ be such a modified P2P system. Notice that G(P'_Φ) coincides with G(P_Φ) (see again Figure 2).
Consider now the database instance D'_Φ for P'_Φ obtained by modifying D_Φ such that D'_Φ(e) contains the atoms s_w(fc_j, 1), s_w(fc_j, 2), ..., s_w(fc_j, w_j) for each clause c_j. Intuitively, peer e stores w_j distinct atoms for each clause c_j.
Let D' be a source instance that satisfies P'_Φ. As in Theorem 9, the restriction of D' over the variables is in one-to-one correspondence with a truth assignment for Φ, denoted by σ(D'). Then, it is easy to see that peer e must delete in D' all the w_j distinct atoms corresponding to a clause C_j that is not satisfied by the assignment σ(D'). Therefore, the number of atoms deleted from the source of e is Σ_{i | C_i is false in σ(D')} w_i. Hence, the result easily follows, since computing the source instance that is e-minimal, say D, determines the maximum weight over any assignment for Φ as (Σ_i w_i) minus the number of atoms deleted from the source of e in D. □
We next focus on the AgreementExistence problem. Note that membership of this problem in Δ^p_2 is easily proven, after the above theorem. However, the reduction for the hardness part we shall exploit here is rather different.
Theorem 11 AgreementExistence is Δ^p_2-complete. Hardness holds even for cardinality-based weighting functions.
Proof [Sketch]. Membership is shown with the same line of reasoning of Theorem 10. For the hardness, consider again MAX-WEIGHT-SAT, and the Δ^p_2-complete problem of deciding whether it has a unique solution.
Let P'_Φ be the P2P system built in Theorem 10, and let P''_Φ be a copy of it, obtained by replacing each element (both relations and peers) r in P'_Φ by r'. Then, consider the system ~P_Φ obtained as the union of P'_Φ, P''_Φ and a fresh peer u. Figure 2 shows the dependency graph G(~P_Φ).
The local schema of u is empty, while its global schema consists of the unary relation r_u with the constraint r_u(bad_1) ∧ ... ∧ r_u(bad_n) → ⊥. The mapping assertions are as follows. For each variable X_i in Φ, map(u) contains {r_{x_i}(tx_i) ∧ r'_{x_i}(tx_i)} ⇝ {r_u(bad_i)} and {r_{x_i}(fx_i) ∧ r'_{x_i}(fx_i)} ⇝ {r_u(bad_i)}. It is worthwhile noting that, for the sake of simplicity, these mapping assertions are slightly more general than those allowed in the usual definition of P2P systems, since they involve joins among different peers. However, this is only a syntactical facility, as such a mapping can be easily simulated by introducing a suitable dummy peer.
The idea of the reduction is that, if the same assignment that maximizes the weight of the satisfied clauses is selected for both P'_Φ and P''_Φ, then r_u(bad_i) is pushed to u (for each i), thereby violating the constraint. Thus, there is a (non-empty) agreement in ~P_Φ if and only if there are at least two such assignments. □
We conclude our investigation by observing that query
answering is at least as hard as
AgreementExistence. Indeed
, intuitively, if peers are not able to find an agreement
in an inconsistent P2P system, then the answer to any given
query will be empty. Moreover, membership can be proven
by the same line of reasoning of Theorem 10, and we thus
get the following result.
Theorem 12 QueryOutputTuple is Δ^p_2-complete. Hardness holds even for cardinality-based weighting functions.
CONCLUSIONS
In this paper, we investigated some important theoretical issues in P2P data integration systems. Specifically, we introduced a setting in which peers take into account their own preferences over data sources, in order to integrate data when some inconsistency arises. This seems a natural setting for this kind of system, which has not been previously investigated in the literature. It turns out that there are scenarios where peers do not find any agreement on the way the repair should be carried out, and where some kind of centralized coordination is required.
Actually, our results show that this coordination comes with a cost and some basic problems are unlikely to be tractable. However, the problems studied in this paper are only mildly harder than the corresponding problems in traditional data integration systems.
This is an important feature of our approach, which paves the way for possible easy implementations, based on available systems.
In particular, a prototypical implementation appears viable with minor effort if done on top of integration systems that exploit a declarative approach to data integration (e.g., [18], where logic programs serve as executable logic specifications for the repair computation). Indeed, our complexity results show that logic engines able to express all problems in the second level of the polynomial hierarchy, such as the DLV system [19], suffice for managing the framework, once we provide appropriate logic specifications.
A number of interesting research questions arise from this
work.
First, it is natural to ask whether the framework
can be extended to the presence of existentially quantified
constraints. This can be easily done for some special syntactic
fragments, such as for non key-conflicting schemas,
i.e., global schemas enriched with inclusion dependencies
and keys, for which decidability in the context of data integration
systems has been proven in [7]. To this aim, one has
to modify the algorithm in [9] to propagate information in a P2P system by accounting for mapping assertions as well as for inclusion dependencies, and eventually check that after such propagation no key has been violated.
We conclude by noticing that an avenue of further research
is to consider more sophisticated peer-agreement semantics,
besides the Pareto-like approach described here.
For instance
, we may think of some applications where peers may
form cooperating groups, or do not cooperate at all. Another line of research is to enrich the setting with further kinds of peer preference criteria, by replacing or complementing the weighting functions proposed in this paper.
Acknowledgments
The work was partially supported by the European Commission
under project IST-2001-33570 INFOMIX.
Francesco Scarcello's work was also supported by ICAR-CNR
, Rende, Italy.
REFERENCES
[1] Serge Abiteboul, Richard Hull, and Victor Vianu. Foundations of Databases. Addison Wesley Publ. Co., Reading, Massachusetts, 1995.
[2] Marcelo Arenas, Leopoldo E. Bertossi, and Jan Chomicki. Consistent query answers in inconsistent databases. In Proc. of PODS'99, pages 68–79, 1999.
[3] P. Bernstein, F. Giunchiglia, A. Kementsietsidis, J. Mylopoulos, L. Serafini, and I. Zaihrayeu. Data management for peer-to-peer computing: A vision. In Workshop on the Web and Databases, WebDB, 2002.
[4] Leopoldo Bertossi, Jan Chomicki, Alvaro Cortes, and Claudio Gutierrez. Consistent answers from integrated data sources. In Proc. of FQAS'02, pages 71–85, 2002.
[5] Leopoldo E. Bertossi and Loreto Bravo. Query answering in peer-to-peer data exchange systems. In Proc. of EDBT Workshops 2004, pages 476–485, 2004.
[6] Loreto Bravo and Leopoldo Bertossi. Logic programming for consistently querying data integration systems. In Proc. of IJCAI'03, pages 10–15, 2003.
[7] Andrea Calì, Domenico Lembo, and Riccardo Rosati. On the decidability and complexity of query answering over inconsistent and incomplete databases. In Proc. of PODS'03, pages 260–271, 2003.
[8] Andrea Calì, Domenico Lembo, and Riccardo Rosati. Query rewriting and answering under constraints in data integration systems. In Proc. of IJCAI'03, pages 16–21, 2003.
[9] Diego Calvanese, Giuseppe De Giacomo, Maurizio Lenzerini, and Riccardo Rosati. Logical foundations of peer-to-peer data integration. In Proc. of PODS'04, pages 241–251, 2004.
[10] Thomas Eiter, Michael Fink, Gianluigi Greco, and Domenico Lembo. Efficient evaluation of logic programs for querying data integration systems. In Proc. of ICLP'03, pages 348–364, 2003.
[11] Enrico Franconi, Gabriel Kuper, Andrei Lopatenko, and Luciano Serafini. A robust logical and computational characterisation of peer-to-peer database systems. In Proc. of DBISP2P'03, pages 64–76, 2003.
[12] Enrico Franconi, Gabriel Kuper, Andrei Lopatenko, and Ilya Zaihrayeu. A distributed algorithm for robust data sharing and updates in P2P database networks. In Proc. of P2P&DB'04, pages 446–455, 2004.
[13] Enrico Franconi, Gabriel Kuper, Andrei Lopatenko, and Ilya Zaihrayeu. Queries and updates in the CoDB peer to peer database system. In Proc. of VLDB'04, pages 1277–1280, 2004.
[14] Gianluigi Greco, Sergio Greco, and Ester Zumpano. A logic programming approach to the integration, repairing and querying of inconsistent databases. In Proc. of ICLP'01, pages 348–364. Springer, 2001.
[15] Gianluigi Greco and Domenico Lembo. Data integration with preferences among sources. In Proc. of ER'04, pages 231–244, 2004.
[16] Alon Y. Halevy, Zachary G. Ives, Peter Mork, and Igor Tatarinov. Piazza: data management infrastructure for semantic web applications. In Proc. of WWW'03, pages 556–567, 2003.
[17] Maurizio Lenzerini. Quality-aware peer-to-peer data integration. In Proc. of IQIS'04, 2004.
[18] Nicola Leone, Thomas Eiter, Wolfgang Faber, Michael Fink, Georg Gottlob, Gianluigi Greco, Giovambattista Ianni, Edyta Kalka, Domenico Lembo, Maurizio Lenzerini, Vincenzino Lio, Bartosz Nowicki, Riccardo Rosati, Marco Ruzzi, Witold Staniszkis, and Giorgio Terracina. The INFOMIX system for advanced integration of incomplete and inconsistent data. In Proc. of SIGMOD'05, pages 915–917, 2005.
[19] Nicola Leone, Gerald Pfeifer, Wolfgang Faber, Thomas Eiter, Georg Gottlob, Simona Perri, and Francesco Scarcello. The DLV System for Knowledge Representation and Reasoning. ACM Transactions on Computational Logic. To appear.
[20] Luciano Serafini, Fausto Giunchiglia, John Mylopoulos, and Philip A. Bernstein. Local relational model: A logical formalization of database coordination. In Fourth International and Interdisciplinary Conference on Modeling and Using Context, CONTEXT 2003, pages 286–299, 2003.
[21] Igor Tatarinov and Alon Halevy. Efficient query reformulation in peer data management systems. In Proc. of SIGMOD'04, pages 539–550, 2004.
| Peer-to-Peer Systems;Data Integration Systems |
144 | On the Discovery of Significant Statistical Quantitative Rules | In this paper we study market share rules, rules that have a certain market share statistic associated with them. Such rules are particularly relevant for decision making from a business perspective. Motivated by market share rules, in this paper we consider statistical quantitative rules (SQ rules) that are quantitative rules in which the RHS can be any statistic that is computed for the segment satisfying the LHS of the rule. Building on prior work, we present a statistical approach for learning all significant SQ rules, i.e., SQ rules for which a desired statistic lies outside a confidence interval computed for this rule. In particular we show how resampling techniques can be effectively used to learn significant rules. Since our method considers the significance of a large number of rules in parallel, it is susceptible to learning a certain number of "false" rules. To address this, we present a technique that can determine the number of significant SQ rules that can be expected by chance alone, and suggest that this number can be used to determine a "false discovery rate" for the learning procedure. We apply our methods to online consumer purchase data and report the results. | INTRODUCTION
Rule discovery is widely used in data mining for learning
interesting patterns. Some of the early approaches for rule
learning were in the machine learning literature [11, 12, 21]. More
recently there have been many algorithms [1, 25, 28, 31] proposed
in the data mining literature, most of which are based on the
concept of association rules [1]. While all these various
approaches have been successfully used in many applications [8,
22, 24], there are still situations that these types of rules do not
capture. The problem studied in this paper is motivated by market
share rules, a specific type of rule that cannot be represented as
association rules. Informally, a market share rule is a rule that
specifies the market share of a product or a firm under some
conditions.
The results we report in this paper are from real user-level Web
browsing data provided to us by comScore Networks. The data
consists of browsing behavior of 100,000 users over 6 months. In
addition to customer specific attributes, two attributes in a
transaction that are used to compute the market share are the site
at which a purchase was made and the purchase amount. Consider
the example rules below that we discovered from the data:
(1) Household Size = 3 ∧ 35K < Income < 50K ∧ ISP = Dialup → marketshare_Expedia = 27.76%, support = 2.1%
(2) Region = North East ∧ Household Size = 1 → marketshare_Expedia = 25.15%, support = 2.2%
(3) Education = College ∧ Region = West ∧ 50 < Household Eldest Age < 55 → marketshare_Expedia = 2.92%, support = 2.2%
(4) 18 < Household Eldest Age < 20 → marketshare_Expedia = 8.16%, support = 2.4%
The market share for a specific site, e.g. Expedia.com, is
computed as the dollar value of flight ticket purchases (satisfying
the LHS of the rule) made at Expedia.com, divided by the total
dollar value of all flight ticket purchases satisfying the LHS. The
discovered rules suggest that Expedia seems to be doing
particularly well among the single households in the North East
region (rule 2), while it cedes some market in the segment of
teenagers (rule 4). Rules such as these are particularly relevant for
business since they suggest natural actions that may be taken. For
example, it may be worth investigating the higher market share
segments to study if there is something particularly good that is
being done, which is not being done in the lower market share
segments.
More generally, "market share" is an example of a statistic that is
computed based on the segment satisfying the antecedent of the
rule. Besides market share, various other quantitative statistics on
the set of transactions satisfying the LHS of a rule can be
computed, including mean and variance of an attribute. Prior
work on learning quantitative association rules [2, 33] studied the
discovery of rules with statistics such as the mean, variance, or
minimum/maximum of a single attribute on the RHS of a rule. In
this paper we generalize the structure of the rules considered in
[2] to rules in which the RHS can be any quantitative statistic that
can be computed for the subset of data satisfying the LHS. This
statistic can even be computed based on multiple attributes. We
term such rules as statistical quantitative rules (SQ rules).
With respect to learning SQ rules from data, we formulate the
problem as learning significant SQ rules that have adequate
support. We define an SQ rule to be significant if the specific
statistic computed for the rule lies outside a certain confidence
interval. This confidence interval represents a range in which the
statistic can be expected by chance alone. This is an important
range to identify if the rules discovered are to be interpreted as
suggesting fundamental relationships between the LHS and the
market share. For example, by chance alone if it is highly likely
that the market share of Expedia is between 25% and 30% for any
subset of data, then it is not at all clear that the rule relating
income and Expedia's market share (rule 1 in the example) is
identifying a fundamental relationship between income and the
market share.
While prior work [6, 9] has used confidence intervals to identify
significant rules, most of these approaches are either parametric
or specific for binary data. Building on prior work in this paper
we present a statistical approach for learning significant SQ rules
that is entirely non-parametric. In particular we show how
resampling techniques, such as permutation, can be effectively
used to learn confidence intervals for rules. Based on these
confidence intervals, significant rules can be identified. However,
since our method considers the significance of a large number of
rules in parallel, for a given significance level it is susceptible to
learning a certain number of false rules. To address this we
present an intuitive resampling technique that can determine the
number of false rules, and argue that this number can be used to
determine a "false discovery rate" for the learning procedure. The
practical significance of this approach is that we learn significant
SQ rules from data and specify what the false discovery rate
exactly is.
The paper is organized as follows. We first define SQ rules in the
next section. Section 3 presents an algorithm for computing
confidence intervals and Section 4 presents an algorithm for
learning significant SQ rules. In Section 5 we explain how the
false discovery rate for our approach can be computed. We
present detailed experimental results on real web browsing data in
Section 6 followed by a literature review and conclusions.
STATISTICAL QUANTITATIVE RULES
In this section we define SQ rules and significant SQ rules. Let A = {A_1, A_2, ..., A_n} be a set of attributes that will be used to describe segments and B = {B_1, B_2, ..., B_m} be another set of attributes that will be used to compute various statistics that describe the segment. Let dom(A_i) and dom(B_j) represent the set of values that can be taken by attribute A_i and B_j respectively, for any A_i ∈ A and B_j ∈ B. Let D be a dataset of N transactions where each transaction is of the form {A_1 = a_1, A_2 = a_2, ..., A_n = a_n, B_1 = b_1, B_2 = b_2, ..., B_m = b_m} where a_i ∈ dom(A_i) and b_j ∈ dom(B_j). Let an atomic condition be a proposition of the form value_1 ≤ A_i ≤ value_2 for ordered attributes and A_i = value for unordered attributes, where value, value_1, value_2 belong to the finite set of discrete values taken by A_i in D. Finally, let an itemset represent a conjunction of atomic conditions.
Definition 2.1 (SQ rule). Given (i) sets of attributes A and B, (ii) a dataset D and (iii) a function f that computes a desired statistic of interest on any subset of data, an SQ rule is a rule of the form:
X → f(D_X) = statistic, support = sup¹   (2.1)
where X is an itemset involving attributes in A only, D_X is the subset of D satisfying X, the function f computes some statistic from the values of the B attributes in the subset D_X, and support is the number of transactions in D satisfying X.
Note that the statistic on the RHS of the rule can be computed using the values of multiple attributes. The following examples are listed to demonstrate different types of rules that an SQ rule can represent. For ease of exposition we use the name of the desired statistic in the RHS instead of referring to it as f(D_X).
1. Quantitative association rules [2]:
population-subset → mean or variance values for the subset   (2.2)
Quantitative association rules are a popular representation for rules in the data mining literature in which the RHS of a rule represents the mean or variance of some attribute. Example: Education = graduate → Mean(purchase) = $15.00. (2.2) is a special case of (2.1), where f(subset) is the mean of some attribute B_j in the subset of data.
2. Market share rules:
Let {A_1, A_2, ..., A_n, MSV, P} be a set of attributes in a dataset D. MSV (Market Share Variable) is a special categorical attribute for which the market share values are to be computed. P is a special continuous variable that is the basis for the market share computation for MSV. For example, each transaction T_k may represent a book² purchased online. A_1 through A_n may represent attributes of the customer who makes the purchase, such as income, region of residence and household size. For each transaction, MSV is the variable indicating the online book retailer where the purchase was made. dom(MSV) may be {Amazon, Barnes&Noble, Ebay} and P is the price of the book purchased. For a specific v ∈ dom(MSV) a market share statistic can be computed as described below. Market share rules have the following form:
X → marketshare_v = msh, support = sup   (2.3)
where X is an itemset consisting of attributes in {A_1, A_2, ..., A_n} and marketshare_v is a statistic that represents the market share of a specific v ∈ dom(MSV). This is computed as follows. Let D_X represent the subset of transactions satisfying X and D_{X, MSV=v} represent the subset of transactions satisfying (X ∧ MSV = v). Then marketshare_v is computed as sum(P, D_{X, MSV=v}) / sum(P, D_X), where sum(P, D) is the sum of all the values of attribute P in the transactions in D.
¹ In association rules, support is the number of transactions satisfying both the LHS and RHS of a rule. In SQ rules, since the RHS is not an itemset, we define support as the number of transactions satisfying the LHS of a rule only.
² The provider, comScore Networks, categorizes each purchase into categories such as "book", "travel" and "consumer electronics". Hence we can generate datasets in which all transactions represent purchases in a single category, and this helps in the generation of market share rules representing specific categories.
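As a small illustration, the marketshare_v statistic can be computed directly from a transaction list; the attribute names and records below are made up for the example and are not from the comScore data.

def market_share(transactions, segment, msv_attr, msv_value, price_attr):
    # sum(P, D_{X, MSV=v}) / sum(P, D_X) for the segment D_X defined by `segment`.
    seg = [t for t in transactions if segment(t)]
    total = sum(t[price_attr] for t in seg)
    share = sum(t[price_attr] for t in seg if t[msv_attr] == msv_value)
    return share / total if total else 0.0

purchases = [
    {"household_size": 1, "region": "North East", "site": "Expedia", "price": 250.0},
    {"household_size": 1, "region": "North East", "site": "Travelocity", "price": 400.0},
]
print(market_share(purchases, lambda t: t["household_size"] == 1,
                   "site", "Expedia", "price"))  # 0.3846...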
Market share rules naturally occur in various applications,
including online purchases at various Web sites, sales
applications, and knowledge management applications. The
examples presented in the introduction are real market share rules
discovered in online purchase data. The following additional
examples illustrate the versatility and usefulness of market share
rules.
Within a specific product category (e.g. shoes) Footlocker
sells competing brands of shoes. In their transaction data, the
brand of the shoe can be the MSV and the purchase price is
P.
Consider a dataset of patents associated with some area (e.g.
hard disks). Each record may consist of several attributes
describing a patent, including one attribute (MSV) which
represents the organization to which the patent belongs and
another attribute that is always 1 (representing P and
indicating a granted patent) in the data. For a specific
organization, e.g. IBM, market share rules will represent the
percentage of patents that belong to IBM under some
conditions involving other attributes of the patent.
Definition 2.1 differs from the definition of quantitative rule [2,
33] as follows. First, it is not limited to mean and variance
statistics and assumes a much broader class of statistics, including
the market share statistics. Second, unlike quantitative rules, the
statistic of interest in the RHS of a rule can be computed based on
multiple attributes.
Definition 2.2 (Significant SQ rule). For a given significance level α ∈ (0, 1), let (stat_L, stat_H) be the (1 − α) confidence interval for a desired statistic, where this confidence interval represents the range in which the statistic can be expected by chance alone. An SQ rule X → f(D_X) = statistic, support = sup is significant if statistic lies outside the range (stat_L, stat_H).
The main objective of this paper is to discover all significant SQ
rules. The first challenge in learning significant SQ rules is in
constructing a confidence interval for the desired statistic such
that this interval represents a range of values for the RHS statistic
that can be expected by chance alone. In the next section we
present an algorithm for learning these confidence intervals.
COMPUTING CONFIDENCE INTERVALS
The first question that needs to be addressed is what is meant by
"a range for the statistic that can be expected by chance alone". In
this section we start by addressing this question and outline a
procedure by which such a range can be computed. Next we will
point out the computational challenge in implementing such a
procedure for learning these intervals for several SQ rules and
then outline three observations that will substantially help address
the computational problems. Based on these observations we
present a resampling-based algorithm for computing the
confidence intervals.
3.1 Motivation and outline of a procedure
For a given SQ rule, the desired confidence interval theoretically
represents the range in which the statistic can be expected when
there is no fundamental relationship between the LHS of the rule
and the statistic. More precisely, since the statistic is computed
from the values of the B attributes, the confidence interval
represents the range in which the statistic can be expected when
the A attributes are truly independent of the B attributes.
Without making any parametric distributional assumptions, such a
confidence interval can be generated using the classical nonparametric
technique of permutation. Indeed permutation-based
approaches have been commonly used to generate confidence
intervals in the statistics literature [16]. If R is the set of all attributes in a dataset, the basic idea in permutation is to create multiple datasets by randomly permuting the values of some attributes R_i ⊆ R. Such a permutation would create a dataset in which R_i is independent of (R - R_i), but would maintain the distributions of R_i and (R - R_i) in the permutation dataset to be the same as the distributions of these attributes in the original dataset. Table 3.1 illustrates one example of a permutation dataset D' in
which the B attributes are randomly permuted. Since a desired
statistic can be computed on each permutation dataset, a
distribution for the statistic can be computed based on its values
from the multiple permutation datasets. A confidence interval can
then be computed from this distribution.
Table 3.1 Dataset permutation

Original dataset D:          Permutation dataset D':
A1  A2  B1  B2               A1  A2  B1  B2
 1   2   3   8                1   2   5   6
 1   3   5   6                1   3   7   4
 2   3   7   4                2   3   3   8
As mentioned above, this is a commonly used procedure in nonparametric
statistics. The reason this procedure makes sense is as
follows. Even if there is a relationship between the LHS of an SQ
rule and the statistic on the RHS, by holding the A attributes fixed
and randomly re-ordering the values of the B attributes the
relationship is destroyed and the A attributes and B attributes are
now independent of each other. Repeating this procedure many
times provides many datasets in which the A attributes and B
attributes are independent of each other, while maintaining the
distributions of the A and B attributes to be the same as their
distributions in the original dataset. The values for the statistic
computed from the many permutation datasets are used to construct
a distribution for the statistic that can be expected when the A
attributes are truly independent of the B attributes.
Specifically, for the same itemset X, compare the following two SQ rules in D and D':

D:  X → f(D_X) = stat_D, support = sup_D        (3.1)
D': X → f(D'_X) = stat_D', support = sup_D'        (3.2)
First note that the supports of the rules are the same since the
number of records satisfying X in the permutation dataset is the
same as the original dataset. We will use this observation to build
a more efficient method for computing confidence intervals
shortly. A confidence interval for the rule in (3.1) can be computed using the following naïve procedure.
1. Create a permutation dataset D' from the original dataset D and compute stat_D' (as mentioned earlier in Section 2, the function f computes this number based on the records satisfying X).
2. Repeat step 1 N_perm > 1000 times (Footnote 3), sort all the N_perm stat_D' values in ascending order (stat_D'-1, stat_D'-2, ..., stat_D'-Nperm) and let the (α/2)-th and (1 - α/2)-th percentiles (Footnote 4) from this list be stat_D'-L and stat_D'-H. The N_perm values computed above represent a distribution for the statistic that can be expected by chance alone, while the percentile values from this distribution determine a specific confidence interval. (Below we use the terms "distribution" and "confidence interval" frequently.)
3. The (1 - α) confidence interval for the SQ rule in Equation (3.1) is (stat_D'-L, stat_D'-H).
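For illustration, this naïve procedure can be sketched in Python as follows; the split of the columns into A and B attributes and the statistic function f are assumptions supplied by the caller, and the index arithmetic mirrors the percentile convention used in step 2.

import numpy as np

def naive_ci(D, b_cols, f, n_perm=1999, alpha=0.05, seed=0):
    # f takes a (permuted) dataset and returns the RHS statistic of the rule
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_perm):
        D_perm = D.copy()
        # permute the B attributes jointly; the A attributes stay fixed
        shuffled = D[b_cols].sample(frac=1.0, random_state=int(rng.integers(1 << 31)))
        D_perm[b_cols] = shuffled.to_numpy()
        stats.append(f(D_perm))
    stats.sort()
    lo = stats[int((n_perm + 1) * alpha / 2) - 1]        # (alpha/2)-th percentile
    hi = stats[int((n_perm + 1) * (1 - alpha / 2)) - 1]  # (1 - alpha/2)-th percentile
    return lo, hi

# usage: lo, hi = naive_ci(D, ["MSV", "P"], lambda d: market_share(d, {"Household Size": 4}, "Amazon"))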
3.2 Computational challenges and solutions
Computing these confidence intervals for multiple candidate SQ
rules creates several computational problems which we will
address in this section. For example, if we need to test 10,000
potential significant rules (which is a reasonably small number for
data mining tasks), then we would need to repeat the above steps
10,000 times, and this means generating permutation datasets 10,000 × N_perm > 10^7 times and computing the desired statistic on each permutation dataset.
The following observations substantially reduce the
computational complexity of the procedure.
1. Sampling can be used instead of creating permutation datasets. For the SQ rule in Equation (3.1), computing stat_D' on a permutation dataset is really equivalent to computing stat_D' based on a random sample of sup_D records in D. This is the case since none of the A attributes play a role in the computation of the statistic. Permuting the dataset, identifying the sup_D records where X holds, and then computing the statistic on this subset achieves the same effect as picking a random sample of sup_D records from D and computing the statistic on this random subset. Hence, to determine the confidence interval for the SQ rule in Equation (3.1), instead of permuting the dataset N_perm times, it is enough to sample sup_D records from D N_perm times.
2. Some of the candidate SQ rules have the same support values as other rules. Based on this observation, confidence intervals for two SQ rules with the same support can be approximated by the same interval. This is the case since for a given rule the interval is generated by sampling sup_D records many times, and if another rule has support = sup_D then the interval for that rule will be similar if the same procedure is repeated (it will not be exactly the same because of randomization). Therefore, fewer confidence intervals need to be generated.

Footnote 3: N_perm is typically a big number. If we let N_perm = N!, which is the number of all possible permutations, we will be implementing a Monte Carlo test. On large datasets, such a test is impractical. For a statistic like market share, whose value is limited by 0 and 1, N_perm > 1000 makes the distribution precise to the third decimal place. In our experiments, N_perm = 1999.

Footnote 4: Since we do not have any prior assumption about the expected value of the statistic, we use a two-sided p-value.
3. It is adequate to generate a fixed number of intervals, independent of the number of rules considered. We observe that the interval for an SQ rule with support = sup_D can be approximated by an interval computed by sampling sup_E records, where sup_E is "reasonably close" to sup_D. This is a heuristic that we use to considerably reduce the complexity of the procedure. Denote by N_Rule the number of rules to be tested. If all rules have different support values, we need to construct N_Rule distributions. Instead, we construct a fixed number N_dist of distributions, such that for a rule "X → f(D_X) = statistic, support = sup", statistic is compared with the distribution that is constructed by sampling the closest number of transactions to sup. This heuristic is more meaningful when we consider support in terms of the percentage of transactions satisfying the LHS of a rule, which is a number between 0 and 1.
3.3 Algorithm CIComp
Based on the above observations, we present in Figure 3.1 algorithm CIComp for constructing N_dist distributions and determining the (1 - α) confidence intervals for a given significance level α.
Input: dataset D with N transactions, the number of distributions N_dist, the number of points in each distribution N_perm, a function f that computes the desired statistic, and significance level α.
Output: N_dist distributions and significance thresholds.
1   for (dist = 1; dist <= N_dist; dist++) {
2     N_sample = dist / N_dist × N;
3     for (perm = 1; perm <= N_perm; perm++) {
4       S = N_sample transactions from D sampled without replacement (Footnote 5);
5       stat[dist][perm] = f(S);
6     }
7     sort(stat[dist]);
8     LowerCI[dist] = stat[dist][(N_perm + 1) × α/2];
9     UpperCI[dist] = stat[dist][(N_perm + 1) × (1 - α/2)];
10  }
11  Output stat[][], LowerCI[], UpperCI[]
Figure 3.1 Algorithm CIComp
In the above algorithm, N_dist, N_perm, and α are user-defined parameters. α is usually chosen to be 5%, 2.5% or 1%. For N_dist and N_perm, the larger they are, the more precise the distributions will be. Let N = 1000, N_dist = 100, N_perm = 999, and α = 5%. We use these numbers as an example to explain the algorithm. For step 2, the first distribution corresponds to N_sample = dist/N_dist × N = 1/100 × 1000 = 10 transactions. Steps 3 to 6 compute N_perm = 999 statistics for 10 randomly sampled transactions from dataset D. Then we sort these 999 statistics and pick the α/2 and 1 - α/2 percentiles, which are the 25th and 975th numbers in the distribution, as the lower and upper thresholds for the (1 - α) confidence interval. Steps 2 through 9 are repeated N_dist = 100 times to get the desired number of distributions and confidence intervals.
Footnote 5: If the sampling is done with replacement then the interval will be the bootstrap confidence interval. The two intervals will essentially be the same when the support of the itemset is small.
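A direct Python transcription of CIComp might look as follows; this is a sketch only, in which the pandas DataFrame interface, the rounding of N_sample and the random number generator are our choices for illustration.

import numpy as np

def ci_comp(D, f, n_dist=100, n_perm=999, alpha=0.05, seed=0):
    # D is assumed to be a pandas DataFrame of transactions
    # returns stat[n_dist][n_perm], LowerCI[n_dist], UpperCI[n_dist]
    rng = np.random.default_rng(seed)
    N = len(D)
    stat = np.empty((n_dist, n_perm))
    lower = np.empty(n_dist)
    upper = np.empty(n_dist)
    for dist in range(1, n_dist + 1):
        n_sample = max(1, round(dist / n_dist * N))            # step 2
        for perm in range(n_perm):                             # steps 3-6
            idx = rng.choice(N, size=n_sample, replace=False)  # sampling without replacement
            stat[dist - 1, perm] = f(D.iloc[idx])
        stat[dist - 1].sort()                                  # step 7
        lower[dist - 1] = stat[dist - 1][int((n_perm + 1) * alpha / 2) - 1]        # step 8
        upper[dist - 1] = stat[dist - 1][int((n_perm + 1) * (1 - alpha / 2)) - 1]  # step 9
    return stat, lower, upper

# f takes a random sample of transactions rather than D_X, which is exactly the
# simplification justified by observation 1 above, e.g.
# f = lambda S: S.loc[S["MSV"] == "Amazon", "P"].sum() / S["P"].sum()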
The computational complexity of the algorithm in Figure 3.1 is O(N × N_perm × N_dist), whereas the complexity of the naïve method is O(N × N_perm × N_rule). Note that N_dist can be fixed to a reasonably small number, e.g. 100, whereas N_rule is the number of rules that are being tested and can easily be orders of magnitude larger than N_dist.
DISCOVERING SQ RULES
Given the distributions and confidence intervals, discovering all
significant statistical rules is straightforward. Algorithm
SigSQrules is presented in Figure 4.1.
Input: dataset D with N transactions, sets of attributes A and B, N_dist, stat[][], LowerCI[], and UpperCI[] from algorithm CIComp, a function f that computes the desired statistic, minimum support minsup and a large itemset generation procedure largeitemsets.
Output: set of significant rules, sigrules.
1   L = largeitemsets(D, A, minsup)  # generates large itemsets involving attributes in A
2   sigrules = {}
3   forall (itemsets x ∈ L) {
4     x.stat = f(D_x)  // statistic computed on transactions satisfying x
5     dist = round(support(x) / N × N_dist)
6     if x.stat ∉ (LowerCI[dist], UpperCI[dist]) {
        // x → f(D_x) = x.stat is significant
7       x.pvalue = 2 × min(q%, 1 - q%), where q is the percentile of x.stat in stat[dist][1..N_perm]
8       sigrules = sigrules ∪ { x → f(D_x) = x.stat, support = support(x) }
9     }
10  }
Figure 4.1 Algorithm SigSQrules
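A corresponding Python sketch of SigSQrules is shown below; itemsets are again represented as attribute/value dictionaries, and the clamping of dist to a valid index is our own addition for robustness, not part of the algorithm in Figure 4.1.

import numpy as np

def sig_sq_rules(D, itemsets, f, stat, lower, upper):
    # stat, lower, upper come from ci_comp; itemsets from any Apriori-style procedure
    N = len(D)
    n_dist = stat.shape[0]
    sigrules = []
    for X in itemsets:                                    # X is a dict {attribute: value}
        mask = np.ones(N, dtype=bool)
        for attr, val in X.items():
            mask &= (D[attr] == val).to_numpy()
        D_X = D[mask]
        sup = int(mask.sum())
        x_stat = f(D_X)                                   # step 4
        dist = min(max(round(sup / N * n_dist), 1), n_dist)     # step 5 (clamped)
        if not (lower[dist - 1] < x_stat < upper[dist - 1]):    # step 6: outside the CI
            q = float((stat[dist - 1] < x_stat).mean())         # percentile of x.stat
            pvalue = 2 * min(q, 1 - q)                          # two-sided p-value (step 7)
            sigrules.append({"X": X, "stat": x_stat,
                             "support": sup / N, "pvalue": pvalue})  # step 8
    return sigrules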
Given the N_dist distributions constructed by the algorithm CIComp, we use the above algorithm to discover all significant SQ rules. We continue to use the example N = 1000, N_dist = 100, and N_perm = 999 to describe the steps in Figure 4.1. Note that the attributes in A represent the attributes in the dataset that are used to describe segments for which statistics can be computed. Step 1 uses any large itemset generation procedure in the rule discovery literature to generate all large itemsets involving attributes in A. The exact procedure used will depend on whether the attributes in A are all categorical or not. If they are, then the Apriori algorithm can be used to learn all large itemsets. If some of them are continuous then other methods such as the ones described in [31] can be used. Step 4 computes the statistic function for each large itemset x. In step 5, we find out which distribution is to be used for the significance test. For example, if support(x) = 23, then support(x)/N × N_dist = (23/1000) × 100 = 2.3, and hence dist will be round(2.3) = 2. We would compare x.stat with its corresponding confidence interval (LowerCI[2], UpperCI[2]) in step 6. If x.stat is outside of the confidence interval, the rule is significant, and we use step 7 to calculate its 2-sided p-value. If x.stat is the q-th percentile, the 2-sided p-value is 2 × min(q%, 1 - q%). The p-value is not only a value to understand how
significant a rule is, but is also useful for determining the false
discovery rate in Section 5. Note that the confidence interval used
to test significance of a rule is approximate since we do not
compute this interval for the exact value of the support of this
rule. Instead we use the closest interval (which was pre-computed
as described in Section 3.2) corresponding to this support value.
In future research we will quantify the effects of this
approximation.
We would also like to point out that in many cases (see below) the
computation of the statistic can be done efficiently within the
itemset generation procedure (largeitemsets) itself. This can be used to modify the algorithm to make it more efficient once a specific itemset generation procedure is used. This is the case if the function f that computes the statistic on transactions T_1, T_2, ..., T_s is a recursive function of s, that is,

f(T_1, T_2, ..., T_s) = g( f(T_1, T_2, ..., T_{s-1}), f(T_s), s )        (4.1)

Many statistics, such as mean and market share, are recursive. For example, Mean(T_1, T_2, ..., T_s) = [ Mean(T_1, T_2, ..., T_{s-1}) × (s - 1) + Mean(T_s) ] / s.
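As a toy illustration of Eq. (4.1), a running mean can be updated one transaction at a time; a market share statistic would analogously keep two running sums, one of P over D_{X, MSV=v} and one over D_X.

def update_mean(prev_mean, s, new_value):
    # Mean(T_1,...,T_s) = [Mean(T_1,...,T_{s-1}) * (s - 1) + Mean(T_s)] / s
    return (prev_mean * (s - 1) + new_value) / s

running = 0.0
for s, value in enumerate([4.0, 6.0, 11.0], start=1):
    running = update_mean(running, s, value)
# running is now 7.0, the mean of the three values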
In this section we presented an algorithm SigSQrules for
generating significant SQ rules. However, as mentioned in the
introduction, for any given level of significance for a rule, the fact
that thousands of rules are evaluated for their significance makes
it possible to discover a certain number of false rules. This is the
well known multiple hypothesis testing problem [4]. While it is
difficult to eliminate this problem, it is possible to quantify this
effect. In the next section we discuss the problem of false
discovery in detail and present an algorithm for determining the
false discovery rate associated with the discovery of significant
SQ rules.
FALSE DISCOVERY OF SQ RULES
As mentioned above, when multiple rules are tested in parallel for
significance, it is possible to learn a number of "false" rules by
chance alone. Indeed, this is a problem for many rule discovery
methods in the data mining literature. The false discovery rate
(FDR) is the expected percentage of false rules among all the
discovered rules. Prior work in statistics has taken two approaches
to deal with the multiple hypothesis testing problem [4, 17, 34].
One approach attempts to lower the false discovery rate by
adjusting the significance level at which each rule is tested. As we
will describe below, this approach is not suitable for data mining
since it will result in very few rules being discovered. The second
approach assumes that a given number of false discoveries should
be expected, and focuses on estimating what the false discovery
rate (FDR) exactly is. This is more useful for data mining, since it
permits the discovery of a reasonable number of rules, but at the
same time computes a FDR that can give users an idea of what
percentage of the discovered rules are spurious. In this section, we
first review key ideas related to the multiple hypotheses testing
problem and then present a nonparametric method to determine
false discovery rate for our procedure.
For significance tests for a single rule, the significance level α is defined as the probability of discovering a significant rule when the LHS and RHS of the rule are actually independent of each other; in other words, α is the probability of a false (spurious) discovery. For example, on a random dataset where all attributes are independent, if we test 10,000 rules at α = 5%, then by definition of α we expect 10,000 × 5% = 500 false discoveries by pure chance
alone. When some of the attributes are dependent on each other,
as is the case for most datasets on which rule discovery methods
are used, the above approach cannot be used to get an expectation
for the number of false rules. In such cases, two approaches are
possible. In statistics, a measure called familywise error rate
(FWER) is defined as the probability of getting at least one false
rule output. Most conventional approaches in statistics that deal with the multiple hypotheses testing problem use different methods to control FWER by lowering the significance level for each individual rule, α_ind. For example, Bonferroni-type procedures would have α_ind = α / (the number of rules tested), which is 5% / 10,000 = 5 × 10^-6. However, when the number of hypotheses tested is large (as is the case in data mining algorithms), an extremely low α value, e.g. 5 × 10^-6, will result in very few rules being discovered. The other type of approach, as taken recently in [4], estimates the false discovery rate (FDR), the expectation of the proportion of false discoveries in all discoveries.
Table 5.1 Confusion matrix of the number of rules

                              Non-Significant Rules    Significant Rules
LHS independent of RHS                a                        b
LHS dependent on RHS                  c                        d
In Table 5.1, the number of rules tested is (a + b + c + d), out of
which (a + b) is the number of rules where the LHS of the rules is
truly independent of the RHS, and (c + d) is the number of rules
where there is a real relationship between the LHS and the RHS
of the rules. The columns determine how many tested rules are
output as significant or non-significant. The two terms FDR and
FWER can be defined precisely as FDR = Exp(b / (b + d)) and FWER = Prob(b > 0).
We adopt FDR estimation in this section because it effectively
estimates false discoveries without rejecting too many discovered
rules. However, the method proposed in the literature [4, 7, 35]
for FDR cannot be used for large scale rule discovery because of
the following two reasons: first, the assumption that statistics of
the rules tested are independent from each other (which some of
the approaches use) is not true. For example, rules A
1
= 1
Mean(D
A1 = 1
) and A
1
= 1
A
2
= 2
Mean(D
A1 = 1
A2 = 2
) are not
independent. In fact a large number of rules are related to each
other in rule discovery because their LHS share common
conditions and RHS come from the same attributes. Second,
methods in statistics draw conclusions based on the number of
rules tested (= a + b + c + d), however, as indicated in [25], a and
c are unknown values due to the filtering by support constraint.
Without making any assumptions, below we present another
permutation-based method to estimate the FDR for our procedure
for learning significant SQ rules.
Denote by N_sig(α) the number of significant rules discovered from dataset D when the significance level is α. In Table 5.1, N_sig(α) = b + d. Similar to the procedure described in Section 3, by keeping the values in attributes A intact and permuting the B attributes, we get a permutation dataset D'. Since we remove any relationship between the A and B attributes by this procedure, all the LHS and RHS statistics of each rule tested in D' are independent. If we apply the significant rule discovery algorithm SigSQrules, the number of significant rules discovered from D' when the significance level is α will be one instance of the number of false discoveries, that is, N_sig-perm(α) = b. It is easy to see that by creating multiple permutation datasets, we can estimate the expectation of the number of false discoveries and thus compute a false discovery rate FDR = Exp(N_sig-perm(α)) / N_sig(α). We describe in detail in the Appendix how FDR(α) can be estimated.
In this section, we described the problem of multiple hypotheses
testing and pointed out that for any given significance level a
certain number of significant SQ rules will be discovered by
chance alone. We then described an intuitive permutation based
procedure to compute the false discovery rate. From a practical
point of view the procedure described above can be used in
conjunction with SigSQrules to discover a set of significant SQ
rules and provide a number representing the percentage of these
rules that are likely to be spurious.
EXPERIMENTS
In this section we present results from learning significant market
share rules, a specific type of SQ rules. We started with user-level
online purchase data gathered by comScore Networks, a market
data vendor. The data consist of 100,000 users' online browsing
and purchase behavior over a period of six months. The market
data vendor tracks all online purchases explicitly by parsing the
content of all pages delivered to each user. Starting from the raw
data we created a dataset of purchases where each transaction
represents a purchase made at an online retailer. Attributes of the
transaction include user demographics, the site at which the
purchase was made, the primary category (e.g. books, home
electronics etc) of the site, the product purchased and the price
paid for the product. Within a specific category, e.g. books,
significant market share rules would be particularly interesting to
discover. We selected many datasets with purchases belonging to
each specific category and applied our method to learn several
interesting significant market share rules. Due to space limitations we
do not present all the results, but report the results for learning
market share rules for the top three retailers in the online book
industry. Specifically the dataset consists of all transactions in
which a book was purchased at any site and we use the methods
presented in the paper to learn market share rules for the top 3
sites Amazon.com, Barnes&Noble and Ebay. The total number
of transactions was 26,439 records and we limit the rules to
having at most five items on the LHS of a rule.
6.1 Rule Examples
Among the most significant market share rules (as determined by
the p-values of these rules), we picked four rules to list that were
particularly interesting for each online retailer.
Amazon.com
(1) Education = High School → marketshare_Amazon = 42.72%, support = 20.7%, CI = (46.07%, 50.92%)
(2) Region = West ∧ Household Size = 2 → marketshare_Amazon = 57.93%, support = 7.9%, CI = (44.36%, 52.50%)
(3) Region = South ∧ Household Size = 4 → marketshare_Amazon = 38.54%, support = 5.4%, CI = (43.76%, 53.39%)
(4) 35 < Household Eldest Age < 40 ∧ ISP = Broadband → marketshare_Amazon = 60.52%, support = 4.3%, CI = (42.88%, 53.99%)

Barnesandnoble.com
(1) Education = Graduate ∧ Household Size = 2 → marketshare_BN = 13.12%, support = 6.0%, CI = (16.81%, 25.68%)
(2) 50 < Household Eldest Age < 55 ∧ Income > 100K → marketshare_BN = 30.28%, support = 4.2%, CI = (16.05%, 26.79%)
(3) Region = South ∧ Household Size = 3 ∧ Child = Yes → marketshare_BN = 13.27%, support = 4.2%, CI = (16.68%, 26.10%)
(4) Region = South ∧ 60 < Household Eldest Age < 65 → marketshare_BN = 39.84%, support = 2.8%, CI = (15.55%, 27.10%)

Ebay.com
(1) Education = College ∧ Region = South → marketshare_Ebay = 8.28%, support = 6.9%, CI = (11.70%, 17.71%)
(2) Education = College ∧ Region = North Central → marketshare_Ebay = 21.77%, support = 4.0%, CI = (11.05%, 18.29%)
(3) Region = South ∧ Income > 100K → marketshare_Ebay = 4.83%, support = 2.9%, CI = (9.54%, 20.46%)
(4) 18 < Household Eldest Age < 20 → marketshare_Ebay = 27.50%, support = 2.8%, CI = (10.12%, 19.70%)
Rule (4) for Amazon.com indicates that it is doing particularly
well in households with middle-aged heads that have broadband
access. The market share for Amazon.com in this segment lies
significantly outside the confidence interval computed for the
rule. On the other hand, rule (1) for Barnesandnoble.com shows
that they are doing poorly selling to a segment which perhaps
represents well educated couples. Given that this is a large
segment (support = 6%), this rule suggests that they could try and
examine why this is the case and how they can achieve greater
penetration in this segment. In Ebay's case, all four rules are very
interesting. Rule (4) indicates that they have high market share
among teenagers, while rule (3) describes a segment they clearly
have trouble penetrating. For many other categories too (travel
and home electronics in particular) the significant SQ rules that
we learned were highly interesting. As these examples suggest,
these rules can be insightful, identify interesting segments and
have significant business potential.
6.2 Varying support and significance levels
To test how the methods perform as the minimum support and
significance levels vary, for one site we generated significant SQ
rules for many values of the minimum support and significance
level parameters. Figures 6.1 and 6.2 show how the number of
significant rules and the false discovery rate vary with support.
As the minimum support threshold is lowered the number of
significant SQ rules discovered increases. However the FDR
increases as the support threshold is lowered, suggesting a
tradeoff between discovering many significant rules and keeping the FDR low. A practical outcome is that it may be desirable to have a higher minimum support (to keep the FDR low), but not one so high that very few rules are discovered. Figures 6.3
and 6.4 illustrate a similar tradeoff for the significance level
parameter. As α decreases, the FDR is lower, but this results in fewer significant rules being discovered. Again, the implication is that it may be desirable to have a low α (to keep the FDR low) but not so low that very few rules are discovered.
Figure 6.1. Effect of support on the number of significant rules (curves for α = 10%, 5%, 2.5%, 1%)
Figure 6.2. Effect of support on FDR (curves for α = 10%, 5%, 2.5%, 1%)
Figure 6.3. Effect of α on FDR (curves for support = 1%, 2%, 5%, 10%)
Figure 6.4. Effect of α on the number of significant rules (curves for support = 1%, 2%, 5%, 10%)
6.3 Summary results for online book retailers
Based on this general tradeoff we chose a minimum support of 2% and an α of 2.5% in order to report summary results for the
three sites. Table 6.1 summarizes the number of significant rules
discovered and the false discovery rates of the procedure. As the
values in the table and the examples above show, our procedure
can be used effectively to learn a good set of significant SQ rules
while keeping the false discovery rates reasonable.
Table 6.1 Summary of results

Web site          Significant Rules    False Discovery Rate
Amazon                  651                   6.30%
Barnesandnoble          393                   9.67%
Ebay                    679                   5.60%
In this section we first presented compelling examples of rules
discovered that illustrate the potential of learning significant
market share rules. We then examined how the number of
significant rules discovered and the false discovery rate changes
with the support and significance level (
) parameters. The results
of this analysis suggested a tradeoff between generating
significant rules and keeping the false discovery rate low. Based
on this tradeoff we identified a specific value of the support and
significance parameters and showed the number of rules
discovered for these values.
RELATED WORK
We compare our work with the literature based on three aspects:
rule structure, rule significance, and methodology.
Rule structure. Rule discovery methods on quantitative datasets can be traced back to [29], where rules of the form x_1 < A < x_2 → y_1 < B < y_2 are discovered. [31] extends the structure to conjunctions of multiple conditions on both the antecedent and the consequent of a rule, and proposes a discovery method based on the Apriori algorithm [1]. Although rules in [31] are important, partitions like y_1 < B < y_2 for continuous attributes on the RHS of a rule only give a partial description of the subset satisfying the LHS of the rule, and partial descriptions are sometimes misleading. Observing this problem, [2] introduces a new structure where the consequent of a rule is Mean(D_X) or Variance(D_X) to summarize the behavior of the subset satisfying the antecedent. [33] further extends the form of the consequent of the rule, such that it can be Min(D_X), Max(D_X), or Sum(D_X).
Our rule structure is based on prior work: the antecedent is
conjunctions of conditions, while the consequent can be any
aggregate function f on multiple attributes to describe the
behavior of the subset satisfying the antecedent.
Rule significance. Any combination of attributes with conditions can potentially form a rule. Researchers use different measurements, e.g. support and confidence, to select only important rules from all possible rules. Based on the support and confidence framework, many metrics have been developed, such as gain [15], conviction [10], and unexpectedness [27]. Although these metrics can be generalized to rules where the antecedent and consequent are both conjunctions of the form value_1 < Attribute < value_2 for quantitative datasets, they are not applicable to rules whose consequent is a function, such as Mean(D_X) or, in general, f(D_X). To solve this non-trivial problem, we use statistical significance tests to evaluate rules, so that the consequent of a rule is not expected by chance alone. In the data mining literature, statistical significance tests are commonly used in many applications. For example, chi-square (χ^2) is a statistic to test correlations between attributes in binary or categorical data, and it has been applied to discover correlation rules [9], actionable rules [23], and contrast sets [3, 32]. For sparse data, [35, 36] employ Fisher's Exact Test to detect anomaly patterns for disease outbreaks. As mentioned in Section 3, these two tests are special cases of our significance test when we apply our significance definition to categorical data. For quantitative rules in [2], the authors use a standard Z-test to determine the significance of the inequality of means between a subset D_X and its complement D - D_X. [33] defines a new measurement, impact, to evaluate quantitative rules, where impact can identify those groups that contribute most to some outcome, such as profits or costs. For areas other than rule discovery, standard Z-tests with log-linear models are used in Exploratory Data Analysis for OLAP data cubes [30]. Our significance test is different from the above primarily because (i) our significance definition is applicable to any user-defined aggregate function f(D_X), and (ii) we use nonparametric methods to construct distributions and confidence intervals in which f(D_X) is expected from random effects alone.
Methodology. Nonparametric statistics is philosophically related
to data mining, in that both methods typically make no
assumptions on distributions of data or test statistics. Even with a known distribution of a statistic, nonparametric methods are useful for estimating parameters of the distribution [13]. Nonparametric methods are widely used for testing models that are built from data: as early as [18], the author uses
randomization tests to tackle a model overfitting problem; [20]
compares bootstrap and cross-validation for model accuracy
estimation; for decision trees, [14, 26] use permutation tests to
select attributes based on 2×2 contingency tables. Rule discovery aims to learn local features, which are inherently different from
models. Although we have seen methods using parametric
hypothesis testing approaches to learning rules from a dataset [5, 6], no prior work has been found on discovering a large number of
rules based on nonparametric significance tests.
The problem of multiple hypothesis testing/multiple comparison
is well known in rule discovery, a good review of which can be
found in [19]. On sparse binary data, [25] shows that with proper
support and confidence control, very few false rules will be
discovered. However, rule discovery on quantitative data faces
much more complicated challenges, and conventional p-value
adjustment methods cannot be directly applied. To solve this
problem, we employ the false discovery rate (FDR) [4] metric to estimate the
number of false rules discovered due to testing a large number of
rules. In data mining, FDR has been shown useful in [7, 36] for
categorical data with a known number of hypotheses, and we
extend it to quantitative rules with resampling methods.
CONCLUSION
In this paper we defined a new category of rules, SQ rules, and
the significance of SQ rules, on quantitative data. Then we
presented a permutation-based algorithm for learning significant
SQ rules. Furthermore, we showed how an explicit false discovery
rate can be estimated for our procedure, which makes the
approach useful from a practical perspective. We presented
experiments in which we discovered market share rules, a specific
type of SQ rules, in real online purchase datasets and
demonstrated that our approach can be used to learn interesting
rules from data.
We would also like to point out that it is possible to compute the
false discovery rate (FDR) for several possible significance levels
in an efficient manner (without creating permutation datasets for
each significance level). Although a detailed presentation of this
is beyond the scope of this paper, in the appendix we provide an
overview of how this can be done. One main advantage of being
able to do this is that significant SQ rules can be discovered at a
chosen significance level that is computed from some desired
FDR. Hence rather than just estimating FDR we may be able to
discover significant rules given a specific FDR. However this
needs to be studied in greater detail in future work.
REFERENCES
[1] Agrawal, R. and Srikant, R., Fast Algorithms for Mining
Association Rules, in Proceedings of the 20th International
Conference on Very Large Databases, Santiago, Chile, 1994.
[2] Aumann, Y. and Lindell, Y., A Statistical Theory for
Quantitative Association Rules, in Proceedings of The Fifth
ACM SIGKDD Int'l Conference on Knowledge Discovery
and Data Mining, pp. 261-270, San Diego, CA, 1999.
[3] Bay, S. D. and Pazzani, M. J., Detecting Change in
Categorical Data: Mining Contrast Sets, in Proceedings of
the Fifth ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, pp. 302 - 306, San
Diego, CA, 1999.
[4] Benjamini, Y. and Hochberg, Y., Controlling the False
Discovery Rate: A Practical and Powerful Approach to
Multiple Testing, Journal of Royal Statistical Society B, vol.
57, iss. 1, pp. 289-300, 1995.
[5] Bolton, R. and Adams, N., An Iterative Hypothesis-Testing
Strategy for Pattern Discovery, in Proceedings of the Ninth
ACM SIGKDD Int'l Conference on Knowledge Discovery
and Data Mining, pp. 49-58, Washington, DC, 2003.
[6] Bolton, R. J. and Hand, D. J., Significance Tests for Patterns
in Continuous Data, in Proceedings of the 2001 IEEE
International Conference on Data Mining, pp. 67-74, San
Jose, CA, 2001.
[7] Bolton, R. J., Hand, D. J., and Adams, N. M., Determining
Hit Rate in Pattern Search, in Pattern Detection and
Discovery, ESF Exploratory Workshop, pp. 36-48, London,
UK, 2002.
[8] Brijs, T., Swinnen, G., Vanhoof, K., and Wets, G., Using
Association Rules for Product Assortment: Decisions Case
Study, in Proceedings of the Fifth ACM SIGKDD
International Conference on Knowledge Discovery and Data
Mining, pp. 254-260, San Diego, CA, 1999.
[9] Brin, S., Motwani, R., and Silverstein, C., Beyond Market
Baskets: Generalizing Association Rules to Correlations, in
Proceedings of the ACM SIGMOD/PODS '97 Joint
Conference, pp. 265-276, Tucson, AZ, 1997.
[10] Brin, S., Motwani, R., Ullman, J. D., and Tsur, S., Dynamic
Itemset Counting and Implication Rules for Market Basket
Data, in Proceedings ACM SIGMOD International
Conference on Management of Data (SIGMOD'97), pp. 255-264
, Tucson, AZ, 1997.
[11] Clark, P. and Niblett, T., The CN2 Induction Algorithm,
Machine Learning, vol. 3, pp. 261-283, 1989.
[12] Clearwater, S. and Provost, F., RL4: A Tool for Knowledge-Based Induction, in Proceedings of the Second International IEEE
Conference on Tools for Artificial Intelligence, pp. 24-30,
1990.
[13] Efron, B. and Tibshirani, R. J., An Introduction to the
Bootstrap. New York, NY: Chapman & Hall, 1993.
[14] Frank, E. and Witten, I. H., Using a Permutation Test for
Attribute Selection in Decision Trees, in Proceedings of 15th
Int'l Conference on Machine Learning, pp. 152-160, 1998.
[15] Fukuda, T., Morimoto, Y., Morishita, S., and Tokuyama, T.,
Data Mining Using Two-Dimensional Optimized
Association Rules: Scheme, Algorithms and Visualization, in
Proceedings of the 1996 ACM SIGMOD International
Conference on Management of Data (SIGMOD'96), pp. 13-23
, Montreal, Quebec, Canada, 1996.
[16] Good, P., Permutation Tests: A Practical Guide to
Resampling Methods for Testing Hypotheses - 2nd Edition.
New York: Springer, 2000.
[17] Hsu, J. C., Multiple Comparisons - Theory and Methods.
London, UK: Chapman & Hall, 1996.
[18] Jensen, D., Knowledge Discovery through Induction with
Randomization Testing, in Proceedings of the 1991
Knowledge Discovery in Databases Workshop, pp. 148-159,
Menlo Park, 1991.
[19] Jensen, D. and Cohen, P. R., Multiple Comparisons in
Induction Algorithms, Machine Learning, vol. 38, pp. 309-338
, 2000.
[20] Kohavi, R., A Study of Cross-Validation and Bootstrap for
Accuracy Estimation and Model Selection, in Proceedings of
the Fourteenth International Joint Conference on Artificial
Intelligence, pp. 1137-1143, San Mateo, CA, 1995.
[21] Lee, Y., Buchanan, B. G., and Aronis, J. M., Knowledge-Based
Learning in Exploratory Science: Learning Rules to
Predict Rodent Carcinogenicity, Machine Learning, vol. 30,
pp. 217-240, 1998.
[22] Ling, C. X. and Li, C., Data Mining for Direct Marketing:
Problems and Solutions, in Proceedings of the Fourth
International Conference on Knowledge Discovery and Data
Mining, pp. 73-79, New York, NY, 1998.
[23] Liu, B., Hsu, W., and Ma, Y., Identifying Non-Actionable
Association Rules, in Proceedings of the Seventh ACM
SIGKDD International Conference on Knowledge Discovery
and Data Mining, pp. 329-334, San Francisco, CA, 2001.
[24] Mani, D. R., Drew, J., Betz, A., and Datta, P., Statistics and
Data Mining Techniques for Lifetime Value Modeling, in
Proceedings of the Fifth ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining, pp.
94-103, San Diego, CA, 1999.
[25] Megiddo, N. and Srikant, R., Discovering Predictive
Association Rules, in Proceedings of the Fourth
International Conference on Knowledge Discovery and Data
Mining, pp. 274-278, New York, NY, 1998.
[26] Oates, T. and Jensen, D., Large Datasets Lead to Overly
Complex Models: An Explanation and a Solution, in
Proceedings of the Fourth International Conference on
Knowledge Discovery and Data Mining, pp. 294-298, Menlo
Park, CA, 1998.
[27] Padmanabhan, B. and Tuzhilin, A., A Belief-Driven Method
for Discovering Unexpected Patterns, in Proceedings of the
Fourth International Conference on Knowledge Discovery
and Data Mining, pp. 94-100, New York, NY, 1998.
[28] Padmanabhan, B. and Tuzhilin, A., Small Is Beautiful:
Discovering the Minimal Set of Unexpected Patterns, in
Proceedings of the Sixth ACM SIGKDD International
Conference on Knowledge Discovery & Data Mining, pp. 54-63
, Boston, MA, 2000.
[29] Piatesky-Shapiro, G., Discovery, Analysis, and Presentation
of Strong Rules, in Knowledge Discovery in Databases,
Piatesky-Shapiro, G. and Frawley, W. J., Eds. Menlo Park,
CA: AAAI/MIT Press, pp. 229-248, 1991.
[30] Sarawagi, S., Agrawal, R., and Megiddo, N., Discovery-Driven
Exploration of OLAP Data Cubes, in Proceedings of
the Sixth International Conference on Extending Database
Technology (EDBT'98), pp. 168-182, Valencia, Spain, 1998.
[31] Srikant, R. and Agrawal, R., Mining Quantitative
Association Rules in Large Relational Tables, in
Proceedings of the 1996 ACM SIGMOD International
Conference on Management of Data, 1996.
[32] Webb, G., Butler, S., and Newlands, D., On Detecting
Differences between Groups, in Proceedings of the Ninth
ACM SIGKDD Int'l Conference on Knowledge Discovery
and Data Mining, pp. 256-265, Washington, DC, 2003.
[33] Webb, G. I., Discovering Associations with Numeric
Variables, in Proceedings of The Seventh ACM SIGKDD
International Conference on Knowledge Discovery and Data
Mining, San Francisco, CA, 2001.
[34] Westfall, P. H. and Young, S. S., Resampling-Based Multiple
Testing - Examples and Methods for P-Value Adjustment.
New York, NY: John Wiley & Sons, Inc, 1993.
[35] Wong, W.-K., Moore, A., Cooper, G., and Wagner, M.,
Rule-Based Anomaly Pattern Detection for Detecting
Disease Outbreaks, in Proceedings of the Eighteenth
National Conference on Artificial Intelligence (AAAI-2002),
Edmonton, Canada, 2002.
[36] Wong, W.-K., Moore, A., Cooper, G., and Wagner, M.,
Bayesian Network Anomaly Pattern Detection for Disease
Outbreaks, in Proceedings of the Twentieth International
Conference on Machine Learning (ICML-2003),
Washington, DC, 2003.
APPENDIX: Discovering false discovery rates
for multiple significance levels
Let us continue to use the example N_perm = 999 and α = 5%. On the dataset D, from the algorithm SigSQrules we generate significant rules as well as each rule's p-value. Because there are N_perm values in each distribution, the smallest possible p-value from the permutation tests is 1/(N_perm + 1) = 0.001, and all possible p-values are S = { 1/(N_perm + 1) = 0.001, 2/(N_perm + 1) = 0.002, ..., α = 0.05 }. Let N_sig[α_ind] be the number of significant rules whose p-value is no larger than α_ind ∈ S. For example, if there are 50 rules whose p-value = 0.001, and 30 rules whose p-value = 0.002, then N_sig[0.001] = 50 and N_sig[0.002] = 50 + 30 = 80. Without further permutation tests, with N_sig[] we know how many rules will be discovered if we lower the significance level from α to α_ind. For example, if α_ind = 0.002, there are only N_sig[0.002] = 80 rules whose p-value is no larger than α_ind = 0.002, therefore we expect to discover 80 rules.
Similarly, for each permutation dataset D', at each significance level α_ind < α we can compute the number of significant rules and their p-values by applying SigSQrules only once. Note that all discoveries from D' are false discoveries, because the relationships between A and B are removed. Let N_sig-perm[i][α_ind] be the number of discoveries from permutation dataset D'[i]. For example, N_sig-perm[1][0.002] = 20 means we have 20 discoveries from the permutation dataset D'[1] at α_ind = 0.002. We implement this procedure on multiple permutation datasets, and Median(N_sig-perm[][α_ind]) is the estimate of false discoveries at each significance level α_ind on permutation datasets. Therefore, FDR(α_ind) = Median(N_sig-perm[][α_ind]) / N_sig[α_ind]. We use the median to estimate the expectation, which conforms to nonparametric statistical considerations (the median is the best estimator for the expectation when the underlying distribution is unknown).
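A small Python sketch of this estimate is given below; the inputs are assumed to be the p-values produced by SigSQrules on D and on each permutation dataset D'[i], and the grid of candidate significance levels is chosen by the user.

import numpy as np

def fdr_curve(pvalues_D, pvalues_perm, alpha_grid):
    # pvalues_D: p-values of the significant rules discovered on D
    # pvalues_perm: list of p-value arrays, one per permutation dataset D'[i]
    pvalues_D = np.asarray(pvalues_D)
    fdr = {}
    for a in alpha_grid:
        n_sig = int((pvalues_D <= a).sum())                          # N_sig[a]
        false_counts = [int((np.asarray(p) <= a).sum()) for p in pvalues_perm]
        n_false = float(np.median(false_counts))                     # Median(N_sig-perm[][a])
        fdr[a] = n_false / n_sig if n_sig > 0 else float("nan")
    return fdr

# e.g. fdr_curve(pvals, perm_pvals, alpha_grid=[0.001 * k for k in range(1, 51)])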
Empirically, we showed in Figure 6.3 that FDR(α_ind) is an increasing function of α_ind. It means that by decreasing α_ind, we can control FDR(α_ind) to a smaller value. We are not always guaranteed, though, to be able to set an individual significance level such that FDR < 5%. It is possible that even when we decrease α_ind to a level at which almost no rules are discovered, FDR is still much larger than 5%. In other words, there are always a large proportion of spurious rules discovered from some datasets. For example, if the attributes are independent based on a test statistic, then Median(N_sig-perm[][α_ind]) ≈ N_sig[α_ind] for all significance levels, and FDR ≈ 1. We want to point out that this is a desirable property of our method of controlling FDR, because there are many real-world datasets whose attributes are truly independent of each other. Traditional methods cannot estimate how many rules should be discovered, but with our technique, we can draw the conclusion that there is no rule to be discovered because none of the rules is better than chance. This nonparametric method to estimate and control FDR is applicable to quantitative datasets and broad types of rules.
| nonparametric methods;resampling;Rule discovery;market share rules;statistical quantitative rules
145 | Optimal transmission range for cluster-based wireless sensor networks with mixed communication modes | Prolonging the network lifetime is one of the most important designing objectives in wireless sensor networks (WSN). We consider a heterogeneous cluster-based WSN, which consists of two types of nodes: powerful cluster-heads and basic sensor nodes. All the nodes are randomly deployed in a specific area. To better balance the energy dissipation, we use a simple mixed communication modes where the sensor nodes can communicate with cluster-heads in either single-hop or multi-hop mode. Given the initial energy of the basic sensor nodes, we derive the optimal communication range and identify the optimal mixed communication mode to maximize the WSN's lifetime through optimizations. Moreover, we also extend our model from 2-D space to 3-D space. | INTRODUCTION
A wireless sensor network consists of a large number of sensor nodes, which have wireless communication
capability and some level of ability for signal processing.
Distributed wireless sensor networks enable a variety of applications
for sensing and controlling the physical world [1], [2].
One of the most important applications is the monitor of a
specific geographical area (e.g., to detect and monitor the
environmental changes in forests) by spreading a great number
of wireless sensor nodes across the area [3][6].
Because of the sensor nodes' self constraints (generally tiny
size, low-energy supply, weak computation ability, etc.), it
is challenging to develop a scalable, robust, and long-lived
wireless sensor network. Much research effort has focused on
this area which result in many new technologies and methods
to address these problems in recent years. The combination
of clustering and data-fusion is one of the most effective
approaches to construct the large-scale and energy-efficient
data gathering sensor networks [7][9].
In particular, the authors of [9] develop a distributed algorithm
called Low-Energy Adaptive Clustering Hierarchy
(LEACH) for homogeneous sensor networks where each sensor
elects itself as a cluster-head with some probability and
The research reported in this paper was supported in part by the U.S.
National Science Foundation CAREER Award under Grant ECS-0348694.
the cluster reconfiguration scheme is used to balance the
energy load. The LEACH allows only single-hop clusters
to be constructed. On the other hand, in [10] we proposed
the similar clustering algorithms where sensors communicate
with their cluster-heads in multi-hop mode. However, in these
homogeneous sensor networks, the requirement that every
node is capable of aggregating data leads to the extra hardware
cost for all the nodes. Instead of using homogeneous sensor
nodes and the cluster reconfiguration scheme, the authors
of [11] focus on the heterogeneous sensor networks in which
there are two types of nodes: supernodes and basic sensor
nodes. The supernodes act as the cluster-heads. The basic
sensor nodes communicate with their closest cluster-heads via
multi-hop mode. The authors of [11] formulate an optimization
problem to minimize the network cost which depends on the
nodes' densities and initial energies.
In addition, the authors of [12] obtain the upper bounds
on the lifetime of a sensor network through all possible
routes/ communication modes. However, it is complicated to
implement a distributed scheme to achieve the upper bound
of the WSN lifetime because it is required to know the
distance between every two sensor nodes in their scheme. The
authors of [13] develop a simpler, but sub-optimal, scheme
where the nodes employ the mixed communication modes:
single-hop mode and multi-hop mode periodically. This mixed
communication modes can better balance the energy load
efficiently over WSNs. However, the authors of [13] do not
obtain the optimal communication range for the multi-hop
mode which is a critically important parameter for the mixed
communication modes scheme. Also, their analytical model
can only deal with the case of grid deployment, where the
nodes are placed along the grids, without considering the
random deployment.
In order to further increase the network lifetime of heterogeneous
WSNs by remedying the deficiencies in the aforementioned
previous work, we develop analytical models to
determine the optimal transmission range of the sensor nodes
and identify the optimal communication modes in this paper.
In our models, the basic sensor nodes are allowed to communicate
with their cluster-heads with mixed communication
Fig. 1. An example of the wireless sensor data-gathering networks. In each round, the aircraft hovers above the cluster-heads in the monitored area to collect the aggregated data. Within each cluster, the basic sensor nodes can communicate with its cluster-head in either single-hop or multi-hop.
modes instead of only multi-hop mode or only single-hop
mode. Applying the derived optimal transmission range and
communication modes, we also study how the other WSN
parameters (e.g., the density and initial energy of the cluster-heads
, etc.) affect the network lifetime through simulation
experiments. Moreover, simulation results verify our analytical
models. We also extend our model from 2-D space to 3-D
space.
The rest of the paper is organized as follows. Section II
develops our proposed models and formulates the design
procedure as an optimization problem. Section III solves the
formulated optimization problem. Section IV presents the
numerical and experimental results. Section V addresses the
extended 3-D model. The paper concludes with Section VI.
SYSTEM MODEL
We study the following WSN scenario in this paper. A
heterogeneous sensor network consisting of two types of
sensor nodes, i) the more powerful but more expensive cluster-head nodes with density λ1, and ii) the inexpensive basic sensor nodes with density λ2, is deployed in a specific
area. The density of the basic sensor nodes is determined by
the application requirements. A basic sensor node joins the
cluster whose cluster-head is the closest in hops or distance to
this basic sensor node. In each round, the cluster-heads send
the aggregated data to the mobile base station (e.g., an aircraft
or a satellite) after the cluster-heads receive and process the
raw data from the basic sensor node. Fig. 1 shows an example
of this type of sensor networks.
The definition of network lifetime used in this paper is
the period in rounds from the time when the network starts
working to the time when the first node dies [15]. Notice that
energy dissipation is not uniform over the cluster-based WSNs,
implying that some nodes in specific zones (e.g., the nodes
which are close to the cluster-heads need to consume more
energy for relaying traffic of other nodes in multi-hop mode)
drain out their energy faster than others. Thus, the
lifetime of these critical nodes decides how long the WSN can
survive. Hence, maximizing the network lifetime is equivalent
to minimizing the energy consumption of the critical sensor
nodes if the initial energy of sensor nodes is given. In this
paper, our optimization objective is not to minimize the total
energy consumption by all the sensor nodes, but to minimize
the energy consumption of the critical nodes to prolong the
network lifetime.
A. Node Architectures and Energy Models
A wireless sensor node typically consists of the following
three parts: 1) the sensor component, 2) the transceiver component
, and 3) the signal processing component. In this paper,
we have the following assumptions for each component.
1) For the sensor component:
Assume that the sensor nodes sense a constant amount of information every round. The energy consumed in sensing is simply E_sense(l) = γ l, where γ is the power consumption for sensing a bit of data and l is the length in bits of the information which a sensor node should sense in every data-gathering round. In general the value of l is a constant.
2) For the transceiver component:
We use a simple model for the radio hardware energy
consumption [16]. The energy E_tx(r, l) and the energy E_rx(l) required for a node to transmit and receive a packet of l bits over a distance r, respectively, can be expressed as follows:

E_tx(r, l) = (ε r^n + β) l
E_rx(l) = β l        (1)

where ε r^n accounts for the radiated power necessary to transmit over a distance of r between source and destination, and β is the energy dissipated in the transmitter circuit (PLLs, VCOs, etc.), which depends on the digital coding, modulation, etc. The value of the path loss exponent n depends on the surrounding environment [16]. In general, ε = 10 pJ/bit/m^2 when n = 2, ε = 0.0013 pJ/bit/m^4 when n = 4, and β = 50 nJ/bit [9].
3) For the signal processing component:
This component conducts data-fusion. Because the signal processing component usually consists of complicated and expensive gear such as Digital Signal Processors (DSP), Field Programmable Gate Arrays (FPGA), etc., the basic sensor nodes do not contain the signal processing component. Only the cluster-heads have these components and the ability for data-fusion. The energy spent in aggregating k streams of l bits of raw information into a single stream is determined by

E_aggr(k, l) = δ k l        (2)

where the typical value of δ is 5 nJ/bit/stream [17].
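For concreteness, the energy model can be written directly in code; γ, ε, β and δ are the placeholder symbols used above (the original Greek symbols were lost), and the numeric value chosen for γ is illustrative only.

# per-operation energy costs (Joules); symbol names are placeholders for this sketch
GAMMA = 50e-9              # J/bit, sensing cost per bit (illustrative value, not from the paper)
EPSILON = {2: 10e-12,      # J/bit/m^2, amplifier energy when n = 2
           4: 0.0013e-12}  # J/bit/m^4, amplifier energy when n = 4
BETA = 50e-9               # J/bit, transceiver circuit energy
DELTA = 5e-9               # J/bit/stream, aggregation energy

def e_tx(r, l, n=2):
    return (EPSILON[n] * r ** n + BETA) * l   # Eq. (1), transmit l bits over distance r

def e_rx(l):
    return BETA * l                            # Eq. (1), receive l bits

def e_sense(l):
    return GAMMA * l                           # sensing l bits

def e_aggr(k, l):
    return DELTA * k * l                       # Eq. (2), aggregate k streams of l bits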
B. Mixed Communication Modes
Notice that in the multi-hop mode, the closer the distance
between the sensor node and its cluster-head, the more energy
the sensor node consumes since the inner nodes are required
to relay more traffic than the outer nodes do. On
the other hand, for the case of pure single-hop mode, the
basic sensor nodes which are closer to their cluster-heads
dissipate less energy than those farther from the cluster-heads
because the energy consumption increases as the
n-th power
of distance, where
n is the path loss exponent.
In order to balance the energy load of the basic sensor
nodes, our model employs the mixed communication modes
which consist of the mixed single-hop and multi-hop mode.
The basic sensor nodes use single-hop communication mode in
some rounds but multi-hop communication mode in the other
rounds. This kind of mixed communication modes is easy
to implement. For example, the cluster-head can broadcast a
notifying message periodically to all member nodes to inform
of which communication mode should be used for next data-gathering
round. We use a parameter α to measure how often the single-hop mode is used. Suppose T is the total number of rounds that the network can perform, T_s is the number of rounds in which the single-hop communication mode is used and T_m is the number of rounds in which the multi-hop communication mode is employed. Let α = T_s/T = 1 - T_m/T be the frequency with which the single-hop communication mode is used. Note that α = 0 means that the pure multi-hop communication mode is employed, while α = 1 represents that only the single-hop communication mode is used.
C. Deployment Models
The sensor nodes and the cluster-heads are randomly distributed
on a 2-D circular area, whose radius is
A unit. We
can model such random deployment (e.g., deployed by the
aircraft in a large-scale mode) as a spatial Poisson point
process. Specifically, the cluster-heads and basic sensor nodes
in the wireless sensor network are distributed according to
two independent spatial Poisson processes PP1 and PP2 with
densities equal to λ1 and λ2, respectively.
Each basic sensor node joins the cluster whose cluster-head is closest to it, forming Voronoi cells [14]. The 2-D plane is thus partitioned into a number of Voronoi cells, each corresponding to a PP1 point. The authors of [14] have studied the moments and the tail of the distribution of the number of PP2 nodes (i.e., basic sensor nodes) connected to a particular PP1 node (i.e., a cluster-head). Because PP1 and PP2 are homogeneous Poisson point processes, we can shift the origin to one of the PP1 nodes.
Let V be the set of nodes which belong to the Voronoi cell corresponding to a PP1 node located at the origin, and let S_(r,θ) be a PP2 node whose polar coordinate is (r, θ). By using the results of [14], we can get the probability that S_(r,θ) belongs to V as follows:

Pr{ S_(r,θ) ∈ V } = e^(-λ1 π r^2)    (3)
Let N_V be the number of PP2 nodes belonging to V. Then the average of N_V can be given by

E[N_V] = ∫_0^{2π} ∫_0^{∞} Pr{ S_(r,θ) ∈ V } λ2 r dr dθ = ∫_0^{2π} ∫_0^{∞} e^(-λ1 π r^2) λ2 r dr dθ = λ2 / λ1    (4)

where λ2 r dr dθ denotes the number of PP2 nodes in a small area of r dr dθ.
If we confine the cluster size to X hops (i.e., a maximum of X hops is allowed from a basic sensor node to its cluster-head), the average number of sensors which do not belong to any cluster-head, denoted by E[N_O], can be expressed as follows:

E[N_O] = λ1 π A^2 ∫_0^{2π} ∫_{XR}^{∞} Pr{ S_(r,θ) ∈ V } λ2 r dr dθ = λ2 π A^2 e^(-λ1 π (XR)^2)    (5)
Clearly, we want a small E[N_O] with an appropriate cluster size X:

E[N_O] ≤ θ λ2 π A^2    (6)

where θ (θ > 0) is the percentage of sensor nodes that will not join any cluster-head. Solving Eq. (5) and Eq. (6) together, we obtain the following inequality:

X ≥ √( -log θ / (λ1 π R^2) )    (7)

The average number of sensor nodes which do not join any cluster-head is less than or equal to θ λ2 π A^2 if X is given by

X = ⌈ √( -log θ / (λ1 π R^2) ) ⌉    (8)
Thus, we know that every cluster can be divided into X ring zones with the ring width equal to R when the multi-hop communication mode is used.
The average number of sensor nodes in the i-th zone of the cluster can be written as follows:

E[N_i] = ∫_0^{2π} ∫_{(i-1)R}^{iR} Pr{ S_(r,θ) ∈ V } λ2 r dr dθ = (λ2 / λ1) ( e^(-λ1 π [(i-1)R]^2) - e^(-λ1 π (iR)^2) )    (9)

where i is a positive integer ranging from 1 to X.
D. The Wireless Sensor Network Lifetime
In our model, the nodes in the same zone will die at almost the same time. Therefore, the network lifetime is equivalent to the period from the time when the WSN begins working to the time when all nodes of a zone die.
i) The basic sensor nodes
In the multi-hop communication mode, the sensor nodes are responsible for relaying the traffic from their peers in the outer zones. We define Y(i) as the average number of packets that a sensor node placed in the i-th zone needs to relay. Because each basic sensor node sends out one packet of sensed information per round, Y(i) is determined by the average number of nodes for which a node in the i-th zone needs to forward messages and can be written as follows:
Y(i) = ( Σ_{j=i+1}^{X} E[N_j] ) / E[N_i]    (10)
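To illustrate Eqs. (8)-(10), the sketch below (continuing the Python example above) computes the cluster size X, the expected zone populations E[N_i], and the relay load Y(i); the numeric values used in the example call are assumptions, not parameters fixed by the paper:

import math

def cluster_size(R, lam1, theta):
    """Eq. (8): number of R-wide ring zones so that at most a fraction theta of
    the basic sensors fall outside every cluster."""
    return math.ceil(math.sqrt(-math.log(theta) / (lam1 * math.pi * R ** 2)))

def zone_population(i, R, lam1, lam2):
    """Eq. (9): expected number of basic sensors in the i-th ring zone."""
    inner = math.exp(-lam1 * math.pi * ((i - 1) * R) ** 2)
    outer = math.exp(-lam1 * math.pi * (i * R) ** 2)
    return (lam2 / lam1) * (inner - outer)

def relay_load(i, X, R, lam1, lam2):
    """Eq. (10): average number of packets relayed per round by a node in zone i."""
    outer_nodes = sum(zone_population(j, R, lam1, lam2) for j in range(i + 1, X + 1))
    return outer_nodes / zone_population(i, R, lam1, lam2)

# Example with assumed densities: lam1 = 0.1, lam2 = 500 (i.e. lam2/lam1 = 5000).
lam1, lam2, theta, R = 0.1, 500.0, 1e-12, 1.0
X = cluster_size(R, lam1, theta)
print(X, relay_load(1, X, R, lam1, lam2))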
The energy consumption of a basic sensor node in the i-th zone can be written as the summation of the following three terms:

E_sensor(i, a) = (1 - a) [ E_tx(R, l) + Y(i) ( E_tx(R, l) + E_rx(l) ) ] + a E_tx(iR, l) + E_sense(l)    (11)

where a (0 ≤ a ≤ 1) is the parameter measuring the frequency with which the single-hop communication mode is employed. The first term of Eq. (11) represents the energy spent in relaying the traffic for the sensor nodes in the outer zones and in transmitting the node's own traffic when the multi-hop communication mode is employed. The second term of Eq. (11) is the energy consumption for transmitting a packet by single-hop. The third term of Eq. (11) is the energy dissipation for sensing.
In our model, because the basic sensor nodes in the same zone consume almost the same energy in each round (i.e., they share the same relaying traffic load when the multi-hop communication mode is used), their lifetimes are equal if their initial energies are the same. Therefore, if the initial energy is the same for each basic sensor node, the sensor nodes which belong to the (arg max_{1≤i≤X} {E_sensor(i, a)})-th zone will consume the most energy and die first, which determines the network lifetime. Given the initial energy E_init2 carried by each basic sensor node, the network lifetime in rounds (T) can be written as follows:
T = E_init2 / max_{1≤i≤X} E_sensor(i, a) = E_init2 / E_max(a)    (12)

where E_max(a) = max_{1≤i≤X} E_sensor(i, a). One of our objectives is to find the optimal a*, which is determined by

a* = arg min_{0≤a≤1} { E_max(a) }    (13)
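Under the same reconstruction, Eqs. (11)-(13) can be prototyped as follows. The sketch reuses e_tx, e_rx and relay_load from the previous code fragments, treats the per-round sensing energy as a constant E_SENSE (an assumption of this example), and finds the optimal single-hop frequency by a plain sweep instead of the closed form derived later:

E_SENSE = 0.0  # per-round sensing energy in joules; assumed negligible here

def e_sensor(i, a, R, X, l, n, lam1, lam2):
    """Eq. (11): per-round energy of a basic sensor node in zone i for
    single-hop frequency a."""
    Y = relay_load(i, X, R, lam1, lam2)
    multi_hop = e_tx(R, l, n) + Y * (e_tx(R, l, n) + e_rx(l))
    single_hop = e_tx(i * R, l, n)
    return (1 - a) * multi_hop + a * single_hop + E_SENSE

def e_max(a, R, X, l, n, lam1, lam2):
    """Eq. (12)/(24): worst-case per-round energy over all zones."""
    return max(e_sensor(i, a, R, X, l, n, lam1, lam2) for i in range(1, X + 1))

def lifetime_rounds(e_init2, a, R, X, l, n, lam1, lam2):
    """Eq. (12): rounds until the most loaded zone exhausts its initial energy."""
    return e_init2 / e_max(a, R, X, l, n, lam1, lam2)

def best_a(R, X, l, n, lam1, lam2, steps=1000):
    """Eq. (13): brute-force search for the optimal single-hop frequency a*."""
    return min((k / steps for k in range(steps + 1)),
               key=lambda a: e_max(a, R, X, l, n, lam1, lam2))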
ii) The cluster-heads
Because the main functions of the cluster-heads include (1) sensing, (2) collecting data from the basic sensor nodes, (3) aggregating the raw data, and (4) transferring the processed data to the base station, the energy consumption of a cluster-head is the sum of the energy dissipated by these four parts. Therefore, the energy consumption of a cluster-head, denoted by E_CH, in each round can be expressed as the summation of the following four terms:

E_CH = E[N] E_rx(l) + E_tx(H, l') + E_sense(l) + E_aggr(E[N] + 1, l)    (14)
where E[N] is the average number of basic sensor nodes in a cluster, and H is the distance between the cluster-head and the mobile base station. The first term of Eq. (14) represents the energy consumed for receiving the packets from the basic sensor nodes. The second term of Eq. (14) is the energy spent in transmitting the aggregated information to the mobile base station. The third term of Eq. (14) denotes the energy dissipation for sensing, and the fourth term of Eq. (14) is the energy consumption for aggregating (E[N] + 1) packets, each with l bits, into one packet of l' bits.
Because E[N] = λ2/λ1, E_CH depends on the ratio between λ2 and λ1. The energy consumption of a cluster-head is inversely proportional to the density of cluster-heads: the larger the density of cluster-heads, the smaller the value of E_CH. Thus, in order to ensure a network lifetime of T_0 rounds, the initial energy of the cluster-heads must satisfy

E_init1 ≥ T_0 E_CH    (15)
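A corresponding sketch for Eqs. (14)-(15), again building on the hypothetical helpers defined above (and treating the aggregated packet length l' as a separate argument), estimates the per-round energy of a cluster-head and the initial energy it needs for a target lifetime of T0 rounds:

def e_cluster_head(lam1, lam2, H, l, l_prime, n=2):
    """Eq. (14): receive from E[N] = lam2/lam1 members, forward the aggregate
    over distance H, sense, and aggregate E[N] + 1 packets."""
    avg_members = lam2 / lam1
    return (avg_members * e_rx(l)
            + e_tx(H, l_prime, n)
            + E_SENSE
            + e_aggr(avg_members + 1, l))

def required_initial_energy(T0, lam1, lam2, H, l, l_prime, n=2):
    """Eq. (15): initial cluster-head energy that sustains T0 rounds."""
    return T0 * e_cluster_head(lam1, lam2, H, l, l_prime, n)

# Example with the assumed settings H = 100 m and l = l' = 120 bits.
print(required_initial_energy(1000, 0.1, 500.0, 100.0, 120, 120))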
E. Connectivity
Because the mixed communication modes contain the multi-hop communication mode, the communication range (R) is required to be large enough to ensure the connectivity of the network. When the nodes are assumed to be distributed with Poisson density λ in a disc of unit area, the authors of [18] derived a lower bound on the communication range (R) that ensures network connectivity with probability Pr{conn}, which is determined by

Pr{conn} ≥ 1 - λ e^(-λ π R^2)    (16)

Hence, the following inequality needs to hold to ensure connectivity:

λ e^(-λ π R^2) ≤ ρ    (17)

where ρ > 0. By solving Eq. (17), we obtain the minimum transmission range, denoted by R_min, that ensures the probability of connectivity is greater than (1 - ρ):

R_min = √( (1/(λ π)) log(λ/ρ) ) = √( (1/((λ1 + λ2) π)) log((λ1 + λ2)/ρ) )    (18)
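Under the reconstruction of Eqs. (16)-(18) above, with ρ the allowed slack in the connectivity bound, the minimum transmission range can be computed as in the short sketch below; the example values are assumptions:

import math

def r_min_2d(lam1, lam2, rho):
    """Eq. (18): minimum range so that (lam1 + lam2) * exp(-lam*pi*R^2) <= rho."""
    lam = lam1 + lam2
    return math.sqrt(math.log(lam / rho) / (lam * math.pi))

print(r_min_2d(0.1, 500.0, 1e-3))  # assumed densities and slack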
F. The Optimization Problem Formulation
Our objective is to find the optimal transmission range (R) and the parameter of the mixed communication modes (a) that maximize the network lifetime.

Objective: max{T}    (19)

Subject to:
R ≥ R_min    (20)
0 ≤ a ≤ 1    (21)
0 ≤ E_init2 ≤ E_0    (22)

where the first constraint, given by Eq. (20), ensures the connectivity of the network. The expression of R_min depends on the type of deployment. The second constraint, given by Eq. (21), indicates that we can use mixed communication modes. The third constraint, given by Eq. (22), is due to the very limited energy carried by the basic sensor nodes.
By observing Eq. (12), we can simplify the objective function to "min{E_max}" and remove the constraint given by Eq. (22) by letting E_init2 = E_0. Then, the simplified optimization problem can be written as follows:

Objective: min{E_max}    (23)

Subject to:
R ≥ R_min
0 ≤ a ≤ 1
SOLUTIONS FOR THE OPTIMIZATION PROBLEM
First, we show that, given a specified transmission range (R), the function E_max(a) is convex in a and that the optimal a* = arg min_{0≤a≤1} {E_max(a)} is a function of R.
Because the energy consumption E_sensor(i, a) of the sensor nodes in the i-th zone is a convex function of i (the proof is omitted for lack of space), the value of E_max(a) will be achieved by the sensor nodes in either the 1st zone or the X-th zone, i.e.,

E_max(a) = max { E_sensor(1, a), E_sensor(X, a) }    (24)
Note that E_sensor(1, a) is a monotonically decreasing linear function of a while E_sensor(X, a) is a monotonically increasing linear function of a, and E_sensor(1, 1) = E_sensor(X, 0). Hence, the two lines corresponding to these two linear functions intersect at a point where a lies between 0 and 1. Clearly, the intersecting point, denoted by a*, yields the minimum value of E_max(a). Therefore, a = a* is the optimal value when the following equation is satisfied:

E_sensor(1, a*) = E_sensor(X, a*)    (25)
By solving Eq. (25), we obtain the solution for Eq. (13):

a* = arg min_{0≤a≤1} { E_max(a) }
   = Y(1) [ E_tx(R) + E_rx ] / ( Y(1) [ E_tx(R) + E_rx ] + E_tx(XR) - E_tx(R) )
   = Y(1) (R^n + 2β) / ( Y(1) (R^n + 2β) + R^n (X^n - 1) )    (26)

where β = α/ε, and we use E_tx(R), E_tx(XR), and E_rx instead of E_tx(R, l), E_tx(XR, l), and E_rx(l) since the value of l is a constant for a specific application.
The factor β measures to what extent R impacts the transmission energy consumption; for example, the transmission energy consumption is more sensitive to R when β is smaller. The factor β also determines the cost of relaying traffic: the cost of relaying traffic increases with β because receiving packets consumes more energy when β is greater.
Fig. 2. The optimal a* against β, for R = 0.1 and R = 1 with n = 2 and n = 4.
If R^n ≫ β, Eq. (26) reduces to the approximate expression

a* ≈ Y(1) / ( Y(1) + (X^n - 1) )    (27)

By substituting Eq. (9) and Eq. (10) into Eq. (26), we have

a* = [ (1 - e^(-λ1 π (X^2 - 1) R^2)) / (e^(λ1 π R^2) - 1) ] (R^n + 2β) / { [ (1 - e^(-λ1 π (X^2 - 1) R^2)) / (e^(λ1 π R^2) - 1) ] (R^n + 2β) + R^n (X^n - 1) }    (28)

Notice that a* is a function of R if we substitute Eq. (8) into Eq. (28).
Let λ1 = 0.001, λ2 = 3, and θ = 10^(-12). By using Eq. (28), we plot the optimal a* against β as shown in Fig. 2. We observe from Fig. 2 that the optimal a* is almost 0 (i.e., pure multi-hop communication mode) if n = 4 and β is small. The reason is that the energy consumption for transmission is proportional to R^4 and the R^4 term in the first part of Eq. (1) dominates the transmission energy consumption when β is small; thus, the energy consumption in the single-hop mode is much higher than that in the multi-hop mode. In contrast, if β is large, the multi-hop mode loses its advantage over the single-hop mode because the transmission energy consumption is dominated by the constant term α in the first part of Eq. (1) and is not sensitive to the transmission range.
So far, given the communication range (R), we have obtained the minimum E_max(a*) for the basic sensor nodes, which yields the solution of the objective function of Eq. (24) as follows:

E_max(a*) = [ ε R^n (1 - a* + a* X^n) + α ] l + E_sense(l)    (29)

Next, we want to identify the optimal R* that minimizes E_max(a*) when the constraint given by Eq. (20) applies. Because it is difficult to find a closed form for the optimal R*, we use numerical solutions to determine the optimal R*, which is detailed for some scenarios in Section IV.
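Since no closed form for R* is given, a simple constrained grid search of the kind sketched below reproduces the numerical procedure; it reuses the hypothetical helpers from the earlier sketches and is only an illustration of the approach, not the authors' actual code:

def optimal_range(lam1, lam2, theta, rho, l, n, r_grid):
    """Grid search for R* minimizing E_max(a*(R)) subject to R >= R_min (Eq. (20))."""
    r_floor = r_min_2d(lam1, lam2, rho)
    best = None
    for R in r_grid:
        if R < r_floor:
            continue                                   # violates connectivity
        X = cluster_size(R, lam1, theta)               # Eq. (8)
        a_star = best_a(R, X, l, n, lam1, lam2)        # optimal mixing for this R
        cost = e_max(a_star, R, X, l, n, lam1, lam2)   # Eq. (29), evaluated numerically
        if best is None or cost < best[2]:
            best = (R, a_star, cost)
    return best  # (R*, a*, E_max(a*))

# Assumed sweep from 0.5 m to 4.5 m in 0.1 m steps.
grid = [0.5 + 0.1 * k for k in range(41)]
print(optimal_range(0.1, 500.0, 1e-12, 1e-3, 120, 2, grid))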
THE NUMERICAL AND SIMULATION EVALUATIONS

Fig. 3. The optimal parameters versus the density of cluster-heads (λ1) under different β (β = 0, 50, 1000) and λ2/λ1 = 100, 5000. (a) The optimal communication range R*. (b) The optimal a*.

Fig. 4. The normalized network lifetime versus the transmission range (R) with β = 50, λ1 = 0.1, λ2/λ1 = 5000, and a = 0, 0.2, 0.5, 0.8, 1.

For the following discussions, we set ε = 10 pJ/bit/m^2, α = 50 nJ/bit, γ = 5 nJ/bit/stream, the packet length l = 120 bits, and the path loss exponent n = 2. We consider
various scenarios with three different values of β, two average numbers of basic sensor nodes in a cluster (λ2/λ1), and various densities of cluster-heads (λ1). According to the discussion in Section III, we obtain the numerical solutions for the optimal R*, and then the value of the optimal a* can be calculated by using Eq. (28).
The numerical results for the optimal R* and a* with different β and (λ2/λ1) are shown in Fig. 3(a) and Fig. 3(b), respectively. The optimal R* decreases as λ1 increases. With the same λ1, the larger β is, the larger the value of R*. When β is small (e.g., β = 0 and β = 50), the value of a* is small (e.g., a* < 0.1). This implies that the multi-hop communication mode dominates the single-hop mode. There are two reasons for these observations. First, the cost of relaying traffic is small since the receiving energy is small. Second, the transmission energy is sensitive to the transmission range.
We also conducted simulation experiments to verify our analytical results. In our simulations, the Minimum Transmission Energy (MTE) routing algorithm [19], which minimizes the total energy consumption for sending a packet, is used as the relaying scheme for the multi-hop communication mode. Let β = 50, λ1 = 0.1, and λ2/λ1 = 5000. We set the initial energy of the cluster-heads (E_init1) high enough to guarantee that the cluster-heads have a longer lifetime than the basic sensor nodes. Fig. 4 shows the simulation results for the network lifetime. The plots in Fig. 4 are the average results of 1000 experiments. They show that in most cases (i.e., a = 0, 0.2, 0.5, and 0.8) the network lifetime is maximized when R = 3.5, which agrees with the numerical results shown in Fig. 3.
In the following simulations, the parameters are set as follows: the initial energy of the basic sensor nodes E_init2 = 0.01 J, the distance between the cluster-heads and the mobile base station H = 100 m, l = l' = 120 bits, and λ2 = 1000.
Fig. 5 shows how the network lifetime changes with the average number of basic sensor nodes in a cluster when the optimal R* and a* are used and β is equal to 0, 100, and 1000. We observe that increasing the density of cluster-heads (i.e., decreasing λ2/λ1 for a constant λ2) does not always help to extend the network lifetime. For example, the network lifetime can be increased by 51% when the density of cluster-heads changes from λ1 = 0.01 (λ2/λ1 = 10^6) to λ1 = 0.1 (λ2/λ1 = 10^5), while the network lifetime is almost the same when the density of cluster-heads is greater than λ1 = 1 (i.e., the average number of basic sensor nodes λ2/λ1 ≤ 10^4). On the other hand, Fig. 6 shows the energy consumption of a cluster-head against the average number of basic sensor nodes. From Fig. 6, we find that the average energy consumption of a cluster-head is proportional to (λ2/λ1). The required initial energy of the cluster-heads increases as the density of cluster-heads (λ1) decreases. Thus, there is a trade-off between the network lifetime and the initial energy of the cluster-heads.
3-D WSN EXTENSION
Our work can be easily extended to a 3-D space model. The differences between the 3-D space model and the 2-D space model lie in the deployment model and the connectivity model.
The probability that a basic sensor node with spherical coordinates (r, θ, φ) belongs to a cluster-head located at the origin is

Pr{ S_(r,θ,φ) ∈ V } = e^(-λ1 (4/3) π r^3)    (30)
Fig. 5. Optimal network lifetime under various situations where λ2 = 1000 and β = 0, 100, 1000.
Fig. 6. The energy consumption of cluster-heads against the number of basic sensor nodes, for β = 0, 100, 1000.
The average number of basic sensor nodes in a cluster is determined by

E[N_V] = ∫_0^{2π} ∫_0^{π} ∫_0^{∞} Pr{ S_(r,θ,φ) ∈ V } λ2 r^2 sin θ dr dθ dφ = λ2 / λ1    (31)
In a similar way, we can obtain the average number of basic sensor nodes in the i-th zone as follows:

E[N_i] = ∫_0^{2π} ∫_0^{π} ∫_{(i-1)R}^{iR} Pr{ S_(r,θ,φ) ∈ V } λ2 r^2 sin θ dr dθ dφ = (λ2 / λ1) ( e^(-λ1 (4/3) π [(i-1)R]^3) - e^(-λ1 (4/3) π (iR)^3) )    (32)
To satisfy the requirement of connectivity, the minimum communication range R_min can be written as follows:

R_min = ( (3 / (4 π (λ1 + λ2))) log((λ1 + λ2)/ρ) )^(1/3)    (33)
Again, we can obtain the optimal a* and R* for the 3-D space model in the same manner as for the 2-D WSN model.
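For completeness, the 3-D counterparts of the zone population and the minimum range (Eqs. (32)-(33)) can be sketched in the same style; ρ and the example densities are again assumptions:

import math

def zone_population_3d(i, R, lam1, lam2):
    """Eq. (32): expected number of basic sensors in the i-th spherical shell."""
    vol = lambda r: lam1 * (4.0 / 3.0) * math.pi * r ** 3
    return (lam2 / lam1) * (math.exp(-vol((i - 1) * R)) - math.exp(-vol(i * R)))

def r_min_3d(lam1, lam2, rho):
    """Eq. (33): minimum communication range for connectivity in the 3-D model."""
    lam = lam1 + lam2
    return (3.0 * math.log(lam / rho) / (4.0 * math.pi * lam)) ** (1.0 / 3.0)

print(zone_population_3d(1, 1.0, 0.1, 500.0), r_min_3d(0.1, 500.0, 1e-3))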
CONCLUSION
We investigated the optimal transmission range for a heterogeneous
cluster-based sensor network which consists of
two types of nodes, the super cluster-heads and the basic
sensor nodes. To balance the energy load of the basic sensor
nodes, the mixed communication modes are employed. By
developing the analytical models, we numerically derived the optimal transmission range R* and the frequency of the single-hop mode a* that achieve the longest network lifetime. Our analyses also showed that the proposed model can be easily extended from 2-D to 3-D. The simulation results validated our proposed analytical models. Our simulation results with the optimal R* and a* indicated that a high density of cluster-heads is not very helpful for prolonging the network lifetime.
REFERENCES
[1] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "Wireless sensor
networks: a survey," Computer Networks, 38 (2002) pp. 393-422.
[2] G. J. Pottie and W. J. Kaiser, "Wireless integrated network sensors," Communications
of the ACM, Vol. 43, No 5, pp 51-58, May 2000.
[3] A. Cerpa, J. Elson, D. Estrin, L. Girod, M. Hamilton, and J. Zhao, "Habitat
monitoring: application driver for wireless communications technology," in Proc.
of ACM SIGCOMM Workshop on Data Communications in Latin America and the
Caribbean, Costa Rica, April 2001.
[4] J. Lundquist, D. Cayan, and M. Dettinger, "Meteorology and hydrology in yosemite
national park: a sensor network application," in Proc. of Information Processing
in Sensor Networks (IPSN), April, 2003.
[5] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson, "Wireless
sensor networks for habitat monitoring," in Proc. of WSNA'02, Atlanta, Georgia,
September 28, 2002.
[6] G. Tolle, D. Gay, W. Hong, J. Polastre, R. Szewczyk, D. Culler, N. Turner, K.
Tu, S. Burgess, T. Dawson, and P. Buonadonna, "A macroscope in the redwoods,"
in 3rd ACM Conference on Embedded Networked Sensor Systems (SenSys), San
Diego, November 2-4, 2005
[7] O. Younis and S. Fahmy, "Distributed clustering in ad-hoc sensor networks: a
hybrid, energy-efficient approach", in Proc. of INFOCOM'04, March 2004
[8] A. Boulis, S. Ganeriwal, and M. B. Srivastava, "Aggregation in sensor networks:
an energy-accuracy trade-off," Elsevier Ad Hoc Networks Journal, Vol. 1, 2003,
pp. 317-331
[9] W. R. Heinzelman, A. Chandrakasan and H. Balakrishnan, "Energy-efficient
communication protocol for wireless microsensor networks," in Proc. of IEEE
HICSS, January 2000.
[10] H. Su and X. Zhang, "Energy-efficient clustering system model and reconfiguration
schemes for wireless sensor networks," in proc. of the 40th Conference on
Information Sciences and Systems (CISS 2006), March 2006.
[11] V. P. Mhatre, C. Rosenberg, D. Kofman, R. Mazumdar and N. Shroff, "A minimum
cost heterogeneous sensor network with a lifetime constraint," IEEE Trans. on
Mobile Computing, Vol.4, No.1, Jan./Feb. 2005, pp.4-15.
[12] M. Bhardwaj and A. P. Chandrakasan, "Bounding the lifetime of sensor networks
via optimal role assignments," in Proc. of INFOCOM'02, pp.1587-1596, 2002.
[13] V. Mhatre and C. Rosenberg, "Design guidelines for wireless sensor networks:
communication, clustering and aggregation", Elsevier Ad Hoc Networks Journal,
Vol. 2, 2004, pp. 45-63.
[14] S. G. Foss and S. A. Zuyev, "On a voronoi aggregative process related to a bivariate
poisson process," Advances in Applied Probability, vol. 28, no. 4, 1996, pp. 965-981
.
[15] J. H. Chang and L. Tassiulas, "Energy conserving routing in wireless ad hoc
networks," in Proc. INFOCOM, Tel Aviv, Israel, March 2000, pp. 22-31.
[16] T. Rappaport, Wireless Communications: Principles and Practice (2nd Edition). Upper
Saddle River, N.J. Prentice Hall PTR, 2002.
[17] A. Wang, W. Heinzelman, and A. Chandrakasan, "Energy-scalable protocols for
battery-operated microsensor networks," in Proc. of the IEEE workshop on Signal
Processing Systems (SiPS'99), pp. 483-492, 1999.
[18] P. Gupta and P. R. Kumar, Critical power for asymptotic connectivity in wireless
networks, in W. M. McEneany, G. Yin, Q. Zhang (Editors), Stochastic Analysis,
Control, Optimization and Applications: A Volume in Honor of W. H. Fleming,
Birkhauser, Boston, MA, 1998, pp. 547-566.
[19] T. Shepard, "Decentralized channel management in scalable multihop spread
spectrum packet radio networks," Massachusetts Inst. of Technol., Lab. for Comput.
Sci., Cambridge, Tech. Rep. MIT/LCS/ TR-670, July 1995.
| Wireless sensor networks;heterogeneous cluster-based sensor network;optimization;network lifetime;optimal transmission range;energy optimization;Voronoi cell;numerical model;clustering |
146 | Parallel Crawlers | In this paper we study how we can design an effective parallel crawler. As the size of the Web grows, it becomes imperative to parallelize a crawling process, in order to finish downloading pages in a reasonable amount of time. We first propose multiple architectures for a parallel crawler and identify fundamental issues related to parallel crawling. Based on this understanding, we then propose metrics to evaluate a parallel crawler, and compare the proposed architectures using 40 million pages collected from the Web. Our results clarify the relative merits of each architecture and provide a good guideline on when to adopt which architecture. | INTRODUCTION
A crawler is a program that downloads and stores Web
pages, often for a Web search engine. Roughly, a crawler
starts off by placing an initial set of URLs, S
, in a queue,
where all URLs to be retrieved are kept and prioritized.
From this queue, the crawler gets a URL (in some order),
downloads the page, extracts any URLs in the downloaded
page, and puts the new URLs in the queue. This process is
repeated until the crawler decides to stop. Collected pages
are later used for other applications, such as a Web search
engine or a Web cache.
As the size of the Web grows, it becomes more difficult to
retrieve the whole or a significant portion of the Web using
a single process. Therefore, many search engines often run
multiple processes in parallel to perform the above task, so
that download rate is maximized. We refer to this type of
crawler as a parallel crawler.
In this paper we study how we should design a parallel
crawler, so that we can maximize its performance (e.g.,
download rate) while minimizing the overhead from parallelization
. We believe many existing search engines already
use some sort of parallelization, but there has been little
scientific research conducted on this topic. Thus, little has
been known on the tradeoffs among various design choices
for a parallel crawler. In particular, we believe the following
issues make the study of a parallel crawler challenging and
interesting:
Overlap: When multiple processes run in parallel to
download pages, it is possible that different processes
download the same page multiple times. One process
may not be aware that another process has already
downloaded the page. Clearly, such multiple downloads
should be minimized to save network bandwidth
and increase the crawler's effectiveness. Then how can
we coordinate the processes to prevent overlap?
Quality: Often, a crawler wants to download "important"
pages first, in order to maximize the "quality"
of the downloaded collection. However, in a parallel
crawler, each process may not be aware of the whole
image of the Web that they have collectively downloaded
so far. For this reason, each process may make
a crawling decision solely based on its own image of
the Web (that itself has downloaded) and thus make
a poor crawling decision. Then how can we make sure
that the quality of the downloaded pages is as good for
a parallel crawler as for a centralized one?
Communication bandwidth: In order to prevent overlap
, or to improve the quality of the downloaded pages,
crawling processes need to periodically communicate
to coordinate with each other. However, this communication
may grow significantly as the number of crawling
processes increases. Exactly what do they need to
communicate and how significant would this overhead
be? Can we minimize this communication overhead
while maintaining the effectiveness of the crawler?
While challenging to implement, we believe that a parallel
crawler has many important advantages, compared to a
single-process crawler:
Scalability: Due to the enormous size of the Web, it is often
imperative to run a parallel crawler. A single-process
crawler simply cannot achieve the required download
rate in certain cases.
Network-load dispersion: Multiple crawling processes
of a parallel crawler may run at geographically distant
locations, each downloading "geographically-adjacent"
pages. For example, a process in Germany may download
all European pages, while another one in Japan
crawls all Asian pages. In this way, we can disperse
the network load to multiple regions. In particular,
this dispersion might be necessary when a single network
cannot handle the heavy load from a large-scale
crawl.
Network-load reduction: In addition to the dispersing
load, a parallel crawler may actually reduce the network
load.
For example, assume that a crawler in
North America retrieves a page from Europe. To be
downloaded by the crawler, the page first has to go
through the network in Europe, then the Europe-to-North
America inter-continental network and finally
the network in North America. Instead, if a crawling
process in Europe collects all European pages, and
if another process in North America crawls all North
American pages, the overall network load will be reduced
, because pages go through only "local" networks.
Note that the downloaded pages may need to be transferred
later to a central location, so that a central index
can be built. However, even in that case, we believe
that the transfer can be significantly smaller than the
original page download traffic, by using some of the
following methods:
Compression: Once the pages are collected and
stored, it is easy to compress the data before sending
them to a central location.
Difference: Instead of sending the entire image
with all downloaded pages, we may first take difference
between previous image and the current
one and send only this difference. Since many
pages are static and do not change very often,
this scheme can significantly reduce the network
traffic.
Summarization: In certain cases, we may need
only a central index, not the original pages themselves
. In this case, we may extract the necessary
information for the index construction (e.g., postings
list) and transfer this data only.
To build an effective web crawler, we clearly need to address
many more challenges than just parallelization. For
example, a crawler needs to figure out how often a page
changes and how often it would revisit the page in order to
maintain the page up to date [7, 10]. Also, it has to make
sure that a particular Web site is not flooded with its HTTP
requests during a crawl [17, 12, 24]. In addition, it has to
carefully select what page to download and store in its limited
storage space in order to make the best use of its stored
collection of pages [9, 5, 11]. While all of these issues are
important, we focus on the crawler parallelization in this
paper, because this problem has been paid significantly less
attention than the others.
In summary, we believe a parallel crawler has many advantages
and poses interesting challenges. In particular, we
believe our paper makes the following contributions:
We identify major issues and problems related to a
parallel crawler and discuss how we can solve these
problems.
We present multiple techniques for a parallel crawler
and discuss their advantages and disadvantages. As far
as we know most of these techniques have not been described
in open literature. (Very little is known about
the internals of crawlers, as they are closely guarded
secrets.)
Using a large dataset (40M web pages) collected from
the Web, we experimentally compare the design choices
and study their tradeoffs quantitatively.
We propose various optimization techniques that can
minimize the coordination effort between crawling processes
, so that they can operate more independently
while maximizing their effectiveness.
1.1
Related work
Web crawlers have been studied since the advent of the
Web [18, 23, 4, 22, 14, 6, 19, 12, 9, 5, 11, 10, 7]. These
studies can be roughly categorized into one of the following
topics:
General architecture [22, 14, 6, 19, 12]: The work in
this category describes the general architecture of a
Web crawler and studies how a crawler works. For example
, Reference [14] describes the architecture of the
Compaq SRC crawler and its major design goals. Some
of these studies briefly describe how the crawling task
is parallelized. For instance, Reference [22] describes
a crawler that distributes individual URLs to multiple
machines, which download Web pages in parallel.
The downloaded pages are then sent to a central machine
, on which links are extracted and sent back to
the crawling machines. However, these studies do not
carefully compare various issues related to a parallel
crawler and how design choices affect performance. In
this paper, we first identify multiple techniques for a
parallel crawler and compare their relative merits using
real Web data.
Page selection [9, 5, 11]: Since many crawlers can
download only a small subset of the Web, crawlers need
to carefully decide what page to download. By retrieving
"important" or "relevant" pages early, a crawler
may improve the "quality" of the downloaded pages.
The studies in this category explore how a crawler can
discover and identify "important" pages early, and propose
various algorithms to achieve this goal. In our paper
, we study how parallelization affects some of these
techniques and explain how we can fix the problems
introduced by parallelization.
Page update [10, 7]: Web crawlers need to update the
downloaded pages periodically, in order to maintain
the pages up to date. The studies in this category
discuss various page revisit policies to maximize the
"freshness" of the downloaded pages.
For example,
Reference [7] studies how a crawler should adjust revisit
frequencies for pages when the pages change at
different rates. We believe these studies are orthogonal
to what we discuss in this paper.
There also exists a significant body of literature studying
the general problem of parallel and distributed computing
[21, 25]. Some of these studies focus on the design of efficient
parallel algorithms. For example, References [20, 16]
present various architectures for parallel computing, propose
algorithms that solve various problems (e.g., finding
maximum cliques) under the architecture, and study the
complexity of the proposed algorithms. While the general
principles described are being used in our work,¹ none of
the existing solutions can be directly applied to the crawling
problem.
Another body of literature designs and implements distributed
operating systems, where a process can use distributed
resources transparently (e.g., distributed memory,
distributed file systems) [25, 1]. Clearly, such OS-level support
makes it easy to build a general distributed application
, but we believe that we cannot simply run a centralized
crawler on a distributed OS to achieve parallelism. A web
crawler contacts millions of web sites in a short period of
time and consumes extremely large network, storage and
memory resources. Since these loads push the limits of existing hardware, the task should be carefully partitioned among the processes, and the processes should be carefully coordinated.
Therefore, a general-purpose distributed operating system
that does not understand the semantics of web crawling will
lead to unacceptably poor performance.
ARCHITECTURE OF A PARALLEL CRAWLER
In Figure 1 we illustrate the general architecture of a parallel
crawler. A parallel crawler consists of multiple crawling
processes, which we refer to as
C-proc's. Each C-proc performs
the basic tasks that a single-process crawler conducts.
It downloads pages from the Web, stores the pages locally,
extracts URLs from the downloaded pages and follows links.
Depending on how the
C-proc's split the download task,
some of the extracted links may be sent to other
C-proc's.
The
C-proc's performing these tasks may be distributed either
on the same local network or at geographically distant
locations.
Intra-site parallel crawler: When all C-proc's run on
the same local network and communicate through a
high speed interconnect (such as LAN), we call it an
intra-site parallel crawler. In Figure 1, this scenario
corresponds to the case where all
C-proc's run only on
the local network on the top. In this case, all
C-proc's
use the same local network when they download pages
from remote Web sites. Therefore, the network load
from
C-proc's is centralized at a single location where
they operate.
Distributed crawler: When C-proc's run at geographically
distant locations connected by the Internet (or
a wide area network), we call it a distributed crawler.
For example, one
C-proc may run in the US, crawling
all US pages, and another
C-proc may run in France,
crawling all European pages. As we discussed in the
introduction, a distributed crawler can disperse and
even reduce the load on the overall network.
When
C-proc's run at distant locations and communicate
through the Internet, it becomes important how
often and how much
C-proc's need to communicate.
The bandwidth between
C-proc's may be limited and
¹ For example, we may consider that our proposed solution is a variation of the "divide and conquer" approach, since we partition and assign the Web to multiple processes.
Figure 1: General architecture of a parallel crawler. Multiple C-proc's, each with its own queue of URLs to visit and its own store of collected pages, run on one or more local networks connected through the Internet.
Figure 2: Site S₁ (pages a through e) is crawled by C₁ and site S₂ (pages f through i) is crawled by C₂
sometimes unavailable, as is often the case with the
Internet.
When multiple
C-proc's download pages in parallel, different
C-proc's may download the same page multiple times. In
order to avoid this overlap,
C-proc's need to coordinate with
each other on what pages to download. This coordination
can be done in one of the following ways:
Independent: At one extreme, C-proc's may download
pages totally independently without any coordination.
That is, each
C-proc starts with its own set of seed
URLs and follows links without consulting with other
C-proc's. In this scenario, downloaded pages may overlap
, but we may hope that this overlap will not be significant
, if all
C-proc's start from different seed URLs.
While this scheme has minimal coordination overhead
and can be very scalable, we do not directly study
this option due to its overlap problem. Later we will
consider an improved version of this option, which significantly
reduces overlap.
Dynamic assignment: When there exists a central coordinator
that logically divides the Web into small partitions
(using a certain partitioning function) and dynamically
assigns each partition to a
C-proc for download
, we call it dynamic assignment.
For example, assume that a central coordinator partitions
the Web by the site name of a URL. That
is, pages in the same site (e.g., http://cnn.com/top.
html and http://cnn.com/content.html) belong to
the same partition, while pages in different sites belong
to different partitions. Then during a crawl, the
central coordinator constantly decides on what partition
to crawl next (e.g., the site cnn.com) and sends
URLs within this partition (that have been discovered
so far) to a
C-proc as seed URLs. Given this request,
the
C-proc downloads the pages and extracts links from
them. When the extracted links point to pages in the
same partition (e.g., http://cnn.com/article.html),
the
C-proc follows the links, but if a link points to a
page in another partition (e.g., http://nytimes.com/
index.html), the
C-proc reports the link to the central
coordinator. The central coordinator later uses
this link as a seed URL for the appropriate partition.
Note that the Web can be partitioned at various granularities. At one extreme, the central coordinator may
consider every page as a separate partition and assign
individual URLs to
C-proc's for download. In this case,
a
C-proc does not follow links, because different pages
belong to separate partitions.
It simply reports all
extracted URLs back to the coordinator. Therefore,
the communication between a
C-proc and the central
coordinator may vary dramatically, depending on the
granularity of the partitioning function.
Static assignment: When the Web is partitioned and
assigned to each
C-proc before they start to crawl, we
call it static assignment. In this case, every
C-proc
knows which
C-proc is responsible for which page during
a crawl, and the crawler does not need a central
coordinator. We will shortly discuss in more detail
how
C-proc's operate under this scheme.
In this paper, we mainly focus on static assignment because
of its simplicity and scalability, and defer the study of
dynamic assignment to future work. Note that in dynamic
assignment, the central coordinator may become the major
bottleneck, because it has to maintain a large number of
URLs reported from all
C-proc's and has to constantly coordinate
all
C-proc's. Thus the coordinator itself may also
need to be parallelized.
CRAWLING MODES FOR STATIC ASSIGNMENT
Under static assignment, each
C-proc is responsible for
a certain partition of the Web and has to download pages
within the partition. However, some pages in the partition
may have links to pages in another partition. We refer to
this type of link as an inter-partition link. To illustrate how
a C-proc may handle inter-partition links, we use Figure 2 as our example. In the figure, we assume two C-proc's, C₁ and C₂, are responsible for sites S₁ and S₂, respectively. For now, we assume that the Web is partitioned by sites and that the Web has only S₁ and S₂. Also, we assume that each C-proc starts its crawl from the root page of each site, a and f.
1. Firewall mode: In this mode, each C-proc downloads only the pages within its partition and does not follow any inter-partition link. All inter-partition links are ignored and thrown away. For example, the links a → g, c → g and h → d in Figure 2 are ignored and thrown away by C₁ and C₂.
In this mode, the overall crawler does not have any overlap in the downloaded pages, because a page can be downloaded by only one C-proc, if ever. However, the overall crawler may not download all pages that it has to download, because some pages may be reachable only through inter-partition links. For example, in Figure 2, C₁ can download a, b and c, but not d and e, because they can be reached only through the link h → d. However, C-proc's can run quite independently in this mode, because they do not conduct any run-time coordination or URL exchanges.
2. Cross-over mode: Primarily, each C-proc downloads pages within its partition, but when it runs out of pages in its partition, it also follows inter-partition links. For example, consider C₁ in Figure 2. Process C₁ first downloads pages a, b and c by following links from a. At this point, C₁ runs out of pages in S₁, so it follows a link to g and starts exploring S₂. After downloading g and h, it discovers a link to d in S₁, so it comes back to S₁ and downloads pages d and e.
In this mode, downloaded pages may clearly overlap (pages g and h are downloaded twice), but the overall crawler can download more pages than in the firewall mode (C₁ downloads d and e in this mode). Also, as in the firewall mode, C-proc's do not need to communicate with each other, because they follow only the links discovered by themselves.
3. Exchange mode: When C-proc's periodically and incrementally exchange inter-partition URLs, we say that they operate in an exchange mode. Processes do not follow inter-partition links.
For example, C₁ in Figure 2 informs C₂ of page g after it downloads page a (and c), and C₂ transfers the URL of page d to C₁ after it downloads page h. Note that C₁ does not follow links to page g. It only transfers the links to C₂, so that C₂ can download the page. In this way, the overall crawler can avoid overlap, while maximizing coverage.
Note that the firewall and the cross-over modes give
C-proc's
much independence (C-proc's do not need to communicate
with each other), but they may download the same
page multiple times, or may not download some pages. In
contrast, the exchange mode avoids these problems but requires
constant URL exchange between
C-proc's.
3.1
URL exchange minimization
To reduce URL exchange, a crawler based on the exchange
mode may use some of the following techniques:
1. Batch communication: Instead of transferring an
inter-partition URL immediately after it is discovered,
a
C-proc may wait for a while, to collect a set of URLs
and send them in a batch. That is, with batching, a
C-proc collects all inter-partition URLs until it downloads
k pages. Then it partitions the collected URLs
and sends them to an appropriate
C-proc. Once these
URLs are transferred, the
C-proc then purges them
and starts to collect a new set of URLs from the next
downloaded pages. Note that a
C-proc does not maintain
the list of all inter-partition URLs discovered so
far. It only maintains the list of inter-partition links
in the current batch, in order to minimize the memory
overhead for URL storage.
This batch communication has various advantages over
incremental communication. First, it incurs less communication
overhead, because a set of URLs can be
sent in a batch, instead of sending one URL per message
. Second, the absolute number of exchanged URLs
will also decrease. For example, consider C₁ in Figure 2. The link to page g appears twice, in page a and in page c. Therefore, if C₁ transfers the link to g after downloading page a, it needs to send the same URL again after downloading page c.² In contrast, if C₁ waits until page c and sends the URLs in a batch, it needs to send the URL for g only once.
2. Replication: It is known that the number of incoming
links to pages on the Web follows a Zipfian distribution
[3, 2, 26]. That is, a small number of Web
pages have an extremely large number of links pointing
to them, while a majority of pages have only a small
number of incoming links.
Thus, we may significantly reduce URL exchanges, if
we replicate the most "popular" URLs at each
C-proc
(by most popular, we mean the URLs with most incoming
links) and stop transferring them between
C-proc's
. That is, before we start crawling pages, we
identify the most popular k URLs based on the image
of the Web collected in a previous crawl. Then
we replicate these URLs at each
C-proc, so that the
C-proc's do not exchange them during a crawl. Since
a small number of Web pages have a large number of
incoming links, this scheme may significantly reduce
URL exchanges between
C-proc's, even if we replicate
a small number of URLs.
Note that some of the replicated URLs may be used
as the seed URLs for a
C-proc. That is, if some URLs
in the replicated set belong to the same partition that
a
C-proc is responsible for, the C-proc may use those
URLs as its seeds rather than starting from other pages.
Also note that it is possible that each
C-proc tries to
discover popular URLs on the fly during a crawl, instead
of identifying them based on the previous image
. For example, each
C-proc may keep a "cache"
of recently seen URL entries. This cache may pick
up "popular" URLs automatically, because the popular
URLs show up repeatedly. However, we believe
that the popular URLs from a previous crawl will be a
good approximation for the popular URLs in the current
Web; Most popular Web pages (such as Yahoo!)
maintain their popularity for a relatively long period
of time, even if their exact popularity may change
slightly.
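To make the batching and replication ideas above concrete, a C-proc could keep per-destination buffers as in the sketch below; partition_of, the replicated set and the batch size k are hypothetical names introduced for this illustration, not part of the paper:

from collections import defaultdict

class ExchangeBuffer:
    """Collects inter-partition URLs and ships them in batches of k downloads,
    skipping URLs that are replicated at every C-proc."""

    def __init__(self, my_id, partition_of, replicated, k, send):
        self.my_id = my_id
        self.partition_of = partition_of  # url -> C-proc id (e.g., a site hash)
        self.replicated = replicated      # set of popular URLs known everywhere
        self.k = k                        # exchange after every k downloads
        self.send = send                  # send(dest_id, list_of_urls)
        self.pages_since_flush = 0
        self.pending = defaultdict(set)   # dest_id -> URLs of the current batch

    def on_page_downloaded(self, extracted_urls):
        for url in extracted_urls:
            dest = self.partition_of(url)
            if dest != self.my_id and url not in self.replicated:
                self.pending[dest].add(url)   # batching also deduplicates
        self.pages_since_flush += 1
        if self.pages_since_flush >= self.k:
            self.flush()

    def flush(self):
        for dest, urls in self.pending.items():
            self.send(dest, sorted(urls))
        self.pending.clear()
        self.pages_since_flush = 0

A C-proc would call on_page_downloaded after each page it fetches; deduplicating within a batch is what saves the second transfer of the link to g in the example above.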
3.2
Partitioning function
So far, we have mainly assumed that the Web pages are
partitioned by Web sites. Clearly, there exists a multitude
of ways to partition the Web, including the following:
1. URL-hash based: Based on the hash value of the
URL of a page, we assign the page to a
C-proc. In
² When it downloads page c, it does not remember whether the link to g has already been sent.
this scheme, pages in the same site can be assigned
to different
C-proc's. Therefore, the locality of link
structure³ is not reflected in the partition, and there
will be many inter-partition links.
2. Site-hash based: Instead of computing the hash value
on an entire URL, we compute the hash value only
on the site name of a URL (e.g., cnn.com in http:
//cnn.com/index.html) and assign the page to a
C-proc
.
In this scheme, note that the pages in the same site
will be allocated to the same partition. Therefore, only
some of the inter-site links will be inter-partition links,
and thus we can reduce the number of inter-partition
links quite significantly compared to the URL-hash
based scheme.
3. Hierarchical: Instead of using a hash-value, we may
partition the Web hierarchically based on the URLs
of pages. For example, we may divide the Web into
three partitions (the pages in the .com domain, .net
domain and all other pages) and allocate them to three
C-proc's. Even further, we may decompose the Web by
language or country (e.g., .mx for Mexico).
Because pages hosted in the same domain or country may be more likely to link to pages in the same domain, this scheme may have even fewer inter-partition links than the site-hash based scheme.
In this paper, we do not consider the URL-hash based scheme,
because it generates a large number of inter-partition links.
When the crawler uses URL-hash based scheme,
C-proc's
need to exchange much larger number of URLs (exchange
mode), and the coverage of the overall crawler can be much
lower (firewall mode).
In addition, in our later experiments, we will mainly use
the site-hash based scheme as our partitioning function. We
chose this option because it is much simpler to implement,
and because it captures the core issues that we want to
study.
For example, under the hierarchical scheme, it is
not easy to divide the Web into equal size partitions, while
it is relatively straightforward under the site-hash based
scheme.⁴ Also, we believe we can interpret the results from
the site-hash based scheme as the upper/lower bound for
the hierarchical scheme. For instance, assuming Web pages
link to more pages in the same domain, the number of inter-partition
links will be lower in the hierarchical scheme than
in the site-hash based scheme (although we could not confirm
this trend in our experiments).
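A minimal site-hash partitioning function of the kind assumed in our experiments might look like the following sketch; the choice of Python's hashlib and urllib here is purely illustrative and not a description of the actual WebBase implementation:

import hashlib
from urllib.parse import urlparse

def site_hash_partition(url: str, num_procs: int) -> int:
    """Assign a URL to a C-proc based on a hash of its site name only,
    so that all pages of one site land in the same partition."""
    site = urlparse(url).netloc.lower()
    digest = hashlib.md5(site.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_procs

# Both cnn.com pages map to the same C-proc; nytimes.com may map elsewhere.
print(site_hash_partition("http://cnn.com/top.html", 4),
      site_hash_partition("http://cnn.com/content.html", 4),
      site_hash_partition("http://nytimes.com/index.html", 4))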
In Figure 3, we summarize the options that we have discussed
so far.
The right-hand table in the figure shows a more detailed view of the static coordination scheme. In
the diagram, we highlight the main focus of our paper with
dark grey. That is, we mainly study the static coordination
scheme (the third column in the left-hand table) and we use
the site-hash based partitioning for our experiments (the
second row in the second table). However, during our discussion
, we will also briefly explore the implications of other
options. For instance, the firewall mode is an "improved" version of the independent coordination scheme (in the first table), so our study of the firewall mode will show the implications of the independent coordination scheme. Also, we roughly estimate the performance of the URL-hash based scheme (first row in the second table) when we discuss the results from the site-hash based scheme.

³ According to our experiments, about 90% of the links in a page point to pages in the same site on average.
⁴ While the sizes of individual Web sites vary, the sizes of partitions are similar, because each partition contains many Web sites and their average sizes are similar among partitions.

Figure 3: Summary of the options discussed. Coordination can be independent, dynamic or static; under static assignment, the crawling mode is firewall, cross-over or exchange (with batch communication, replication or neither), and the partitioning function is URL-hash, site-hash or hierarchical. The main focus of this paper is the static scheme with site-hash partitioning; the other options are discussed more briefly.
Given our table of crawler design space, it would be very
interesting to see what options existing search engines selected
for their own crawlers. Unfortunately, this information
is impossible to obtain in most cases because companies
consider their technologies proprietary and want to keep
them secret. The only two crawler designs that we know of
are the prototype Google crawler [22] (when it was developed
at Stanford) and the Mercator crawler [15] at Compaq.
The prototype Google crawler used the intra-site, static and
site-hash based scheme and ran in exchange mode [22]. The
Mercator crawler uses the site-based hashing scheme.
EVALUATION MODELS
In this section, we define metrics that will let us quantify
the advantages or disadvantages of different parallel crawling
schemes. These metrics will be used later in our experiments
.
1. Overlap: When multiple
C-proc's are downloading
Web pages simultaneously, it is possible that different
C-proc's download the same page multiple times.
Multiple downloads of the same page are clearly undesirable
.
More precisely, we define the overlap of downloaded pages as (N - I)/I. Here, N represents the total number of pages downloaded by the overall crawler, and I represents the number of unique pages downloaded, again, by the overall crawler. Thus, the goal of a parallel crawler is to minimize the overlap.
Note that a parallel crawler does not have an overlap
problem, if it is based on the firewall mode (Section 3,
Item 1) or the exchange mode (Section 3, Item 3). In
these modes, a
C-proc downloads pages only within its
own partition, so the overlap is always zero.
2. Coverage: When multiple
C-proc's run independently,
it is possible that they may not download all pages that
they have to. In particular, a crawler based on the firewall
mode (Section 3, Item 1) may have this problem,
because its
C-proc's do not follow inter-partition links
nor exchange the links with others.
To formalize this notion, we define the coverage of downloaded pages as I/U, where U represents the total number of pages that the overall crawler has to download, and I is the number of unique pages downloaded by the overall crawler. For example, in Figure 2, if C₁ downloaded pages a, b and c, and if C₂ downloaded pages f through i, the coverage of the overall crawler is 7/9 = 0.77, because it downloaded 7 pages out of 9.
3. Quality: Often, crawlers cannot download the whole
Web, and thus they try to download an "important" or
"relevant" section of the Web. For example, a crawler
may have storage space only for 1 million pages and
may want to download the "most important" 1 million
pages. To implement this policy, a crawler needs
a notion of "importance" of pages, often called an importance
metric [9].
For example, let us assume that the crawler uses backlink
count as its importance metric. That is, the crawler
considers a page p important when a lot of other pages
point to it. Then the goal of the crawler is to download
the most highly-linked 1 million pages. To achieve this
goal, a single-process crawler may use the following
method [9]: The crawler constantly keeps track of how
many backlinks each page has from the pages that it
has already downloaded, and first visits the page with
the highest backlink count. Clearly, the pages downloaded
in this way may not be the top 1 million pages,
because the page selection is not based on the entire
Web, only on what has been seen so far. Thus, we may
formalize the notion of "quality" of downloaded pages
as follows [9]:
First, we assume a hypothetical oracle crawler, which knows the exact importance of every page under a certain importance metric. We assume that the oracle crawler downloads the most important N pages in total, and we use P_N to represent that set of N pages. We also use A_N to represent the set of N pages that an actual crawler would download, which would not necessarily be the same as P_N. Then we define |A_N ∩ P_N| / |P_N| as the quality of the pages downloaded by the actual crawler. Under this definition, the quality represents the fraction of the true top N pages that are downloaded by the crawler.
Note that the quality of a parallel crawler may be
worse than that of a single-process crawler, because
many importance metrics depend on the global structure
of the Web (e.g., backlink count). That is, each
C-proc
in a parallel crawler may know only the pages that
are downloaded by itself, and thus have less information
on page importance than a single-process crawler
does.
On the other hand, a single-process crawler
knows all pages it has downloaded so far. Therefore, a
C-proc in a parallel crawler may make a worse crawling
decision than a single-process crawler.
In order to avoid this quality problem,
C-proc's need to
periodically exchange information on page importance.
For example, if the backlink count is the importance
metric, a
C-proc may periodically notify other C-proc's
of how many pages in its partition have links to pages
in other partitions.
Note that this backlink exchange can be naturally incorporated
in an exchange mode crawler (Section 3,
Item 3). In this mode, crawling processes exchange
inter-partition URLs periodically, so a
C-proc can simply
count how many inter-partition links it receives
from other
C-proc's, to count backlinks originating in
other partitions. More precisely, if the crawler uses the
batch communication technique (Section 3.1, Item 1),
process C₁ would send a message like [http://cnn.com/index.html, 3] to C₂, to notify that C₁ has seen 3 links to the page in the current batch.⁵ On receipt of this message, C₂ then increases the backlink count for the page by 3 to reflect the inter-partition links. By
incorporating this scheme, we believe that the quality
of the exchange mode will be better than that of the
firewall mode or the cross-over mode.
However, note that the quality of an exchange mode
crawler may vary depending on how often it exchanges
backlink messages. For instance, if crawling processes
exchange backlink messages after every page download
, they will have essentially the same backlink information
as a single-process crawler does. (They know
backlink counts from all pages that have been downloaded
.)
Therefore, the quality of the downloaded
pages would be virtually the same as that of a single-process
crawler.
In contrast, if
C-proc's rarely exchange
backlink messages, they do not have "accurate"
backlink counts from downloaded pages, so they may
make poor crawling decisions, resulting in poor quality
. Later, we will study how often
C-proc's should
exchange backlink messages in order to maximize the
quality.
4. Communication overhead: The
C-proc's in a parallel
crawler need to exchange messages to coordinate
their work. In particular,
C-proc's based on the exchange
mode (Section 3, Item 3) swap their inter-partition
URLs periodically. To quantify how much
communication is required for this exchange, we define
communication overhead as the average number of
inter-partition URLs exchanged per downloaded page.
⁵ If the C-proc's send inter-partition URLs incrementally after every page, they can send the URL only, and the other C-proc's can simply count these URLs.
Mode        Coverage  Overlap  Quality  Communication
Firewall    Bad       Good     Bad      Good
Cross-over  Good      Bad      Bad      Good
Exchange    Good      Good     Good     Bad

Table 1: Comparison of three crawling modes
For example, if a parallel crawler has downloaded 1,000
pages in total and if its
C-proc's have exchanged 3,000
inter-partition URLs, its communication overhead is
3,000/1,000 = 3. Note that crawlers based on the firewall and the cross-over modes do not have any communication overhead, because they do not exchange any inter-partition URLs.
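As a concrete reading of the two metrics defined in this section, the following minimal Python sketch (our own illustration; the argument names are hypothetical) computes quality and communication overhead from basic crawl statistics:

    def quality(downloaded_pages, true_top_n):
        # Fraction of the true top-N pages that the crawler actually downloaded.
        return len(set(downloaded_pages) & set(true_top_n)) / len(true_top_n)

    def communication_overhead(exchanged_urls, downloaded_pages):
        # Average number of inter-partition URLs exchanged per downloaded page.
        return exchanged_urls / downloaded_pages

    print(communication_overhead(3000, 1000))   # the example above: 3.0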
In Table 1, we compare the relative merits of the three
crawling modes (Section 3, Items 1-3). In the table, "Good"
means that the mode is expected to perform relatively well
for that metric, and "Bad" means that it may perform worse
compared to other modes. For instance, the firewall mode
does not exchange any inter-partition URLs (Communication
: Good) and downloads pages only once (Overlap: Good),
but it may not download every page (Coverage: Bad). Also,
because
C-proc's do not exchange inter-partition URLs, the
downloaded pages may be of lower quality than those of an
exchange mode crawler. Later, we will examine these issues
more quantitatively through experiments based on real Web
data.
DESCRIPTION OF DATASET
We have discussed various issues related to a parallel
crawler and identified multiple alternatives for its architecture
. In the remainder of this paper, we quantitatively study
these issues through experiments conducted on real Web
data.
In all of the following experiments, we used 40 million
Web pages in our Stanford WebBase repository. Because
the property of this dataset may significantly impact the
result of our experiments, readers might be interested in
how we collected these pages.
We downloaded the pages using our Stanford WebBase
crawler in December 1999 in the period of 2 weeks.
In
downloading the pages, the WebBase crawler started with
the URLs listed in Open Directory (http://www.dmoz.org),
and followed links. We decided to use the Open Directory
URLs as seed URLs, because these pages are the ones that
are considered "important" by its maintainers. In addition,
some of our local WebBase users were keenly interested in
the Open Directory pages and explicitly requested that we
cover them. The total number of URLs in the Open Directory
was around 1 million at that time. Then conceptually,
the WebBase crawler downloaded all these pages, extracted
URLs within the downloaded pages, and followed links in a
breadth-first manner. (The WebBase crawler uses various
techniques to expedite and prioritize crawling process, but
we believe these optimizations do not affect the final dataset
significantly.)
Our dataset is relatively "small" (40 million pages) compared
to the full Web, but keep in mind that using a significantly
larger dataset would have made many of our experiments
prohibitively expensive. As we will see, each of the graphs we present studies multiple configurations, and
for each configuration, multiple crawler runs were made to obtain statistically valid data points.
[Figure 4: Number of processes vs. Coverage -- coverage (0.2-1) plotted against the number of C-proc's, n = 2 to 64.]
[Figure 5: Number of seed URLs vs. Coverage -- coverage (0.2-1) plotted against the total number of seed URLs, s = 64 to 30,000, for 2, 8, 32 and 64 processes.]
Each run involves simulating
how one or more
C-proc's would visit the 40 million
pages. Such detailed simulations are inherently very time
consuming.
It is clearly difficult to predict what would happen for a
larger dataset. In the extended version of this paper [8],
we examine this data size issue a bit more carefully and
discuss whether a larger dataset would have changed our
conclusions.
FIREWALL MODE AND COVERAGE
A firewall mode crawler (Section 3, Item 1) has minimal
communication overhead, but it may have coverage and
quality problems (Section 4). In this section, we quantitatively
study the effectiveness of a firewall mode crawler
using the 40 million pages in our repository. In particular,
we estimate the coverage (Section 4, Item 2) of a firewall
mode crawler when it employs n
C-proc's in parallel. (We
discuss the quality issue of a parallel crawler later.)
In our experiments, we considered the 40 million pages
within our WebBase repository as the entire Web, and we
used site-hash based partitioning (Section 3.2, Item 2). As
the seed URLs, each
C-proc was given 5 random URLs from
its own partition, so 5n seed URLs were used in total by the
overall crawler. (We discuss the effect of the number of seed
URLs shortly.) Since the crawler ran in firewall mode,
C-proc's
followed only intra-partition links, not inter-partition
links. Under these settings, we let the
C-proc's run until
they ran out of URLs. After this simulated crawling, we
measured the overall coverage of the crawler. We performed
these experiments with 5n random seed URLs and repeated
the experiments multiple times with different seed URLs. In
all of the runs, the results were essentially the same.
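The simulation just described can be sketched in a few lines of Python. This is our own simplified reconstruction, not the authors' simulator: graph maps each URL to its out-links, partition(url) is the site-hash assignment into n partitions, and seeds[p] are the seed URLs given to C-proc p.

    from collections import deque

    def firewall_coverage(graph, seeds, n, partition):
        visited = set()
        for p in range(n):
            queue = deque(seeds[p])
            while queue:
                url = queue.popleft()
                if url in visited:
                    continue
                visited.add(url)
                for link in graph.get(url, []):
                    # Firewall mode: follow intra-partition links only.
                    if partition(link) == p and link not in visited:
                        queue.append(link)
        return len(visited) / len(graph)   # coverage over the whole dataset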
In Figure 4, we summarize the results from the experiments
. The horizontal axis represents n, the number of
parallel
C-proc's, and the vertical axis shows the coverage of
the overall crawler for the given experiment. Note that the
coverage is only 0.9 even when n = 1 (a single-process). This
result is because the crawler in our experiment started with
only 5 URLs, while the actual dataset was collected with 1
million seed URLs. Thus, some of the 40 million pages were
unreachable from the 5 seed URLs.
From the figure it is clear that the coverage decreases as
the number of processes increases. This trend is because the
number of inter-partition links increases as the Web is split
into smaller partitions, and thus more pages are reachable
only through inter-partition links.
From this result we can see that we may run a crawler in a
firewall mode without much decrease in coverage with fewer
than 4
C-proc's. For example, for the 4-process case, the
coverage decreases only 10% from the single-process case.
At the same time, we can also see that the firewall mode
crawler becomes quite ineffective with a large number of
C-proc's
. Less than 10% of the Web can be downloaded when
64
C-proc's run together, each starting with 5 seed URLs.
Clearly, coverage may depend on the number of seed URLs
that each
C-proc starts with. To study this issue, we also
ran experiments varying the number of seed URLs, s, and
we show the results in Figure 5.
The horizontal axis in
the graph represents s, the total number of seed URLs that
the overall crawler used, and the vertical axis shows the
coverage for that experiment. For example, when s = 128,
the overall crawler used 128 total seed URLs, each
C-proc
starting with 2 seed URLs when 64
C-proc's ran in parallel.
We performed the experiments for 2, 8, 32, 64
C-proc cases
and plotted their coverage values. From this figure, we can
observe the following trends:
When a large number of C-proc's run in parallel (e.g.,
32 or 64), the total number of seed URLs affects the
coverage very significantly. For example, when 64 processes
run in parallel the coverage value jumps from
0.4% to 10% if the number of seed URLs increases
from 64 to 1024.
When only a small number of processes run in parallel
(e.g., 2 or 8), coverage is not significantly affected by
the number of seed URLs. While coverage increases
slightly as s increases, the improvement is marginal.
Based on these results, we draw the following conclusions:
1. When a relatively small number of
C-proc's are running
in parallel, a crawler using the firewall mode provides
good coverage. In this case, the crawler may start with
only a small number of seed URLs, because coverage
is not much affected by the number of seed URLs.
2. The firewall mode is not a good choice if the crawler
wants to download every single page on the Web. The
crawler may miss some portion of the Web, particularly
when it runs many
C-proc's in parallel.
Example 1. (Generic search engine) To illustrate how
our results could guide the design of a parallel crawler, consider
the following example. Assume that to operate a Web
search engine, we need to download 1 billion pages (Footnote 6) in one month.
Footnote 6: Currently the Web is estimated to have around 1 billion pages.
[Figure 6: Coverage vs. Overlap for a cross-over mode crawler -- overlap (0-3) vs. coverage (0.2-1) for n = 2, 4, 8, 16, 32, 64 C-proc's.]
Each machine that we plan to run our
C-proc's on
has 10 Mbps link to the Internet, and we can use as many
machines as we want.
Given that the average size of a web page is around 10K
bytes, we roughly need to download 10^4 × 10^9 = 10^13 bytes
in one month. This download rate corresponds to 34 Mbps,
and we need 4 machines (thus 4
C-proc's) to obtain the rate.
Given the results of our experiment (Figure 4), we may estimate
that the coverage will be at least 0.8 with 4
C-proc's.
Therefore, in this scenario, the firewall mode may be good
enough, unless it is very important to download the "entire"
Web.
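The arithmetic behind Example 1 can be checked directly; the short Python snippet below uses the paper's round figures (the result is near the quoted 34 Mbps, with the exact value depending on how a month and a kilobyte are rounded, and either way 4 machines suffice at 10 Mbps each):

    import math

    pages      = 1e9             # pages to download
    page_bytes = 10e3            # average page size, roughly 10 KB
    month_s    = 30 * 24 * 3600  # seconds in one month

    rate_mbps = pages * page_bytes * 8 / month_s / 1e6   # about 31 Mbps
    machines  = math.ceil(rate_mbps / 10)                # 10 Mbps per machine
    print(round(rate_mbps), machines)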
Example 2. (High freshness) As a second example, let us
now assume that we have strong "freshness" requirement on
the 1 billion pages and need to revisit every page once every
week, not once every month.
This new scenario requires
approximately 140 Mbps for page download, and we need to
run 14
C-proc's. In this case, the coverage of the overall
crawler decreases to less than 0.5 according to Figure 4. Of
course, the coverage could be larger than our conservative
estimate, but to be safe one would probably want to consider
using a crawler mode different than the firewall mode.
CROSS-OVER MODE AND OVERLAP
In this section, we study the effectiveness of a cross-over
mode crawler (Section 3, Item 2). A cross-over crawler may
yield improved coverage of the Web, since it follows inter-partition
links when a
C-proc runs out of URLs in its own
partition. However, this mode incurs overlap in downloaded
pages (Section 4, Item 1), because a page can be downloaded
by multiple
C-proc's. Therefore, the crawler increases its
coverage at the expense of overlap in the downloaded pages.
In Figure 6, we show the relationship between the coverage
and the overlap of a cross-over mode crawler obtained from
the following experiments. We partitioned the 40M pages
using site-hash partitioning and assigned them to n
C-proc's.
Each of the n
C-proc's then was given 5 random seed URLs
from its partition and followed links in the cross-over mode.
During this experiment, we measured how much overlap the
overall crawler incurred when its coverage reached various
points. The horizontal axis in the graph shows the coverage
at a particular time and the vertical axis shows the overlap
at the given coverage. We performed the experiments for
n = 2, 4, . . . , 64.
[Figure 7: Number of crawling processes vs. number of URLs exchanged per page -- communication overhead (0.5-3) vs. n (2-64) for the URL-hash and site-hash partitioning schemes.]
Note that in most cases the overlap stays at zero until the coverage becomes relatively large. For example, when n = 16, the overlap is zero until coverage reaches 0.5. We can understand
this result by looking at the graph in Figure 4. According
to that graph, a crawler with 16
C-proc's can cover
around 50% of the Web by following only intra-partition
links. Therefore, even a cross-over mode crawler will follow
only intra-partition links until its coverage reaches that
point. Only after that, each
C-proc starts to follow inter-partition
links, thus increasing the overlap. For this reason,
we believe that the overlap would have been much worse in
the beginning of the crawl, if we adopted the independent
model (Section 2). By applying the partitioning scheme to
C-proc's, we make each C-proc stay in its own partition in
the beginning and suppress the overlap as long as possible.
While the crawler in the cross-over mode is much better
than one based on the independent model, it is clear that the
cross-over crawler still incurs quite significant overlap. For
example, when 4
C-proc's run in parallel in the cross-over
mode, the overlap becomes almost 2.5 to obtain coverage
close to 1. For this reason, we do not recommend the crossover
mode, unless it is absolutely necessary to download
every page without any communication between
C-proc's.
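The cross-over behaviour measured above can be sketched as follows; this is our own simplified pseudo-implementation (class and attribute names are hypothetical), meant only to show why overlap stays low early in the crawl:

    from collections import deque

    class CrossoverCProc:
        def __init__(self, pid, seeds, graph, partition):
            self.pid, self.graph, self.partition = pid, graph, partition
            self.intra = deque(seeds)   # own-partition frontier
            self.inter = deque()        # foreign URLs, used only as a fallback
            self.seen = set()

        def step(self, downloads):
            # Prefer intra-partition URLs; cross over only when they run out.
            queue = self.intra or self.inter
            if not queue:
                return False
            url = queue.popleft()
            if url in self.seen:
                return True
            self.seen.add(url)
            downloads.append(url)       # duplicates across C-proc's become overlap
            for link in self.graph.get(url, []):
                target = self.intra if self.partition(link) == self.pid else self.inter
                target.append(link)
            return True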
EXCHANGE MODE AND COMMUNICATION
To avoid the overlap and coverage problems, an exchange
mode crawler (Section 3, Item 3) constantly exchanges inter-partition
URLs between
C-proc's. In this section, we study
the communication overhead (Section 4, Item 4) of an exchange
mode crawler and how much we can reduce it by
replicating the most popular k URLs. For now, let us assume
that a
C-proc immediately transfers inter-partition URLs.
(We will discuss batch communication later when we discuss
the quality of a parallel crawler.)
In the experiments, again, we split the 40 million pages
into n partitions based on site-hash values and ran n
C-proc's
in the exchange mode. At the end of the crawl, we
measured how many URLs had been exchanged during the
crawl. We show the results in Figure 7. In the figure, the
horizontal axis represents the number of parallel
C-proc's,
n, and the vertical axis shows the communication overhead
(the average number of URLs transferred per page). For
comparison purposes, the figure also shows the overhead for
a URL-hash based scheme, although the curve is clipped at
the top because of its large overhead values.
To explain the graph, we first note that an average page
has 10 out-links, and about 9 of them point to pages in
the same site. Therefore, the 9 links are internally followed
by a
C-proc under site-hash partitioning. Only the remaining
1 link points to a page in a different site and may be
exchanged between processes. Figure 7 indicates that this
URL exchange increases with the number of processes. For
example, the
C-proc's exchanged 0.4 URLs per page when
2 processes ran, while they exchanged 0.8 URLs per page
when 16 processes ran. Based on the graph, we draw the
following conclusions:
The site-hash based partitioning scheme significantly
reduces communication overhead, compared to the URL-hash
based scheme. We need to transfer only up to
one link per page (or 10% of the links), which is significantly
smaller than the URL-hash based scheme.
For example, when we ran 2 C-proc's, the crawler exchanged 5 links per page under the URL-hash based scheme, which was significantly larger than 0.5 links per page under the site-hash based scheme.
The network bandwidth used for the URL exchange is
relatively small, compared to the actual page download
bandwidth.
Under the site-hash based scheme,
at most 1 URL will be exchanged per page, which is
about 40 bytes (Footnote 7).
Given that the average size of a Web
page is 10 KB, the URL exchange consumes less than
40/10K = 0.4% of the total network bandwidth.
However, the overhead of the URL exchange on the
overall system can be quite significant. The processes
need to exchange up to one message per page, and the
message has to go through the TCP/IP network stack
at the sender and the receiver. Thus it is copied to and
from kernel space twice, incurring two context switches
between the kernel and the user mode. Since these operations
pose significant overhead even if the message
size is small, the overall overhead can be important if
the processes exchange one message per every downloaded
page.
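The gap between the two partitioning schemes can be estimated directly from link locality, as in the rough Python sketch below (the hash function and the use of the host name as the site are our own assumptions): with site-hash, only the roughly 1-in-10 cross-site links can ever cross a partition boundary.

    import hashlib
    from urllib.parse import urlsplit

    def url_hash(url, n):
        return int(hashlib.md5(url.encode()).hexdigest(), 16) % n

    def site_hash(url, n):
        return int(hashlib.md5(urlsplit(url).netloc.encode()).hexdigest(), 16) % n

    def urls_exchanged_per_page(links, n, scheme):
        # links holds (source_url, destination_url) pairs from downloaded pages.
        pages = {src for src, _ in links}
        crossing = sum(scheme(src, n) != scheme(dst, n) for src, dst in links)
        return crossing / len(pages)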
In the extended version of this paper [8], we also study how
much we can reduce this overhead by replication. In short,
our results indicate that we can get a significant reduction in communication cost (between 40% and 50%) when we replicate the most popular 10,000 to 100,000 URLs in each C-proc. When we replicated more URLs, the cost reduction was not as dramatic as for the first 100,000 URLs. Thus, we recommend replicating 10,000 to 100,000 URLs.
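A sketch of the replication idea (our own illustration; the counting and the choice of k are simplified):

    from collections import Counter

    def replicate_popular(backlink_counts, k=100_000):
        # Copy the k most-linked URLs to every C-proc, so that links pointing
        # to them never need to be exchanged between processes.
        return {url for url, _ in Counter(backlink_counts).most_common(k)}

    def needs_exchange(dst_url, my_partition, partition, replicated):
        return partition(dst_url) != my_partition and dst_url not in replicated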
QUALITY AND BATCH COMMUNICATION
As we discussed, the quality (Section 4, Item 3) of a
parallel crawler can be worse than that of a single-process
crawler, because each
C-proc may make crawling decisions
solely based on the information collected within its own partition
. We now study this quality issue. In the discussion we
also study the impact of the batch communication technique
(Section 3.1, Item 1) on quality.
Throughout the experiments in this section, we assume
that the crawler uses the number of backlinks to page p as
the importance of p, or I(p). That is, if 1000 pages on the Web have links to page p, the importance of p is I(p) = 1000.
Footnote 7: In our estimation, an average URL was about 40 bytes long.
[Figure 8: Crawlers downloaded 500K pages (1.2% of 40M). (a) URL exchange vs. Quality -- quality (0.025-0.2) vs. the number of URL exchanges x (0-1000) for 2, 8 and 64 processes. (b) URL exchange vs. Communication -- communication overhead (0.2-1.0) vs. x for 2, 8 and 64 processes.]
Clearly, there exist many other ways to define the importance
of a page, but we use this metric because it (or its
variations) is being used by some existing search engines [22,
13]. Also, note that this metric depends on the global structure
of the Web. If we use an importance metric that solely
depends on a page itself, not on the global structure of the
Web, the quality of a parallel crawler will be essentially the
same as that of a single crawler, because each
C-proc in a
parallel crawler can make good decisions based on the pages
that it has downloaded.
Under the backlink metric, each
C-proc in our experiments
counted how many backlinks a page has from the downloaded
pages and visited the page with the most backlinks
first. Remember that the
C-proc's need to periodically exchange
messages to inform others of the inter-partition backlinks
. Depending on how often they exchange messages, the
quality of the downloaded pages will differ. For example,
if
C-proc's never exchange messages, the quality will be the
same as that of a firewall mode crawler, and if they exchange
messages after every downloaded page, the quality will be
similar to that of a single-process crawler.
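As a concrete illustration of the batch backlink exchange, the following Python sketch (our own simplified code; class and method names are hypothetical) shows a C-proc accumulating inter-partition backlink counts during a batch and merging the [URL, count] pairs it receives from its peers:

    from collections import Counter

    class CProc:
        def __init__(self, partition_id, partition_of):
            self.partition_id = partition_id
            self.partition_of = partition_of   # maps a URL to its partition
            self.backlinks = Counter()         # backlink estimates for this partition
            self.outgoing = Counter()          # inter-partition counts for this batch

        def record_link(self, dst_url):
            if self.partition_of(dst_url) == self.partition_id:
                self.backlinks[dst_url] += 1   # intra-partition: count locally
            else:
                self.outgoing[dst_url] += 1    # inter-partition: defer to the batch message

        def end_of_batch(self):
            # One [URL, count] pair per distinct URL, however often it was seen.
            msgs, self.outgoing = dict(self.outgoing), Counter()
            return msgs

        def receive(self, msgs):
            # Merge counts reported by other C-proc's for pages in our partition.
            for url, count in msgs.items():
                self.backlinks[url] += count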
To study these issues, we compared the quality of the
downloaded pages when
C-proc's exchanged backlink messages
at various intervals and we show the results in Figures
8(a), 9(a) and 10(a). Each graph shows the quality
achieved by the overall crawler when it downloaded a total of
500K, 2M, and 8M pages, respectively. The horizontal axis
in the graphs represents the total number of URL exchanges
during a crawl, x, and the vertical axis shows the quality
for the given experiment. For example, when x = 1, the
C-proc's exchanged backlink count information only once in
the middle of the crawl. Therefore, the case when x = 0 represents
the quality of a firewall mode crawler, and the case when x approaches infinity shows the quality of a single-process crawler.
[Figure 9: Crawlers downloaded 2M pages (5% of 40M). (a) URL exchange vs. Quality -- quality (0.05-0.25) vs. the number of URL exchanges x for 2, 8 and 64 processes. (b) URL exchange vs. Communication -- communication overhead (0.25-1) vs. x for 2, 8 and 64 processes.]
In Figures 8(b), 9(b) and 10(b), we also show the communication
overhead (Section 4, Item 4); that is, the average
number of [URL, backlink count] pairs exchanged per a
downloaded page.
From these figures, we can observe the following trends:
As the number of crawling processes increases, the quality
of downloaded pages becomes worse, unless they exchange
backlink messages often. For example, in Figure
8(a), the quality achieved by a 2-process crawler
(0.12) is significantly higher than that of a 64-process
crawler (0.025) in the firewall mode (x = 0). Again,
this result is because each
C-proc learns less about
the global backlink counts when the Web is split into
smaller parts.
The quality of the firewall mode crawler (x = 0) is significantly worse than that of the single-process crawler (x approaching infinity) when the crawler downloads a relatively small
fraction of the pages (Figures 8(a) and 9(a)). However
, the difference is not very significant when the crawler downloads a relatively large fraction (Figure 10(a)). In other experiments, when the crawler
downloaded more than 50% of the pages, the difference
was almost negligible in any case. (Due to space limitations
, we do not show the graphs.) Intuitively, this
result makes sense because quality is an important issue
only when the crawler downloads a small portion
of the Web. (If the crawler will visit all pages anyway,
quality is not relevant.)
The communication overhead does not increase linearly as the number of URL exchanges increases. The
graphs in Figures 8(b), 9(b) and 10(b) are not straight
lines. This result is because a popular URL appears multiple times between backlink exchanges.
[Figure 10: Crawlers downloaded 8M pages (20% of 40M). (a) URL exchange vs. Quality -- quality (0.1-0.6) vs. the number of URL exchanges x for 2, 8 and 64 processes. (b) URL exchange vs. Communication -- communication overhead (0.1-0.5) vs. x for 2, 8 and 64 processes.]
Therefore, a popular URL can be transferred as one entry
(URL and its backlink count) in the exchange, even if
it appeared multiple times. This reduction increases
as
C-proc's exchange backlink messages less frequently.
One does not need a large number of URL exchanges
to achieve high quality. Through multiple experiments,
we tried to identify how often
C-proc's should exchange
backlink messages to achieve the highest quality value.
From these experiments, we found that a parallel
crawler can get the highest quality values even if the
processes communicate less than 100 times during a
crawl.
We use the following example to illustrate how one can
use the results of our experiments.
Example 3. (Medium-Scale Search Engine) Say we plan
to operate a medium-scale search engine, and we want to
maintain about 20% of the Web (200 M pages) in our index.
Our plan is to refresh the index once a month. The machines
that we can use have individual T1 links (1.5 Mbps) to the
Internet.
In order to update the index once a month, we need about
6.2 Mbps download bandwidth, so we have to run at least
5
C-proc's on 5 machines. According to Figure 10(a) (20%
download case), we can achieve the highest quality if the
C-proc's
exchange backlink messages 10 times during a crawl
when 8 processes run in parallel.
(We use the 8 process
case because it is the closest number to 5.) Also, from Figure
10(b), we can see that when
C-proc's exchange messages
10 times during a crawl they need to exchange fewer than
0.17 × 200M = 34M [URL, backlink count] pairs in total. Therefore, the total network bandwidth used by the backlink exchange is only (34M × 40)/(200M × 10K) ≈ 0.06% of
the bandwidth used by actual page downloads. Also, since
the exchange happens only 10 times during a crawl, the
context-switch overhead for message transfers (discussed in
Section 8) is minimal.
Note that in this scenario we need to exchange 10 backlink
messages in one month or one message every three days.
Therefore, even if the connection between
C-proc's is unreliable
or sporadic, we can still use the exchange mode without
any problem.
CONCLUSION
Crawlers are being used more and more often to collect
Web data for search engine, caches, and data mining. As
the size of the Web grows, it becomes increasingly important
to use parallel crawlers. Unfortunately, almost nothing
is known (at least in the open literature) about options for
parallelizing crawlers and their performance. Our paper addresses
this shortcoming by presenting several architectures
and strategies for parallel crawlers, and by studying their
performance. We believe that our paper offers some useful
guidelines for crawler designers, helping them, for example,
select the right number of crawling processes, or select the
proper inter-process coordination scheme.
In summary, the main conclusions of our study were the
following:
When a small number of crawling processes run in parallel
(in our experiment, 4 or fewer), the firewall mode
provides good coverage.
Given that firewall mode
crawlers can run totally independently and are easy
to implement, we believe that it is a good option to
consider. The cases when the firewall mode might not
be appropriate are:
1. when we need to run more than 4 crawling processes, or
2. when we download only a small subset of the Web and the quality of the downloaded pages is important.
A crawler based on the exchange mode consumes small
network bandwidth for URL exchanges (less than 1%
of the network bandwidth). It can also minimize other
overheads by adopting the batch communication technique
. In our experiments, the crawler could maximize
the quality of the downloaded pages, even if it exchanged
backlink messages fewer than 100 times during
a crawl.
By replicating between 10,000 and 100,000 popular
URLs, we can reduce the communication overhead by
roughly 40%. Replicating more URLs does not significantly
reduce the overhead.
REFERENCES
[1] T. E. Anderson, M. D. Dahlin, J. M. Neefe, D. A.
Patterson, D. S. Roselli, and R. Y. Wang. Serverless
network file systems. In Proc. of SOSP, 1995.
[2] A. Barabasi and R. Albert. Emergence of scaling in
random networks. Science, 286(509), 1999.
[3] A. Z. Broder, S. R. Kumar, F. Maghoul, P. Raghavan,
S. Rajagopalan, R. Stata, A. Tomkins, and J. L.
Wiener. Graph structure in the web. In Proc. of
WWW Conf., 2000.
[4] M. Burner. Crawling towards eternity: Building an archive of the World Wide Web. Web Techniques
Magazine, 2(5), May 1998.
[5] S. Chakrabarti, M. van den Berg, and B. Dom.
Focused crawling: A new approach to topic-specific
web resource discovery. In Proc. of WWW Conf., 1999.
[6] J. Cho and H. Garcia-Molina. The evolution of the
web and implications for an incremental crawler. In
Proc. of VLDB Conf., 2000.
[7] J. Cho and H. Garcia-Molina. Synchronizing a
database to improve freshness. In Proc. of SIGMOD
Conf., 2000.
[8] J. Cho and H. Garcia-Molina. Parallel crawlers.
Technical report, UCLA Computer Science, 2002.
[9] J. Cho, H. Garcia-Molina, and L. Page. Efficient
crawling through URL ordering. In Proc. of WWW
Conf., 1998.
[10] E. Coffman, Jr., Z. Liu, and R. R. Weber. Optimal
robot scheduling for web search engines. Technical
report, INRIA, 1997.
[11] M. Diligenti, F. M. Coetzee, S. Lawrence, C. L. Giles,
and M. Gori. Focused crawling using context graphs.
In Proc. of VLDB Conf., 2000.
[12] D. Eichmann. The RBSE spider: Balancing effective
search against web load. In Proc. of WWW Conf.,
1994.
[13] Google Inc. http://www.google.com.
[14] A. Heydon and M. Najork. Mercator: A scalable, extensible web crawler. World Wide Web, 2(4):219-229, December 1999.
[15] A. Heydon and M. Najork. High-performance web
crawling. Technical report, SRC Research Report, 173,
Compaq Systems Research Center, September 2001.
[16] D. Hirschberg. Parallel algorithms for the transitive
closure and the connected component problem. In
Proc. of STOC Conf., 1976.
[17] M. Koster. Robots in the web: threat or treat?
ConneXions, 4(4), April 1995.
[18] O. A. McBryan. GENVL and WWWW: Tools for
taming the web. In Proc. of WWW Conf., 1994.
[19] R. C. Miller and K. Bharat. SPHINX: a framework for
creating personal, site-specific web crawlers. In Proc.
of WWW Conf., 1998.
[20] D. Nassimi and S. Sahni. Parallel permutation and sorting algorithms and a new generalized connection network. Journal of the ACM, 29:642-667, July 1982.
[21] M. T. Ozsu and P. Valduriez. Principles of Distributed
Database Systems. Prentice Hall, 1999.
[22] L. Page and S. Brin. The anatomy of a large-scale
hypertextual web search engine. In Proc. of WWW
Conf., 1998.
[23] B. Pinkerton. Finding what people want: Experiences
with the web crawler. In Proc. of WWW Conf., 1994.
[24] Robots exclusion protocol. http://info.webcrawler.
com/mak/projects/robots/exclusion.html.
[25] A. S. Tanenbaum and R. V. Renesse. Distributed
operating systems. ACM Computing Surveys, 17(4),
December 1985.
[26] G. K. Zipf. Human Behaviour and the Principle of
Least Effort: an Introduction to Human Ecology.
Addison-Wesley, 1949.
| guideline;architecture;Parallelization;Web Spider;parallel crawler;Web Crawler;model evaluation |
147 | Performance Enhancing Proxy for Interactive 3G Network Gaming | Unlike non-time-critical applications like email and file transfer , network games demand timely data delivery to maintain the seemingly interactive presence of players in the virtual game world. Yet the inherently large transmission delay mean and variance of 3G cellular links make on-time game data delivery difficult. Further complicating the timely game data delivery problem is the frequent packet drops at these links due to inter-symbol interference, fading and shadowing at the physical layer. In this paper, we propose a proxy architecture that enhances the timeliness and reliability of data delivery of interactive games over 3G wireless networks. In particular, a performance enhancing proxy is designed to optimize a new time-critical data type -- variable-deadline data, where the utility of a datum is inversely proportional to the time required to deliver it. We show how a carefully designed and configured proxy can noticeably improve the delivery of network game data. | INTRODUCTION
While network gaming has long been projected to be an
application of massive economic growth, as seen in the recent
explosive development on the wired Internet in South Korea
and Japan, deployment of similar network games on 3G
wireless networks continues to be slow and difficult. One reason
is that unlike their wired counterparts, wireless links are
notoriously prone to errors due to channel fading, shadowing
and inter-symbol interference. While 3G wireless networks,
such as High Speed Downlink Packet Access (HSDPA) of
3rd Generation Partnership Project (3GPP) Release 5 (R5)
[1] and CDMA 1x EvDO of 3GPP2 [5], combat wireless
link failures at the MAC and physical layer with an elaborate
system of channel coding, retransmission, modulation
and spreading, with resulting packet loss rate being reduced
to negligible 1 to 2%, the detrimental side-effect to network
gaming is the large and often unpredictable transmission delay
mean and variance [15]. Such large and variable delays
greatly reduce the necessary interactivity of network game
players and deteriorate the overall gaming experience.
In a separate development, a new 3G network element
called IP Multimedia Subsystem (IMS) [3] has been introduced in 3GPP specifications R5 and later, as shown in Figure
1. The Session Initiation Protocol (SIP)-based IMS provides
a multitude of multimedia services: from establishing
connections from the legacy telephone networks to the new
IP core network using Voice over IP (VoIP), to delivering
streaming services such as video as a value-added service
to mobile users (UE). Strategically located as a pseudo-gateway
to the private and heavily provisioned 3G networks,
it is foreseeable that IMS will continue to enlarge and enrich
its set of multimedia services in future wireless networks.
In this paper, we propose a performance enhancing proxy
(PEP) called (W)ireless (I)nteractive (N)etwork (G)aming
Proxy (WING) to improve the timely delivery of network
game data in 3G wireless networks. WING is located inside
IMS as an application service on top of the myriad of
services that IMS already provides. In a nutshell, WING improves
the delivery of game data from the game server to 3G
wireless game players (Footnote 1) using the following three techniques.
First, by virtue of locating at the intersection of the private
wireless network and the open Internet, connection from the
game server to the wireless game player can be strategically split; for the server-WING connection, only the statistically stable and fast round-trip time (RTT) and low wired-network-only packet loss rate (PLR) are used for congestion
control, resulting in a steady yet TCP-friendly server-WING
connection. Second, by configuring parameters in the radio
link layer (RLC) specifically for gaming during session setup,
excessive RLC retransmissions are avoided, and timeliness
of game data is improved at the controlled expense of increased packet losses.
Finally, by constructing small but
error-resilient packets that contain location data, packets
can be transmitted in fewer MAC-layer protocol data units
(PDU), hence further reducing delay.
The paper is organized as follows. Related work is presented
in Section 2. We overview the 3G wireless system in
focus, HSDPA of 3GPP R5, in Section 3. Note that because
similar link and MAC layer transport optimizations that
chiefly affect delay mean and variance are also employed in
other 3G networks, our proposed WING can conceivably be
applied to other wireless networks such as CDMA 1x EvDO
of 3GPP2. We discuss the design of WING in detail in Section 4. Finally, experimental results and the conclusion are provided in Sections 5 and 6, respectively.
RELATED WORK
We divide the discussion of the large volume of related work into two sections. Section 2.1 discusses related research
on wireless transport optimization.
Section 2.2 discusses
related research in transport of network game data.
2.1
Wireless Transport Optimization
We note that proxy-based transport optimization for last-hop
wireless networks has a long history, with the majority
of the research [4, 15] focusing on optimization of TCP over
last-hop wireless networks. In particular, [15] showed that
while 3G network packet losses can indeed be successfully
overcome by using ample link layer retransmissions, the resulting
large RTT mean and variance may severely affect the
performance of a TCP-like congestion avoidance rate control
that is based on end-to-end observable statistics of RTT and
PLR. The limiting rate constraint and undesirable fluctuations
can be alleviated using a proxy with split-connection
-- a theme we develop in Section 4.2.
Recently, efforts on proxy design have shifted to delay-sensitive
multimedia transport [18, 13, 8, 9], though all of
them focused exclusively on streaming media, while we focus
on network gaming. Note that due to cited complexity
reason, a competing end-to-end approach for rate control
that does not rely on an intermediate proxy is popular as
well [17, 6]. However, we chose the proxy-based approach
and will juxtapose its advantages in Section 4.2.
Footnote 1: While a peer-to-peer model for interactive network games is also possible, in this paper we assume the more common server-client model, where the game server maintains and disseminates all game states.
2.2
Transport of Network Game Data
In [3], a general gaming platform for IMS that provides
network services needed for network gaming such as session
setup and registration is proposed to ease deployment over
3G networks. Our work is orthogonal to [3] since we focus
only on the efficient transport of game data.
An early work on gaming protocol is [10], which defined
a Game Transport Protocol (GTP) for massive multi-player
on-line games (MMPOGs) over the Internet. Our proposed
gaming proxy WING differs in the following respects: i) we
design WING specifically for lossy, bandwidth-limited networks
, hence focusing on design of network-optimized differential
coding to produce small but loss-resilient packets;
and, ii) we tailor WING for HSDPA of 3G wireless networks,
optimizing performance by intelligently configuring parameters
of the RLC layer.
The most similar related work is [12], which proposed an
end-to-end adaptive FEC and dynamic packetization algorithm
to combat packet losses due to wireless link failures
and reduce packet sizes. Unlike [12], our approach is proxy-based
, and we tailor our gaming optimization exclusively for
3G networks.
OVERVIEW OF UMTS RELEASE 5
HSDPA of UMTS Release 5, also known as 3.5G, improves
upon Release 4 with numerous lower-layer optimizations.
First, a shared channel is periodically scheduled to users
in the cell with good observable network conditions to take
advantage of user diversity during fading without sacrificing
fairness. Second, an elaborate MAC-layer scheme chooses an
appropriate combination of FEC, hybrid ARQ, modulation
and spreading based on client observable network state. In
this section, we instead focus on the RLC layer, where the
user has limited control over behavior using configuration of
parameters during session setup.
The Radio Link control (RLC) layer [1] buffers upper layer
service data units (SDU) on a per-session basis -- IP packets
in this case, and segments each SDU into smaller protocol
data units (PDU) of size S_PDU, which await transmission at
lower layers. There are three transmission modes: transparent
mode (TM), unacknowledged mode (UM) and acknowledged
mode (AM). Only AM performs link-layer retransmissions
if transmission in the lower layer fails. For error
resiliency, we focus only on AM. In particular, we look at
how SDUs are discarded in the RLC layer: using a method
of retransmission-based discard (RBD), an SDU can be discarded
before successful transmission. In a nutshell, an SDU
is discarded if a predefined maximum number of retransmissions
B has been reached before successful transmission of
a PDU belonging to the SDU. We will investigate how the
value B can be selected to trade off error resiliency with
delay in Section 4.3.
WING FOR WIRELESS INTERACTIVE NETWORK GAMING
Before we discuss the three optimizations of our proposed gaming proxy WING in detail in Sections 4.2, 4.3 and 4.4,
we first define a new type of transport data called variable
deadline data in Section 4.1 -- a consequence of a prediction
procedure used at a network game client to predict locations
of other game players in the virtual game world.
[Figure 2 plots, for the random walk and weighted random walk movement models: (a) distortion vs. delay (0-500 ms), and (b) utility vs. delay (0-500 ms).]
Figure 2: Examples of Dead-Reckoning
4.1 Variable Deadline Data Delivery
Unlike media streaming applications where a data unit
containing media data is fully consumed if it is correctly delivered
by a playback deadline and useless otherwise [9], the
usefulness (utility) of a game datum is inversely proportional
to the time it requires to deliver it. This relationship between
utility and transmission delay is the behavioral result
of a commonly used game view reconstruction procedure at
a game client called dead-reckoning [2]. It works as follows.
To maintain time-synchronized virtual world views among game players at time t_0, a player P_A predicts the location φ_{t_0} of another player P_B and draws it in P_A's virtual world at time t_0, extrapolating from previously received location updates of P_B in the past, φ_τ, τ < t_0. When location update φ_{t_0} arrives at P_A from P_B at a later time t_1, P_A updates its record of P_B's locations with (t_0, φ_{t_0}), in order to make an accurate prediction of (t_1, φ_{t_1}) for display in P_A's virtual world at time t_1. Regardless of what prediction method is used at the client, it is clear that a smaller transmission delay will in general induce a smaller prediction error. We term this type of data, with an inversely proportional relationship between quantifiable utility and delay, variable deadline data. We next show examples of how such a utility-delay curve u(d) can be derived in practice given a player movement model and a prediction method.
4.1.1 Examples of Dead-Reckoning
We first consider two simple movement models that model a game player in two-dimensional space (x, y). The first is random walk, where for each time increment t, the probability mass function (pmf) of the random variable of the x-coordinate, x_t, is defined as follows:

  p(x_{t+1} = x_t + 1) = 1/3
  p(x_{t+1} = x_t)     = 1/3
  p(x_{t+1} = x_t - 1) = 1/3                                        (1)

The random variable of the y-coordinate, y_t, is calculated similarly and is independent of x_t.
The second movement model is weighted random walk, whose pmf is defined as follows:

  p(x_{t+1} = x_t + ((x_t - x_{t-1} + 1) mod 2)) = 1/6
  p(x_{t+1} = x_t + (x_t - x_{t-1}))             = 2/3
  p(x_{t+1} = x_t + ((x_t - x_{t-1} - 1) mod 2)) = 1/6              (2)

In words, the player continues the same movement as in the previous instant with probability 2/3, and changes to one of the two other movements each with probability 1/6. The random variable of the y-coordinate, y_t, is calculated similarly.
We define a simple prediction method called 0th-order prediction as follows: each unknown x_t is simply set to the most recently updated x_τ. Using each of the two movement models in combination with this prediction method, we constructed distortion-delay curves experimentally, as shown in Figure 2a. As seen, 0th-order prediction is a better match to random walk than to weighted random walk, inducing a smaller distortion for all delay values. Utility u(d), shown in Figure 2b, is simply the reciprocal of distortion. Having derived
u(d) gives us a quantifiable metric on which we can
objectively evaluate game data transport systems.
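The curves in Figure 2 can be approximated with a short Monte-Carlo simulation. The sketch below is our own reconstruction, not the authors' code: it treats one coordinate only (the paper's model is 2-D with independent x and y), and the trial count and the use of mean absolute error as the distortion measure are assumptions.

    import random

    def random_walk_step(x_prev, x_cur):
        return x_cur + random.choice((-1, 0, 1))

    def weighted_walk_step(x_prev, x_cur):
        d = x_cur - x_prev
        r = random.random()
        if r < 2 / 3:
            step = d                  # keep the previous movement, as in Eq. (2)
        elif r < 5 / 6:
            step = (d + 1) % 2
        else:
            step = (d - 1) % 2
        return x_cur + step

    def distortion(step_fn, delay_steps, trials=10_000):
        # 0th-order prediction: the receiver keeps drawing the last update it
        # received, which is delay_steps time increments old.
        err = 0.0
        for _ in range(trials):
            prev = cur = last_update = 0
            for _ in range(delay_steps):
                prev, cur = cur, step_fn(prev, cur)
            err += abs(cur - last_update)
        return err / trials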
4.2 Proxy-based Congestion Control
We argue that by locating WING between the open wired
Internet and the provisioned wireless networks to conduct
split-connection data transfer, stable TCP-friendly congestion
control can be maintained on top of UDP in the wired
server-WING connection. Traditional congestion control algorithms
like TCP-friendly Rate Control (TFRC) [11] space
outgoing packets with interval T_cc as a function of the estimated packet loss rate (PLR) ε_cc, RTT mean m_cc and RTT variance σ_cc² due to wired network congestion:

  T_cc = m_cc √(2 ε_cc / 3) + 3 (m_cc + 4 σ_cc²) ε_cc (1 + 32 ε_cc²) √(3 ε_cc / 8)    (3)
Past end-to-end efforts [17, 6] have focused on methodologies
to distinguish wired network congestion losses from
wireless link losses, in order to avoid unnecessary rate reduction
due to erroneous perception of wireless losses as
network congestion. Split connection offers the same effect
regarding PLR by completely shielding sender from packet
losses due to wireless link failures. Moreover, by performing
TFRC (3) in the server-WING connection using only stable
wired network statistics, split connection shields the server-WING
connection from large rate fluctuations due to large
RTT variance in the last-hop 3G link as shown in [15]. For
this reason, [15] showed experimentally that indeed proxy-based
split-connection congestion control performs better
than end-to-end counterparts, even in negligible wireless loss
environments.
Lastly, we note that split connection can benefit from a
rate-mismatch environment [8, 9], where the available bandwidth R_1 in the server-WING connection is larger than the available bandwidth R_2 in the WING-client connection. In such a case, the surplus bandwidth R_1 - R_2 can be used for redundancy packets, such as forward-error correction (FEC) or retransmissions, to lower the PLR in the server-WING connection. We refer interested readers to [8, 9] for further details.
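Eq. (3) transcribes directly into code; the snippet below is a plain transcription (the numeric inputs in the example call are arbitrary, not measurements from the paper):

    from math import sqrt

    def tfrc_interval(eps, rtt_mean, rtt_var):
        # Packet spacing T_cc from Eq. (3): loss rate eps, RTT mean and RTT
        # variance measured on the wired server-WING connection only.
        return (rtt_mean * sqrt(2 * eps / 3)
                + 3 * (rtt_mean + 4 * rtt_var) * eps * (1 + 32 * eps ** 2)
                * sqrt(3 * eps / 8))

    print(tfrc_interval(0.01, 0.095, 0.0002))   # e.g. 95 ms RTT, 1% loss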
4.3 Optimizing RLC Configuration
Given the utility-delay function u(d) in Section 4.1, we optimize the configuration of RLC to maximize utility. More precisely, we pick the value of the maximum retransmission limit B -- inducing expected SDU loss rate l* and delay d* -- so that the expected utility (1 - l*) u(d*) is maximized.
We assume a known average SDU size S_SDU, PDU loss rate ε_PDU, and probability density function (pdf) of PDU transmission delay Φ(δ) with mean m_Φ and variance σ_Φ².
First, the expected number of PDUs fragmented from an SDU is N = ⌈S_SDU / S_PDU⌉. For a given B, the expected SDU loss rate l_SDU can be written simply:

  P_PDU = Σ_{i=1}^{B} ε_PDU^{i-1} (1 - ε_PDU)                        (4)
  l_SDU = 1 - (P_PDU)^N                                              (5)

where P_PDU is the probability that a PDU is successfully delivered given B.
The delay d_SDU experienced by a successfully delivered SDU is the sum of the queuing delay d^q_SDU and the transmission delay d^t_SDU. Queuing delay d^q_SDU is the delay experienced by an SDU while waiting for head-of-queue SDUs to clear due to early termination or delivery success. d^t_SDU is the expected wireless-medium transmission delay given that the SDU is successfully delivered. d^t_SDU is easier and can be calculated as:

  X_PDU = (1 / P_PDU) Σ_{i=1}^{B} i ε_PDU^{i-1} (1 - ε_PDU)          (6)
  d^t_SDU = N m_Φ X_PDU                                              (7)

where X_PDU is the expected number of PDU (re)transmissions given PDU delivery success. To calculate d^q_SDU, we assume an M/G/1 queue (Footnote 2) with arrival rate λ_q, mean service time m_q, and variance of service time σ_q². Using the Pollaczek-Khinchin mean value formula [14], d^q_SDU can be written as:

  d^q_SDU = [λ_q m_q (1 + σ_q² / m_q²)] / [2 (1 - λ_q m_q)] × m_q    (8)

In our application, λ_q is the rate at which game data arrive at WING from the server, which we assume to be known. m_q is the mean service time covering both cases of SDU delivery success and failure, and can be derived as follows:

  Y_SDU = (1 / l_SDU) Σ_{i=1}^{N} (B + (i - 1) X_PDU) ε_PDU^B P_PDU^{i-1}    (9)
  m_q = (1 - l_SDU) d^t_SDU + l_SDU m_Φ Y_SDU                               (10)

where Y_SDU is the expected total number of PDU (re)transmissions in an SDU given SDU delivery failure. Similar analysis shows that the variance of the service time σ_q² for our application is:

  σ_q² = (1 - l_SDU) N² X_PDU² σ_Φ² + l_SDU Y_SDU² σ_Φ²              (11)

We can now evaluate the expected queuing delay d^q_SDU, from which we evaluate the expected delay d_SDU. The optimal B* is the one that maximizes (1 - l_SDU) u(d_SDU).
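Equations (4)-(11) translate almost line for line into code. The sketch below is our own; the utility function u and the numeric inputs are placeholders, and it assumes a stable queue (λ_q m_q < 1).

    def expected_utility(B, eps, N, m_phi, var_phi, lam_q, u):
        # Eq. (4)-(5): per-PDU success probability and SDU loss rate.
        p_pdu = sum(eps ** (i - 1) * (1 - eps) for i in range(1, B + 1))
        l_sdu = 1 - p_pdu ** N
        # Eq. (6)-(7): transmissions per delivered PDU, SDU transmission delay.
        x_pdu = sum(i * eps ** (i - 1) * (1 - eps) for i in range(1, B + 1)) / p_pdu
        d_t = N * m_phi * x_pdu
        # Eq. (9)-(11): mean and variance of the SDU service time.
        y_sdu = (sum((B + (i - 1) * x_pdu) * eps ** B * p_pdu ** (i - 1)
                     for i in range(1, N + 1)) / l_sdu) if l_sdu > 0 else 0.0
        m_q = (1 - l_sdu) * d_t + l_sdu * m_phi * y_sdu
        var_q = (1 - l_sdu) * (N * x_pdu) ** 2 * var_phi + l_sdu * y_sdu ** 2 * var_phi
        # Eq. (8): Pollaczek-Khinchin queuing delay, then total delay.
        d_q = lam_q * m_q * (1 + var_q / m_q ** 2) / (2 * (1 - lam_q * m_q)) * m_q
        return (1 - l_sdu) * u(d_q + d_t)

    def best_retransmission_limit(candidates, **kw):
        return max(candidates, key=lambda B: expected_utility(B, **kw))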
4.4 Loss-optimized Differential Coding
If the location data -- player position updates sent to
improve dead-reckoning discussed in Section 4.1 -- are in
absolute values, then the size of the packet containing the
data can be large, resulting in large delay due to many PDU
fragmentation and spreading. The alternative is to describe
the location in relative terms -- the difference in the location
from a previous time slot. Differential values are smaller, resulting
in fewer encoded bits and smaller packets, and hence
Footnote 2: Our system is actually more similar to a D/G/1 queue, since the arrivals of game data are more likely to be deterministic than Markovian. Instead, we use the M/G/1 queue as a first-order approximation.
[Figure 3 sketch: positions 1-4 and an ACK boundary, with position 3 referencing position 1 for differential coding.]
Figure 3: Example of Differential Coding
mode | mode marker | ref. size | coord. size | total
0    | 00          | 0         | 32          | 2 + 64n
1    | 01          | 0         | 16          | 2 + 32n
2    | 10          | 2         | 8           | 4 + 16n
3    | 11          | 4         | 4           | 6 + 8n
Table 1: Differential Coding Modes
smaller transmission delay. This differential coding of location
data is used today in networked games.
The obvious disadvantage of differential coding is that the
created dependency chain is vulnerable to network loss; a
single loss can result in error propagation until the next
absolute location data (refresh).
To lessen the error propagation effect while maintaining
the coding benefit of differential coding, one can reference a
position in an earlier time slot. An example is shown in Figure
3, where we see position 3 (φ_3) references φ_1 instead of φ_2. This way, loss of the packet containing φ_2 will not affect φ_3, which depends only on φ_1. The problem is then: for a new position φ_t, how do we select the reference position φ_{t-r} for differential coding such that the right tradeoff between error resilience and packet size is achieved? This selection must be done in an on-line manner as each new position becomes available from the application, to avoid additional delay.
4.4.1 Specifying Coding Modes
To implement loss-optimized differential coding, we first
define a coding specification that dictates how the receiver
should decode location packets. For simplicity, we propose
only four coding modes, where each mode is specified by a
designated bit sequence (mode marker) in the packet. Assuming
the original absolute position is specified by two
32-bit fixed point numbers, mode 0 encodes the unaltered
absolute position in x-y order, resulting in data payload size
of 2 + 64n bits for n game entities. Mode 1 uses the previous
position as reference for differential encoding with 16
bits per coordinate, resulting in 2 + 32n bits for n entities.
Mode 2 uses the first 2 bits to specify r in reference position
t - r for differential encoding. Each coordinate takes 8 bits,
resulting in 4 + 16n total bits for n entities. Mode 3 is similar
to mode 2 with the exception that each of the reference
marker and the two coordinates takes only 4 bits to encode,
resulting in 6 + 8n bits for n entities.
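The payload sizes in Table 1 follow a simple pattern; the small helper below (ours) makes the trade-off explicit and reproduces the totals in the table:

    def payload_bits(mode, n):
        # 2-bit mode marker, a per-packet reference field, and two fixed-point
        # coordinates per entity; field widths shrink as the mode number grows.
        marker_bits = 2
        ref_bits    = (0, 0, 2, 4)[mode]
        coord_bits  = (32, 16, 8, 4)[mode]
        return marker_bits + ref_bits + n * 2 * coord_bits

    print([payload_bits(m, 4) for m in range(4)])   # n = 4 entities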
For a given position φ_t = (x_t, y_t) and reference φ_{t-r} = (x_{t-r}, y_{t-r}), some modes may be infeasible due to the fixed coding bit budgets for the reference and coordinate sizes. So, limited to the set of feasible modes, we seek a reference position / mode pair that maximizes an objective function.
                      | PLR | RTT mean   | RTT variance
Tokyo-Singapore (50)  | 0   | 94.125 ms  | 178.46
Tokyo-Singapore (100) | 0   | 95.131 ms  | 385.30
Tokyo-Singapore (200) | 0   | 96.921 ms  | 445.63
HSDPA (50)            | 0   | 62.232 ms  | 7956.8
HSDPA (100)           | 0   | 72.547 ms  | 25084
HSDPA (200)           | 0   | 152.448 ms | 143390
Table 2: Comparison of Network Statistics
4.4.2 Finding Optimal Coding Modes
For an IP packet of size s_t containing position φ_t that is sent at time t, we first define the probability that it is correctly delivered by time τ as q_t(τ). q_t(τ) depends on the expected PLR l(s_t) and delay d(s_t), resulting from the retransmission limit B chosen in Section 4.3:

  N(s_t) = ⌈s_t / S_PDU⌉                                             (12)
  l(s_t) = 1 - (P_PDU)^{N(s_t)}                                      (13)
  d(s_t) = d^q_SDU + N(s_t) m_Φ X_PDU                                (14)

where N(s_t) is the number of PDUs fragmented from an SDU of size s_t, l(s_t) is the PLR in (5) generalized to SDU size s_t, and d(s_t) is the expected queuing delay in (8) plus the transmission delay in (7) generalized to SDU size s_t. We can now approximate q_t(τ) as:

  q_t(τ) ≈ 1,                               if ACKed by τ
           (1 - l(s_t)) 1(τ - t - d(s_t)),  otherwise                (15)

where 1(x) = 1 if x ≥ 0, and 0 otherwise. If no acknowledgment packets (ACK) are sent from the client to WING, then q_t(τ) is simply the second case in (15).
We next define the probability that position φ_t is correctly decoded by time τ as P_t(τ). Due to the dependencies resulting from differential coding, P_t(τ) is written as follows:

  P_t(τ) = q_t(τ) Π_{j ≺ t} q_j(τ)                                   (16)

where j ≺ t denotes the set of positions j that precede t in the dependency graph due to differential coding.
Given the utility function u(d) in Section 4.1 and the decode probability (16), the optimal reference position / mode pair is the one that maximizes the following objective function:

  max P_t(t + d(s_t)) u(d(s_t))                                      (17)
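The selection in Eq. (17) amounts to an exhaustive search over the feasible (reference, mode) pairs. The code below is our own illustration: l_of_size, d_of_size and u stand in for Eq. (13), Eq. (14) and the utility curve, and the data structures for already-sent packets are hypothetical.

    def delivery_prob(size, elapsed, acked, l_of_size, d_of_size):
        # Eq. (15): an ACKed packet is certain; otherwise it is delivered with
        # probability 1 - l(s), provided enough time has elapsed.
        if acked:
            return 1.0
        return (1.0 - l_of_size(size)) if elapsed >= d_of_size(size) else 0.0

    def choose_reference_and_mode(t, feasible, packets, l_of_size, d_of_size, u):
        # feasible lists (ref, mode, size, deps) options for position t, where
        # deps are indices of earlier packets the encoding would depend on;
        # packets[j] = (size_j, acked_j, send_time_j) for already-sent packets.
        def score(option):
            ref, mode, size, deps = option
            d = d_of_size(size)
            tau = t + d
            p = delivery_prob(size, d, False, l_of_size, d_of_size)
            for j in deps:                                       # Eq. (16)
                s_j, acked_j, t_j = packets[j]
                p *= delivery_prob(s_j, tau - t_j, acked_j, l_of_size, d_of_size)
            return p * u(d)                                      # Eq. (17)
        return max(feasible, key=score)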
EXPERIMENTATION
We first present network statistics for HSDPA and discuss
the implications.
We collected network statistics of
10,000 ping packets, of packet size 50, 100 and 200 bytes,
spaced 200ms apart, between hosts in Tokyo and Singapore
inside HP intranet. The results are shown in Table 2. We
then conducted the same experiment over a network emulator called WiNe2 [16] emulating the HSDPA link with 10
competing ftp users each with mobility model Pedestrian
A. We make two observations in Table 2. One, though results
from both experiments had similar RTT means, HSDPA's
RTT variances were very large, substantiating our
claim that using split-connection to shield the server-WING
connection from HSDPA's RTT variance would drastically
improve TFRC bandwidth (3) of server-WING connection.
number of entities n                | 4
frame rate                          | 10 fps
IP + UDP header                     | 20 + 8 bytes
RLC PDU size                        | 40 bytes
RLC PDU loss rate                   | 0.1 to 0.3
average packet size                 | 61 bytes
shifted Gamma parameters (Eq. 18)   | 2, 0.1, 10.0
Table 3: Simulation Parameters
[Figure 4: Delay and Utility vs. Retransmission Limit -- (a) expected delay in ms (50-400) vs. retransmission limit B (1-10), and (b) expected utility (0.2-0.38) vs. B, for PDU loss rates ε = 0.1, 0.2, 0.3.]
Two, larger packets entailed larger RTT means for HSDPA.
This means that the differential coding discussed in Section
4.4 indeed has substantial performance improvement potential
.
We next used an internally developed network simulator
called (mu)lti-path (n)etwork (s)imulator (muns) that was
used in other simulations [7] to test RLC configurations and
differential coding. For the PDU transmission delay pdf Φ(δ), we used a shifted Gamma distribution:

  Φ(δ) = λ_g^{α_g} (δ - κ_g)^{α_g - 1} e^{-λ_g (δ - κ_g)} / Γ(α_g),    κ_g < δ < ∞    (18)

where Γ(·) is the Gamma function [14]. The parameters used are shown in Table 3.
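For completeness, PDU delays with a shifted Gamma density as in Eq. (18) can be sampled as below. This is our own sketch; in particular, mapping the three tabulated values in Table 3 to shape 2, rate 0.1 and shift 10 ms is an assumption, since the original parameter symbols are not recoverable here.

    import random

    def sample_pdu_delay(shape=2.0, rate=0.1, shift=10.0):
        # Shift kappa_g plus a Gamma(alpha_g, 1/lambda_g) variate; with these
        # assumed values the mean PDU delay is 10 + 2/0.1 = 30 ms.
        return shift + random.gammavariate(shape, 1.0 / rate)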
Figure 4 shows the expected delay and utility as a function
of retransmission limit B for different PDU loss rates. As
expected, when B increases, the expected delay increases.
The expected utility, on the other hand, reaches a peak and
decreases. For given PDU loss rate, we simply select B with
the largest expected utility.
Next, we compare the results of our loss-optimized differential
coding optimization opt in Section 4.4 with two
schemes: abs, which always encodes in absolute values; and,
rel, which uses only previous frame for differential coding
and refreshes with absolute values every 10 updates.

ε_PDU | B | abs(1) | abs(B) | rel(1) | rel(B) | opt
0.10  | 2 | 1.181  | 1.134  | 1.806  | 1.154  | 1.070
0.15  | 3 | 1.222  | 1.166  | 2.288  | 1.108  | 1.073
0.20  | 2 | 1.290  | 1.192  | 2.619  | 1.380  | 1.086
0.25  | 2 | 1.356  | 1.232  | 3.035  | 1.568  | 1.090
0.30  | 2 | 1.449  | 1.268  | 3.506  | 1.750  | 1.110
0.35  | 2 | 1.509  | 1.300  | 3.556  | 2.054  | 1.121
Table 4: Distortion Comparison

abs
represents the most error resilient coding method in differential
coding, while rel represents a reasonably coding-efficient
method with periodical resynchronization. Note,
however, that neither abs nor rel adapts differential coding
in real time using client feedbacks.
abs and rel were each tested twice.
In the first trial,
limit B was set to 1, and in the second, B was set to the
optimal configured value as discussed in Section 4.3. 20000
data points were generated and averaged for each distortion
value in Table 4. As we see in Table 4 for various PDU
loss rate
P DU
, the resulting distortions for opt were always
lower than abs's and rel's, particularly for high PDU loss
rates. opt performed better than rel because of opt's error
resiliency of loss-optimized differential coding, while opt
performed better than abs because opt's smaller packets induced a smaller queuing delay and a smaller transmission
delay due to smaller number of RLC fragmentations. This
demonstrates that it is important not only to find an optimal
RLC configuration, but a suitable differential coding
scheme to match the resulting loss rate and delay of the
configuration.
CONCLUSION
We propose a performance enhancing proxy called WING
to improve the delivery of game data from a game server to
3G game players using three techniques: i) split-connection
TCP-friendly congestion control, ii) network game optimized
RLC configuration, and, iii) packet compression using differential
coding. In the future, we will investigate how similar techniques can be applied to the 3G uplink from game player to game server.
ACKNOWLEDGMENTS
The authors thank other members of the multimedia systems
architecture team, Yasuhiro Araki and Takeaki Ota,
for their valuable comments and discussions.
| Wireless Networks;3G wireless network;time critical data;Network Gaming;congestion control;loss-optimized;RLC configuration;proxy architecture |
148 | Physically-Based Visual Simulation on Graphics Hardware | In this paper, we present a method for real-time visual simulation of diverse dynamic phenomena using programmable graphics hardware. The simulations we implement use an extension of cellular automata known as the coupled map lattice (CML). CML represents the state of a dynamic system as continuous values on a discrete lattice. In our implementation we store the lattice values in a texture, and use pixel-level programming to implement simple next-state computations on lattice nodes and their neighbors. We apply these computations successively to produce interactive visual simulations of convection, reaction-diffusion, and boiling. We have built an interactive framework for building and experimenting with CML simulations running on graphics hardware, and have integrated them into interactive 3D graphics applications. | Introduction
Interactive 3D graphics environments, such as games, virtual
environments, and training and flight simulators are
becoming increasingly visually realistic, in part due to the
power of graphics hardware. However, these scenes often
lack rich dynamic phenomena, such as fluids, clouds, and
smoke, which are common to the real world.
A recent approach to the simulation of dynamic
phenomena, the coupled map lattice
[Kaneko 1993]
, uses a
set of simple local operations to model complex global
behavior. When implemented using computer graphics
hardware, coupled map lattices (CML) provide a simple, fast
and flexible method for the visual simulation of a wide
variety of dynamic systems and phenomena.
In this paper we will describe the implementation of
CML systems with current graphics hardware, and
demonstrate the flexibility and performance of these systems
by presenting several fast interactive 2D and 3D visual
simulations. Our CML boiling simulation runs at speeds
ranging from 8 iterations per second for a 128x128x128
lattice to over 1700 iterations per second for a 64x64 lattice.
Section 2 describes CML and other methods for
simulating natural phenomena. Section 3 details our
implementation of CML simulations on programmable
graphics hardware, and Section 4 describes the specific
simulations we have implemented. In Section 5 we discuss
limitations of current hardware and investigate some
solutions. Section 6 concludes.
CML and Related Work
The standard approach to simulating natural phenomena is to
solve equations that describe their global behavior. For
example, multiple techniques have been applied to solving
the Navier-Stokes fluid equations
[Fedkiw, et al. 2001;Foster
and Metaxas 1997;Stam 1999]
. While their results are
typically numerically and visually accurate, many of these
simulations require too much computation (or small lattice
sizes) to be integrated into interactive graphics applications
such as games. CML models, instead of solving for the
global behavior of a phenomenon, model the behavior by a
number of very simple local operations. When aggregated,
these local operations produce a visually accurate
approximation to the desired global behavior.
Figure 1: 3D coupled map lattice simulations running on
graphics hardware. Left: Boiling. Right: Reaction-Diffusion
.
A coupled map lattice is a mapping of continuous
dynamic state values to nodes on a lattice that interact (are
`coupled') with a set of other nodes in the lattice according
to specified rules. Coupled map lattices were developed by
Kaneko for the purpose of studying spatio-temporal
dynamics and chaos
[Kaneko 1993]
. Since their introduction,
CML techniques have been used extensively in the fields of
physics and mathematics for the simulation of a variety of
phenomena, including boiling
[Yanagita 1992]
, convection
[Yanagita and Kaneko 1993]
, cloud formation
[Yanagita and
Kaneko 1997]
, chemical reaction-diffusion
[Kapral 1993]
, and
the formation of sand ripples and dunes
[Nishimori and Ouchi
1993]
. CML techniques were recently introduced to the field
of computer graphics for the purpose of cloud modeling and
animation
[Miyazaki, et al. 2001]
. Lattice Boltzmann
computation is a similar technique that has been used for
simulating fluids, particles, and other classes of phenomena
[Qian, et al. 1996]
.
A CML is an extension of a cellular automaton (CA)
[Toffoli and Margolus 1987;von Neumann 1966;Wolfram 1984]
in which the discrete state values of CA cells are replaced
with continuous real values. Like CA, CML are discrete in
space and time and are a versatile technique for modeling a
wide variety of phenomena. Methods for animating cloud
formation using cellular automata were presented in
[Dobashi, et al. 2000;Nagel and Raschke 1992]
. Discrete-state
automata typically require very large lattices in order to
simulate real phenomena, because the discrete states must be
filtered in order to compute real values. By using
continuous-valued state, a CML is able to represent real
physical quantities at each of its nodes.
While a CML model can certainly be made both
numerically and visually accurate
[Kaneko 1993]
, our
implementation on graphics hardware introduces precision
constraints that make numerically accurate simulation
difficult. Therefore, our goal is instead to implement
visually accurate simulation models on graphics hardware, in
the hope that continuing improvement in the speed and
precision of graphics hardware will allow numerically
accurate simulation in the near future.
The systems that have been found to be most amenable to
CML implementation are multidimensional initial-value
partial differential equations. These are the governing
equations for a wide range of phenomena from fluid
dynamics to reaction-diffusion. Based on a set of initial
conditions, the simulation evolves forward in time. The only
requirement is that the equation must first be explicitly
discretized in space and time, which is a standard
requirement for conventional numerical simulation. This
flexibility means that the CML can serve as a model for a
wide class of dynamic systems.
2.1 A CML Simulation Example
To illustrate CML, we describe the boiling simulation of
[Yanagita 1992]
. The state of this simulation is the
temperature of a liquid. A heat plate warms the lower layer
of liquid, and temperature is diffused through the liquid. As
the temperature reaches a threshold, the phase changes and
"bubbles" of high temperature form. When phase changes
occur, newly formed bubbles absorb latent heat from the
liquid around them, and temperature differences cause them
to float upward under buoyant force.
Yanagita implements this global behavior using four local
CML operations: diffusion, phase change, buoyancy, and
latent heat. Each of these operations can be written as a
simple equation. Figures 1, 2 and 7 (see color plate) show this
simulation running on graphics hardware, and Section 4.1
gives details of our implementation. We will use this
simulation as an example throughout this paper.
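To make this iteration structure concrete, the following is a minimal CPU-side sketch (not the paper's GPU code) of chaining CML operations; the operation list for boiling would hold the four operations named above, whose bodies are omitted here and whose names and types are illustrative.

```cpp
// Minimal sketch of a CML iteration: successive application of simple local
// operations to a lattice of continuous state values. The bodies of the four
// boiling operations (diffusion, phase change, buoyancy, latent heat) are not
// shown; names and types here are illustrative only.
#include <functional>
#include <vector>

using Lattice = std::vector<float>;                          // W x H, row-major
using Operation = std::function<void(Lattice&, int, int)>;   // updates in place

void runIteration(Lattice& state, int W, int H,
                  const std::vector<Operation>& ops) {
    for (const Operation& op : ops)
        op(state, W, H);   // each op reads a node's neighbors and writes new state
}
```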
Hardware Implementation
Graphics hardware is an efficient processor of images: it
can use texture images as input, and outputs images via
rendering. Images (arrays of values) map well to state
values on a lattice. Two-dimensional lattices can be
represented by 2D textures, and 3D lattices by 3D textures or
collections of 2D textures. This natural correspondence, as
well as the programmability and performance of graphics
hardware, motivated our research.
3.1 Why Graphics Hardware?
Our primary reason to use graphics hardware is its speed at
imaging operations compared to a conventional CPU. The
CML models we have implemented are very fast, making
them well suited to interactive applications (See Section 4.1).
GPUs were designed as efficient coprocessors for
rendering and shading. The programmability now available
in GPUs such as the NVIDIA GeForce 3 and 4 and the ATI
Radeon 8500 makes them useful coprocessors for more
diverse applications. Since the time between new
generations of GPUs is currently much less than for CPUs,
faster coprocessors are available more often than faster
central processors. GPU performance tracks rapid
improvements in semiconductor technology more closely
than CPU performance. This is because CPUs are designed
for high performance on sequential operations, while GPUs
are optimized for the high parallelism of vertex and fragment
processing
[Lindholm, et al. 2001]
. Additional transistors can therefore be used to greater effect in GPU architectures.
Figure 2: A sequence of stills (10 iterations apart) from a 2D boiling simulation running on graphics hardware.
In
addition, programmable GPUs are inexpensive, readily
available, easily upgradeable, and compatible with multiple
operating systems and hardware architectures.
More importantly, interactive computer graphics
applications have many components vying for processing
time. Often it is difficult to efficiently perform simulation,
rendering, and other computational tasks simultaneously
without a drop in performance. Since our intent is visual
simulation, rendering is an essential part of any solution. By
moving simulation onto the GPU that renders the results of a
simulation, we not only reduce computational load on the
main CPU, but also avoid the substantial bus traffic required
to transmit the results of a CPU simulation to the GPU for
rendering. In this way, methods of dynamic simulation on
the GPU provide an additional tool for load balancing in
complex interactive applications.
Graphics hardware also has disadvantages. The main
problems we have encountered are the difficulty of
programming the GPU and the lack of high precision
fragment operations and storage. These problems are related
programming difficulty is increased by the effort required
to ensure that precision is conserved wherever possible.
These issues should disappear with time. Higher-level
shading languages have been introduced that make hardware
graphics programming easier
[Peercy, et al. 2000;Proudfoot, et
al. 2001]
. The same or similar languages will be usable for
programming simulations on graphics hardware. We believe
that the precision of graphics hardware will continue to
increase, and with it the full power of programmability will
be realised.
3.2 General-Purpose Computation
The use of computer graphics hardware for general-purpose
computation has been an area of active research for many
years, beginning on machines like the Ikonas
[England 1978]
,
the Pixel Machine
[Potmesil and Hoffert 1989]
and Pixel-Planes
5
[Rhoades, et al. 1992]
. The wide deployment of
GPUs in the last several years has resulted in an increase in
experimental research with graphics hardware.
[Trendall and
Steward 2000]
gives a detailed summary of the types of
computation available on modern GPUs.
Within the realm of graphics applications, programmable
graphics hardware has been used for procedural texturing
and shading
[Olano and Lastra 1998; Peercy, et al. 2000;
Proudfoot, et al. 2001; Rhoades, et al. 1992]
. Graphics
hardware has also been used for volume visualization
[Cabral, et al. 1994]
. Recently, methods for using current and
near-future GPUs for ray tracing computations have been
described in
[Carr, et al. 2002]
and
[Purcell, et al. 2002]
,
respectively.
Other researchers have found ways to use graphics
hardware for non-graphics applications. The use of
rasterization hardware for robot motion planning is described
in
[Lengyel, et al. 1990]
.
[Hoff, et al. 1999]
describes the use
of z-buffer techniques for the computation of Voronoi
diagrams. The PixelFlow SIMD graphics computer
[Eyles, et
al. 1997]
was used to crack UNIX password encryption
[Kedem and Ishihara 1999]
, and graphics hardware has been
used in the computation of artificial neural networks
[Bohn
1998]
.
Our work uses CML to simulate dynamic phenomena that
can be described by PDEs. Related to this is the
visualization of flows described by PDEs, which has been
implemented using graphics hardware to accelerate line
integral convolution and Lagrangian-Eulerian advection
[Heidrich, et al. 1999; Jobard, et al. 2001; Weiskopf, et al. 2001]
.
NVIDIA has demonstrated the Game of Life cellular
automata running on their GPUs, as well as a 2D physically-based
water simulation that operates much like our CML
simulations
[NVIDIA 2001a;NVIDIA 2001b]
.
3.3 Common Operations
A detailed description of the implementation of the specific
simulations that we have modeled using CML would require
more space than we have in this paper, so we will instead
describe a few common CML operations, followed by details
of their implementation. Our goal in these descriptions is to
impart a feel for the kinds of operations that can be
performed using a graphics hardware implementation of a
CML model.
3.3.1 Diffusion and the Laplacian
The divergence of the gradient of a scalar function is called
the Laplacian
[Weisstein 1999]
:
\nabla^2 T(x, y) = \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2}.
The Laplacian is one of the most useful tools for working
with partial differential equations. It is an isotropic measure
of the second spatial derivative of a scalar function.
Intuitively, it can be used to detect regions of rapid change,
and for this reason it is commonly used for edge detection in
image processing. The discretized form of this equation is:
\nabla^2 T_{i,j} = T_{i+1,j} + T_{i-1,j} + T_{i,j+1} + T_{i,j-1} - 4\,T_{i,j}.
The Laplacian is used in all of the CML simulations that
we have implemented. If the result of applying a Laplacian
operator at a node T_{i,j} is scaled and then added to the
value of T_{i,j} itself, the result is diffusion [Weisstein 1999]:
T'_{i,j} = T_{i,j} + \frac{c_d}{4}\,\nabla^2 T_{i,j}. \qquad (1)
Here, c_d is the coefficient of diffusion. Application of this
diffusion operation to a lattice state will cause the state to
diffuse through the lattice (see Appendix A for details of our
diffusion implementation).
3.3.2 Directional Forces
Most dynamic simulations involve the application of force.
Like all operations in a CML model, forces are applied via
computations on the state of a node and its neighbors. As an
example, we describe a buoyancy operator used in
convection and cloud formation simulations
[Miyazaki, et al.
2001;Yanagita and Kaneko 1993;Yanagita and Kaneko 1997]
.
This buoyancy operator uses temperature state T to
compute a buoyant velocity at a node and add it to the node's
vertical velocity state, v:
v'_{i,j} = v_{i,j} + \frac{c_b}{2}\left[2T_{i,j} - T_{i+1,j} - T_{i-1,j}\right]. \qquad (2)
Equation (2) expresses that a node is buoyed upward if its
horizontal neighbors are cooler than it is, and pushed
downward if they are warmer. The strength of the buoyancy
is controlled via the parameter c_b.
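A corresponding CPU sketch of Equation (2) follows, reusing the periodic-indexing helper at() from the diffusion sketch above; again it is illustrative only, not the GPU code.

```cpp
// Sketch of the buoyancy operator of Equation (2): a node's vertical velocity
// is nudged upward when its horizontal neighbors are cooler than it is.
// cb is the buoyancy strength parameter.
#include <vector>

// periodic-indexing helper from the diffusion sketch above
float at(const std::vector<float>& T, int W, int H, int i, int j);

void buoyancy(const std::vector<float>& T, std::vector<float>& v,
              int W, int H, float cb) {
    for (int j = 0; j < H; ++j)
        for (int i = 0; i < W; ++i)
            v[j * W + i] += (cb / 2.0f) *
                (2.0f * at(T, W, H, i, j)
                 - at(T, W, H, i + 1, j) - at(T, W, H, i - 1, j));
}
```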
3.3.3 Computation on Neighbors
Sometimes an operation requires more complex computation
than the arithmetic of the simple buoyancy operation
described above. The buoyancy operation of the boiling
simulation described in Section 2.1 must also account for
phase change, and is therefore more complicated:
T'_{i,j} = T_{i,j} - \frac{\sigma}{2}\left[\rho(T_{i,j+1}) - \rho(T_{i,j-1})\right], \qquad \rho(T) = \tanh\left[\alpha\,(T - T_c)\right]. \qquad (3)
In Equation (3), σ is the buoyancy strength coefficient, and
ρ(T) is an approximation of density relative to temperature,
T. The hyperbolic tangent is used to simulate the rapid
change of density of a substance around the phase change
temperature, T_c. A change in density of a lattice node
relative to its vertical neighbors causes the temperature of the
node to be buoyed upward or downward. The thing to notice
in this equation is that simple arithmetic will not suffice: the
hyperbolic tangent function must be applied to the
temperature at the neighbors above and below node (i,j). We
will discuss how we can compute arbitrary functions using
dependent texturing in Section 3.4.
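As an illustration of how such a function can be tabulated, the sketch below precomputes ρ(T) = tanh[α(T - T_c)] into an array, playing the role that the lookup-table texture plays on the GPU; alpha, Tc, and the table bounds are assumed parameters, not values from the paper.

```cpp
// Sketch: precompute the density approximation rho(T) = tanh(alpha*(T - Tc))
// into a table indexed by normalized temperature, mirroring the dependent-
// texture lookup described in Section 3.4. Illustrative only.
#include <cmath>
#include <vector>

std::vector<float> buildDensityTable(float alpha, float Tc, int size,
                                     float Tmin, float Tmax) {
    std::vector<float> table(size);
    for (int n = 0; n < size; ++n) {
        float T = Tmin + (Tmax - Tmin) * n / (size - 1);
        table[n] = std::tanh(alpha * (T - Tc));   // one entry per temperature sample
    }
    return table;
}
```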
3.4 State Representation and Storage
Our goal is to maintain all state and operation of our
simulations in the GPU and its associated memory. To this
end, we use the frame buffer like a register array to hold
transient state, and we use textures like main memory arrays
for state storage. Since the frame buffer and textures are
typically limited to storage of 8-bit unsigned integers, state
values must be converted to this format before being written
to texture.
Texture storage can be used for both scalar and vector
data. Because of the four color channels used in image
generation, two-, three-, or four-dimensional vectors can be
stored in each texel of an RGBA texture. If scalar data are
needed, it is often advantageous to store more than one scalar
state in a single texture by using different color channels. In
our CML implementation of the Gray-Scott reaction-diffusion
system, for example, we store the concentrations of
both reactants in the same texture. This is not only efficient
in storage but also in computation since operations that act
equivalently on both concentrations can be performed in
parallel.
Physical simulation also requires the use of signed values.
Most texture storage, however, uses unsigned fixed-point
values. Although fragment-level programmability available
in current GPUs uses signed arithmetic internally, the
unsigned data stored in the textures must be biased and
scaled before and after processing
[NVIDIA 2002]
.
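The following sketch shows one way such a bias and scale could be written for 8-bit storage; it is not the paper's code, and the helper names are illustrative.

```cpp
// Sketch of bias-and-scale conversion between signed simulation values in
// [-1,1] and unsigned 8-bit texture storage in [0,1]. Illustrative only.
#include <algorithm>
#include <cstdint>

uint8_t encodeSigned(float x) {                    // [-1,1] -> [0,255]
    float biased = 0.5f * x + 0.5f;                // scale and bias into [0,1]
    return static_cast<uint8_t>(std::clamp(biased, 0.0f, 1.0f) * 255.0f + 0.5f);
}

float decodeSigned(uint8_t b) {                    // [0,255] -> [-1,1]
    return (b / 255.0f) * 2.0f - 1.0f;             // undo bias and scale
}
```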
3.5 Implementing CML Operations
An iteration of a CML simulation consists of successive
application of simple operations on the lattice. These
operations consist of three steps: setup the graphics hardware
rendering state, render a single quadrilateral fit to the view
port, and store the rendered results into a texture. We refer to
each of these setup-render-copy operations as a single pass.
In practice, due to limited GPU resources (number of texture
units, number of register combiners, etc.), a CML operation
may span multiple passes.
The setup portion of a pass simply sets the state of the
hardware to correctly perform the rest of the pass. To be
sure that the correct lattice nodes are sampled during the
pass, texels in the input textures must map directly to pixels
in the output of the graphics pipeline. To ensure that this is
true, we set the view port to the resolution of the lattice, and
the view frustum to an orthographic view fit to the lattice so
that there is a one-to-one mapping between pixels in the
rendering buffer and texels in the texture to be updated.
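As a concrete illustration of this setup step, a minimal fixed-function OpenGL sketch follows; it is not taken from the paper's code, and N (the lattice resolution) is an assumed variable name.

```cpp
// Sketch: size the viewport and an orthographic projection to the lattice so
// that one rendered pixel corresponds to exactly one texel. Illustrative only.
#include <GL/gl.h>

void setupPass(int N) {
    glViewport(0, 0, N, N);                 // one pixel per lattice node
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, N, 0.0, N, -1.0, 1.0);     // orthographic view fit to the lattice
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
```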
The render-copy portion of each pass performs 4
suboperations: Neighbor Sampling, Computation on
Neighbors, New State Computation, and State Update.
Figure
3 illustrates the mapping of the suboperations to
graphics hardware. Neighbor sampling and Computation on
Neighbors are performed by the programmable texture
mapping hardware. New State Computation performs
arithmetic on the results of the previous suboperations using
programmable texture blending. Finally, State Update feeds
the results of one pass to the next by rendering or copying
the texture blending results to a texture.
Neighbor Sampling: Since state is stored in textures,
neighbor sampling is performed by offsetting texture
coordinates toward the neighbors of the texel being updated.
Figure 3: Components of a CML operation map to graphics hardware pipeline components.
For example, to sample the four nearest neighbor nodes of
node (x,y), the texture coordinates at the corners of the
quadrilateral mentioned above are offset in the direction of
each neighbor by the width of a single texel. Texture
coordinate interpolation ensures that as rasterization
proceeds, every texel's neighbors will be correctly sampled.
Note that beyond sampling just the nearest neighbors of a
node, weighted averages of nearby nodes can be computed
by exploiting the linear texture interpolation hardware
available in GPUs. An example of this is our single-pass
implementation of 2D diffusion, described in Appendix A.
Care must be taken, though, since the precision used for
the interpolation coefficients is sometimes lower than the rest
of the texture pipeline.
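A sketch of this texel-offset trick for a single texture coordinate set follows; it is illustrative only, and in the actual implementation each bound texture unit would receive its own offset (one per neighbor).

```cpp
// Sketch: rasterize a lattice-sized quad whose texture coordinates are offset
// by (du, dv) texels, so every fragment samples that neighbor of its own node.
// Assumes the one-to-one viewport/ortho setup shown earlier. Illustrative only.
#include <GL/gl.h>

void drawQuadWithOffset(int N, float du, float dv) {   // offsets in texels
    const float w = 1.0f / N;                          // width of one texel
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f + du * w, 0.0f + dv * w); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f + du * w, 0.0f + dv * w); glVertex2f((float)N, 0.0f);
    glTexCoord2f(1.0f + du * w, 1.0f + dv * w); glVertex2f((float)N, (float)N);
    glTexCoord2f(0.0f + du * w, 1.0f + dv * w); glVertex2f(0.0f, (float)N);
    glEnd();
}
```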
Computation on Neighbors: As described in Section
3.3.3, many simulations compute complex functions of the
neighbors they sample. In many cases, these functions can
be computed ahead of time and stored in a texture for use as
a lookup table. The programmable texture shader
functionality of recent GPUs provides several dependent
texture addressing operations. We have implemented table
lookups using the "DEPENDENT_GB_TEXTURE_2D_NV" texture shader
of the GeForce 3. This shader provides memory-indirect texture
addressing: the green and blue colors read from one texture unit
are used as texture
coordinates for a lookup into a second texture unit. By
binding the precomputed lookup table texture to the second
texture unit, we can implement arbitrary function operations
on the values of the nodes (Figure 4).
New State Computation: Once we have sampled the
values of neighboring texels and optionally used them for
function table lookups, we need to compute the new state of
the lattice. We use programmable hardware texture blending
to perform arithmetic operations including addition,
multiplication, and dot products. On the GeForce 3 and 4,
we implement this using register combiners
[NVIDIA 2002]. Register combiners take the output of texture shaders and
rasterization as input, and provide arithmetic operations,
user-defined constants, and temporary registers. The result
of these computations is written to the frame buffer.
State Update: Once the new state is computed, we must
store it in a state texture. In our current implementation, we
copy the newly-rendered frame buffer to a texture using the
glCopyTexSubImage2D() instruction in OpenGL. Since all
simulation state is stored in textures, our technique avoids
large data transfers between the CPU and GPU during
simulation and rendering.
3.6 Numerical Range of CML Simulations
The physically based nature of CML simulations means that
the ranges of state values for different simulations can vary
widely. The graphics hardware we use to implement them,
on the other hand, operates only on fixed-point fragment
values in the range [0,1]. This means that we must
normalize the range of a simulation into [0,1] before it can be
implemented in graphics hardware.
Because the hardware uses limited-precision fixed-point
numbers, some simulations will be more robust to this
normalization than others. The robustness of a simulation
depends on several factors. Dynamic range is the ratio
between a simulation's largest absolute value and its smallest
non-zero absolute value. If a simulation has a high dynamic
range, it may not be robust to normalization unless the
precision of computation is high enough to represent the
dynamic range. We refer to a simulation's resolution as the
smallest absolute numerical difference that it must be able to
discern. A simulation with a resolution finer than the
resolution of the numbers used in its computation will not be
robust. Finally, as the arithmetic complexity of a simulation
increases, it will incur more roundoff error, which may
reduce its robustness when using low-precision arithmetic.
For example, the boiling simulation (Section 4.1) has a
range of approximately [0,10], but its values do not get very
close to zero, so its dynamic range is less than ten. Also, its
resolution is fairly coarse, since the event to which it is most
sensitive, phase change, is near the top of its range. For
these reasons, boiling is fairly robust under normalization.
Reaction-diffusion has a range of [0,1] so it does not require
normalization. Its dynamic range, however, is on the order
of 10^5, which is much higher than that of the 8-bit numbers
stored in textures. Fortunately, by scaling the coefficients of
reaction-diffusion, we can reduce this dynamic range
somewhat to get interesting results. However, as we
describe in Section 4.3, it suffers from precision errors (See
Section 5.1 for more discussion of precision issues). As
more precision becomes available in graphics hardware,
normalization will become less of an issue. When floating
point computation is made available, simulations can be run
within their natural ranges.
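A trivial sketch of such a normalization, assuming a simulation whose natural range [lo, hi] is known (for example, roughly [0,10] for the boiling temperature discussed above); the helper names are illustrative.

```cpp
// Sketch: map a simulation's natural range [lo,hi] into the [0,1] range of
// fixed-point fragment values, and back again for interpretation.
float toUnit(float x, float lo, float hi)   { return (x - lo) / (hi - lo); }
float fromUnit(float u, float lo, float hi) { return lo + u * (hi - lo); }
```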
Results
We have designed and built an interactive framework,
"CMLlab", for constructing and experimenting with CML
simulations (Figure 5). The user constructs a simulation
from a set of general purpose operations, such as diffusion
and advection, or special purpose operations designed for
specific simulations, such as the buoyancy operations
described in Section 3.3. Each operation processes a set of
input textures and produces a single output texture. The user
connects the outputs and inputs of the selected operations
into a directed acyclic graph. An iteration of the simulation
consists of traversing the graph in depth-first fashion so that
each operation is performed in order. The state textures
resulting from an iteration are used as input state for the next
iteration, and for displaying the simulated system.
Figure 4: Arbitrary function lookups are implemented using dependent texturing in graphics hardware.
The
results of intermediate passes in a simulation iteration can be
displayed to the user in place of the result textures. This is
useful for visually debugging the operation of a new
simulation.
While 2D simulations in our framework use only 2D
textures for storage of lattice state, 3D simulations can be
implemented in two ways. The obvious way is to use 3D
textures. However, the poor performance of copying to 3D
textures in current driver implementations would make our
simulations run much slower. Instead, we implement 3D
simulations using a collection of 2D slices to represent the
3D volume. This has disadvantages over using true 3D
textures. For example, we must implement linear filtering
and texture boundary conditions (clamp or repeat) in
software, whereas 3D texture functionality provides these in
hardware.
It is worth noting that we trade optimal performance for
flexibility in the CMLLab framework. Because we want to
allow a variety of models to be built from a set of operations,
we often incur the expense of some extra texture copies in
order to keep operations separate. Thus, our implementation
is not optimal even faster rates are achievable on the same
hardware by sacrificing operator reuse.
To demonstrate the utility of hardware CML simulation
in interactive 3D graphics applications, we have integrated
the simulation system into a virtual environment built on a
3D game engine, "Wild Magic"
[Eberly 2001]
. Figure 7 (see
color plate) is an image of a boiling witch's brew captured
from a real-time demo we built with the engine. The demo
uses our 3D boiling simulation (Section 4.1) and runs at 45
frames per second.
We will now describe three of the CML simulations that
we have implemented. The test computer we used is a PC
with a single 2.0 GHz Pentium 4 processor and 512 MB of
RAM. Tests were performed on this machine with both an
NVIDIA GeForce 3 Ti 500 GPU with 64 MB of RAM, and
an NVIDIA GeForce 4 Ti 4600 GPU with 128 MB of RAM.
4.1 Boiling
We have implemented 2D and 3D boiling simulations as
described in
[Yanagita 1992]
. Rather than simulate all
components of the boiling phenomenon (temperature,
pressure, velocity, phase of matter, etc.), their model
simulates only the temperature of the liquid as it boils. The
simulation is composed of successive application of thermal
diffusion, bubble formation and buoyancy, latent heat
transfer. Sections 3.3.1 and 3.3.3 described the first two of
these, and Section 2.1 gave an overview of the model. For
details of the latent heat transfer computation, we refer the
reader to
[Yanagita 1992]
. Our implementation requires
seven passes per iteration for the 2D simulation, and 9 passes
per slice for the 3D simulation. Table 1 shows the
simulation speed for a range of resolutions. For details of
our boiling simulation implementation, see
[Harris 2002b]
.
4.2 Convection
The Rayleigh-Bénard convection CML model of
[Yanagita
and Kaneko 1993]
simulates convection using four CML
operations: buoyancy (described in 3.3.2), thermal diffusion,
temperature and velocity advection, and viscosity and
pressure effect. The viscosity and pressure effect is
implemented as
v'_{i,j} = v_{i,j} + \frac{k_v}{4}\,\nabla^2 v + k_p\,\mathrm{grad}(\mathrm{div}\,v),
where v is the velocity, k_v is the viscosity ratio, and k_p is the
coefficient of the pressure effect. The first two terms of this
equation account for diffusion of the velocity, and the last
term is the flow caused by the gradient of the mass flow
around the lattice
[Miyazaki, et al. 2001]
. See
[Miyazaki, et al.
2001;Yanagita and Kaneko 1993]
for details of the discrete
implementation of this operation.
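Purely as an illustration, one standard central-difference discretization of div (and, applied to the resulting scalar field, grad) on a periodic lattice is sketched below; it is not necessarily the exact discrete form used in [Miyazaki, et al. 2001], and all names are illustrative.

```cpp
// Sketch: central-difference divergence of a 2D velocity field with periodic
// boundaries. The gradient of the resulting scalar is taken the same way,
// one component per axis. Illustrative only.
#include <vector>

struct Field { std::vector<float> x, y; int W, H; };   // 2D vector field

static float wrap(const std::vector<float>& a, int W, int H, int i, int j) {
    return a[((j + H) % H) * W + ((i + W) % W)];
}

std::vector<float> divergence(const Field& v) {
    std::vector<float> d(v.W * v.H);
    for (int j = 0; j < v.H; ++j)
        for (int i = 0; i < v.W; ++i)
            d[j * v.W + i] =
                0.5f * (wrap(v.x, v.W, v.H, i + 1, j) - wrap(v.x, v.W, v.H, i - 1, j)) +
                0.5f * (wrap(v.y, v.W, v.H, i, j + 1) - wrap(v.y, v.W, v.H, i, j - 1));
    return d;
}
```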
The remaining operation is advection of temperature and
velocity by the velocity field.
[Yanagita and Kaneko 1993]
implements this by distributing state from a node to its
neighbors according to the velocity at the node.
Iterations per second:
Resolution      Software   GeForce 3   GeForce 4   Speedup
64x64           266.5      1252.9      1752.5      4.7 / 6.6
128x128         61.8       679.0       926.6       11.0 / 15.0
256x256         13.9       221.3       286.6       15.9 / 20.6
512x512         3.3        61.2        82.3        18.5 / 24.9
1024x1024       0.9        15.5        21.6        17.2 / 24
32x32x32        25.5       104.3       145.8       4.1 / 5.7
64x64x64        3.2        37.2        61.8        11.6 / 19.3
128x128x128     0.4        NA          8.3         NA / 20.8
Table 1: A speed comparison of our hardware CML boiling simulation to a software version. The speedup column gives the speedup for both GeForce 3 and 4.
Figure 5: CMLlab, our interactive framework for building and experimenting with CML simulations.
In our implementation, this was made difficult by the precision
limitations of the hardware, so we used a texture shader-based
advection operation instead. This operation advects
state stored in a texture using the GL_OFFSET_TEXTURE_2D_NV
dependent texture addressing mode of the GeForce 3
and 4. A description of this method can be found in
[Weiskopf, et al. 2001]
. Our 2D convection implementation
(Figure 8 in the color plate section) requires 10 passes per
iteration. We have not implemented a 3D convection
simulation because GeForce 3 and 4 do not have a 3D
equivalent of the offset texture operation.
Due to the precision limitations of the graphics hardware,
our implementation of convection did not behave exactly as
described by
[Yanagita and Kaneko 1993]
. We do observe the
formation of convective rolls, but the motion of both the
temperature and velocity fields is quite turbulent. We
believe that this is a result of low-precision arithmetic.
4.3 Reaction-Diffusion
Reaction-Diffusion processes were proposed by
[Turing
1952]
and introduced to computer graphics by
[Turk
1991;Witkin and Kass 1991]
. They are a well-studied model
for the interaction of chemical reactants, and are interesting
due to their complex and often chaotic behavior. The
patterns that emerge are reminiscent of patterns occurring in
nature
[Lee, et al. 1993]
. We implemented the Gray-Scott
model, as described in
[Pearson 1993]
. This is a two-chemical
system defined by the initial value partial differential
equations:
\frac{\partial U}{\partial t} = D_u \nabla^2 U - UV^2 + F(1 - U),
\frac{\partial V}{\partial t} = D_v \nabla^2 V + UV^2 - (F + k)V,
where F, k, D_u, and D_v are parameters given in [Pearson 1993].
We have implemented 2D and 3D versions of this
process, as shown in Figure 5 (2D), and Figures 1 and 9 (3D,
on color plate). We found reaction-diffusion relatively
simple to implement in our framework because we were able
to reuse our existing diffusion operator. In 2D this
simulation requires two passes per iteration, and in 3D it
requires three passes per slice. A 256x256 lattice runs at 400
iterations per second in our interactive framework, and a
128x128x32 lattice runs at 60 iterations per second.
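For reference, a CPU sketch of one explicit forward-Euler Gray-Scott step matching the equations above; dt is the time step of that discretization (not specified in the text), and lap() builds on the periodic at() helper from the diffusion sketch in Section 3.3.1. Illustrative only, not the GPU implementation.

```cpp
// Sketch: one forward-Euler Gray-Scott update of reactant concentrations U, V.
#include <vector>

// periodic-indexing helper from the diffusion sketch in Section 3.3.1
float at(const std::vector<float>& a, int W, int H, int i, int j);

float lap(const std::vector<float>& a, int W, int H, int i, int j) {
    return at(a, W, H, i + 1, j) + at(a, W, H, i - 1, j)
         + at(a, W, H, i, j + 1) + at(a, W, H, i, j - 1)
         - 4.0f * at(a, W, H, i, j);
}

void grayScottStep(std::vector<float>& U, std::vector<float>& V, int W, int H,
                   float Du, float Dv, float F, float k, float dt) {
    std::vector<float> Un = U, Vn = V;
    for (int j = 0; j < H; ++j)
        for (int i = 0; i < W; ++i) {
            int n = j * W + i;
            float uvv = U[n] * V[n] * V[n];                       // the UV^2 term
            Un[n] = U[n] + dt * (Du * lap(U, W, H, i, j) - uvv + F * (1.0f - U[n]));
            Vn[n] = V[n] + dt * (Dv * lap(V, W, H, i, j) + uvv - (F + k) * V[n]);
        }
    U = Un;
    V = Vn;
}
```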
The low precision of the GeForce 3 and 4 reduces the
variety of patterns that our implementation of the Gray-Scott
model produces. We have seen a variety of results, but much
less diversity than produced by a floating point
implementation. As with convection, this appears to be
caused by the effects of low-precision arithmetic.
Hardware Limitations
While current GPUs make a good platform for CML
simulation, they are not without problems. Some of these
problems are performance problems of the current
implementation, and may not be issues in the near future.
NVIDIA has shown in the past that slow performance can
often be alleviated via optimization of the software drivers
that accompany the GPU. Other limitations are more
fundamental.
Most of the implementation limitations that we
encountered were limitations that affected performance. We
have found glCopyTexSubImage3D(), which copies the
frame buffer to a slice of a 3D texture, to be much slower (up
to three orders of magnitude) than glCopyTexSubImage2D()
for the same amount of data. This prevented us from using
3D textures in our implementation. Once this problem is
alleviated, we expect a 3D texture implementation to be
faster and easier to implement, since it will remove the need
to bind multiple textures to sample neighbors in the third
dimension. Also, 3D textures provide hardware linear
interpolation and boundary conditions (periodic or fixed) in
all three dimensions. With our slice-based implementation,
we must interpolate and handle boundary conditions in the
third dimension in software.
The ability to render to texture will also provide a speed
improvement, as we estimate that in a complex 3D
simulation, much of the processing time is spent copying
rendered data from the frame buffer to textures (typically one
copy per pass). When using 3D textures, we will need the
ability to render to a slice of a 3D texture.
5.1 Precision
The hardware limitation that causes the most problems to
our implementation is precision. The register combiners in
the GeForce 3 and 4 perform arithmetic using nine-bit signed
fixed-point values. Without floating point, the programmer
must scale and bias values to maintain them in ranges that
maximize precision. This is not only difficult, it is subject to
arithmetic error. Some simulations (such as boiling) handle
this error well, and behave as predicted by a floating point
implementation. Others, such as our reaction-diffusion
implementation, are more sensitive to precision errors.
We have done some analysis of the error introduced by
low precision and experiments to determine how much
precision is needed (for full details, see [Harris 2002a]).
Figure 6: High-precision fragment computations in near-future graphics hardware will enable accurate simulation of reaction-diffusion at hundreds of iterations per second.
We hypothesize that the diffusion operation is very susceptible to
roundoff error, because in our experiments in CMLlab, iterated
application of a diffusion operator never fully diffuses its input.
We derive the error induced by each application of diffusion (in 2D)
to a node (i,j) as
\left(3 + \frac{3d}{4} + d\,x_{i,j}\right)\epsilon,
where d is the diffusion coefficient, x_{i,j} is the value at node
(i,j), and ε is the amount of roundoff error in each arithmetic
operation. Since d and x_{i,j} are in the range [0,1], this error is
bounded above by 4.75ε. With 8 bits of precision, ε is at most 2^-9.
This error is fairly large, meaning that a simulation that is
sensitive to small numbers will quickly diverge.
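For concreteness, a worked evaluation of this bound (arithmetic only):

\max_{d,\,x_{i,j}\in[0,1]}\left(3 + \frac{3d}{4} + d\,x_{i,j}\right)\epsilon = 4.75\,\epsilon, \qquad 4.75 \times 2^{-9} \approx 9.3\times10^{-3},

which is roughly two to three quantization steps of an 8-bit value (1/255 ≈ 3.9×10^{-3}) per application of the operator.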
In an attempt to better understand the precision needs of
our more sensitive simulations, we implemented a software
version of our reaction-diffusion simulation with adjustable
fixed-point precision. Through experimentation, we have
found that with 14 or more bits of fixed-point precision, the
behavior of this simulation is visually very similar to our
single-precision floating-point implementation. Like the
floating-point version, a diverse variety of patterns grow,
evolve, and sometimes develop unstable formations that
never cease to change. Figure 6 shows a variety of patterns
generated with this 14-bit fixed-point simulation.
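A sketch of the kind of adjustable fixed-point quantization such a software experiment can use (illustrative only): every value is snapped to the nearest multiple of 2^-bits after each operation.

```cpp
// Sketch: emulate b-bit fixed-point arithmetic by rounding each result to the
// nearest representable value, e.g. quantize(v, 14) for the 14-bit experiments
// described above and quantize(v, 8) for 8-bit texture storage.
#include <cmath>

float quantize(float x, int bits) {
    const float scale = std::ldexp(1.0f, bits);    // 2^bits
    return std::round(x * scale) / scale;          // snap to nearest step
}
```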
Graphics hardware manufacturers are quickly moving
toward higher-quality pixels. This goal, along with
increasing programmability, makes high-precision
computation essential. Higher precision, including floating-point
fragment values, will become a standard feature of
GPUs in the near future
[Spitzer 2002]
. With the increasing
precision and programmability of GPUs, we believe that
CML methods for simulating natural phenomena using
graphics hardware will become very useful.
Conclusions and Future Work
In this paper, we have described a method for simulating a
variety of dynamic phenomena using graphics hardware. We
presented the coupled map lattice as a simple and flexible
simulation technique, and showed how CML operations map
to computer graphics hardware operations. We have
described common CML operations and how they can be
implemented on programmable GPUs.
Our hardware CML implementation shows a substantial
speed increase (up to 25 times on a GeForce 4) over the same
simulations implemented to run on a Pentium 4 CPU.
However, this comparison (and the speedup numbers in
Table 1) should be taken with a grain of salt. While our
CPU-based CML simulator is an efficient, straightforward
implementation that obeys common cache coherence
principles, it is not highly optimized, and could be
accelerated by using vectorized CPU instructions. Our
graphics hardware implementation is not highly optimized
either. We sacrifice optimal speed for flexibility. The CPU
version is also written to use single precision floats, while
the GPU version uses fixed-point numbers with much less
precision. Nevertheless, we feel that it would be difficult, if
not impossible, to achieve a 25x speedup over our current
CPU implementation by optimizing the code and using lower
precision numbers. A more careful comparison and
optimized simulations on both platforms would be useful in
the future.
"CMLlab", our flexible framework for building CML
models, allows a user to experiment with simulations
running on graphics processors. We have described various
2D and 3D simulations that we have implemented in this
framework. We have also integrated our CML framework
with a 3D game engine to demonstrate the use of 3D CML
models in interactive scenes and virtual environments. In the
future, we would like to add more flexibility to CMLlab.
Users currently cannot define new, custom operations
without writing C++ code. It would be possible, however, to
provide generic, scriptable operators, since the user
microcode that runs on the GPU can be dynamically loaded.
We have described the problems we encountered in
implementing CML in graphics hardware, such as limited
precision and 3D texturing performance problems. We
believe that these problems will be alleviated in near future
generations of graphics hardware. With the continued
addition of more texture units, memory, precision, and more
flexible programmability, graphics hardware will become an
even more powerful platform for visual simulation. Some
relatively simple extensions to current graphics hardware and
APIs would benefit CML and PDE simulation. For example,
the ability to render to 3D textures could simplify and
accelerate each pass of our simulations. One avenue for
future research is to increase parallelization of simulations on
graphics hardware. Currently, it is difficult to add multiple
GPUs to a single computer because PCs have a single AGP
port. If future PC hardware adds support for multiple GPUs,
powerful multiprocessor machines could be built with these
inexpensive processors.
We plan to continue exploring the use of CML on current
and future generations of graphics hardware. We are
interested in porting our system to ATI Radeon hardware.
The Radeon 8500 can sample more textures per pass and has
more programmable texture addressing than GeForce 3,
which could add power to CML simulations. Also, our
current framework relies mostly on the power of the
fragment processing pipeline, and uses none of the power
available in the programmable vertex engine. We could
greatly increase the complexity of simulations by taking
advantage of this. Currently, this would incur additional cost
for feedback of the output of the fragment pipeline (through
the main memory) and back into the vertex pipeline, but
depending on the application, it may be worth the expense.
GPU manufacturers could improve the performance of this
feedback by allowing textures in memory to be interpreted as
vertex meshes for processing by the vertex engine, thus
avoiding unnecessary transfers back to the host.
We hope to implement the cloud simulation described by
[Miyazaki, et al. 2001]
in the near future, as well as other
dynamic phenomena. Also, since the boiling simulation of
[Yanagita 1992]
models only temperature, and disregards
surface tension, the bubbles are not round. We are interested
in extending this simulation to improve its realism. We plan
to continue exploring the use of computer graphics hardware
for general computation. As an example, the anisotropic
diffusion that can be performed on a GPU may be useful for
image-processing and computer vision applications.
Acknowledgements
The authors would like to thank Steve Molnar, John Spitzer
and the NVIDIA Developer Relations team for answering
many questions. This work was supported in part by
NVIDIA Corporation, US NIH National Center for Research
Resources Grant Number P41 RR 02170, US Office of
Naval Research N00014-01-1-0061, US Department of
Energy ASCI program, and National Science Foundation
grants ACR-9876914 and IIS-0121293.
Appendix A: Implementation of Diffusion
On GeForce 3 hardware, the diffusion operation can be
implemented more efficiently than the Laplacian operator
itself. To do so, we rewrite Equation (1) as
T'_{i,j} = (1 - c_d)\,T_{i,j} + \frac{c_d}{4}\left(T_{i+1,j} + T_{i-1,j} + T_{i,j+1} + T_{i,j-1}\right)
         = \frac{1}{4}\sum_{k=1}^{4}\left[(1 - c_d)\,T_{i,j} + c_d\,T_{n_k(i,j)}\right],
where n_k(x,y) represents the kth nearest neighbor of (x,y). In
this form, we see that the diffusion operator is the average of
four weighted sums of the center texel, T_{i,j}, and its four
nearest neighbor texels. These weighted sums are actually linear
interpolation computations, with c_d as the parameter of
interpolation. This means that we can implement the diffusion
operation described by Equation (1) by enabling linear texture
filtering and using texture coordinate offsets of c_d·w, where w
is the width of a texel, as described in Section 3.5.
References
[Bohn 1998] Bohn, C.-A. Kohonen Feature Mapping
Through Graphics Hardware. In Proceedings of 3rd Int.
Conference on Computational Intelligence and
Neurosciences 1998. 1998.
[Cabral, et al. 1994] Cabral, B., Cam, N. and Foran, J.
Accelerated Volume Rendering and Tomographic
Reconstruction Using Texture Mapping Hardware. In
Proceedings of Symposium on Volume Visualization 1994,
91-98. 1994.
[Carr, et al. 2002] Carr, N.A., Hall, J.D. and Hart, J.C. The
Ray Engine. In Proceedings of SIGGRAPH / Eurographics
Workshop on Graphics Hardware 2002. 2002.
[Dobashi, et al. 2000] Dobashi, Y., Kaneda, K., Yamashita,
H., Okita, T. and Nishita, T. A Simple, Efficient Method for
Realistic Animation of Clouds. In Proceedings of
SIGGRAPH 2000, ACM Press / ACM SIGGRAPH, 19-28.
2000.
[Eberly 2001] Eberly, D.H. 3D Game Engine Design.
Morgan Kaufmann Publishers. 2001.
[England 1978] England, J.N. A system for interactive
modeling of physical curved surface objects. In Proceedings
of SIGGRAPH 78 1978, 336-340. 1978.
[Eyles, et al. 1997] Eyles, J., Molnar, S., Poulton, J., Greer,
T. and Lastra, A. PixelFlow: The Realization. In Proceedings
of 1997 SIGGRAPH / Eurographics Workshop on Graphics
Hardware 1997, ACM Press, 57-68. 1997.
[Fedkiw, et al. 2001] Fedkiw, R., Stam, J. and Jensen, H.W.
Visual Simulation of Smoke. In Proceedings of SIGGRAPH
2001, ACM Press / ACM SIGGRAPH. 2001.
[Foster and Metaxas 1997] Foster, N. and Metaxas, D.
Modeling the Motion of a Hot, Turbulent Gas. In
Proceedings of SIGGRAPH 1997, ACM Press / ACM
SIGGRAPH, 181-188. 1997.
[Harris 2002a] Harris, M.J. Analysis of Error in a CML
Diffusion Operation. University of North Carolina Technical
Report TR02-015.
http://www.cs.unc.edu/~harrism/cml/dl/HarrisTR02-015.pdf
. 2002a.
[Harris 2002b] Harris, M.J. Implementation of a CML
Boiling Simulation using Graphics Hardware. University of
North Carolina Technical Report TR02-016.
http://www.cs.unc.edu/~harrism/cml/dl/HarrisTR02-016.pdf
. 2002b.
[Heidrich, et al. 1999] Heidrich, W., Westermann, R., Seidel,
H.-P. and Ertl, T. Applications of Pixel Textures in
Visualization and Realistic Image Synthesis. In Proceedings
of ACM Symposium on Interactive 3D Graphics 1999. 1999.
[Hoff, et al. 1999] Hoff, K.E.I., Culver, T., Keyser, J., Lin,
M. and Manocha, D. Fast Computation of Generalized
Voronoi Diagrams Using Graphics Hardware. In
Proceedings of SIGGRAPH 1999, ACM / ACM Press, 277-286
. 1999.
[Jobard, et al. 2001] Jobard, B., Erlebacher, G. and Hussaini,
M.Y. Lagrangian-Eulerian Advection for Unsteady Flow
Visualization. In Proceedings of IEEE Visualization 2001.
2001.
[Kaneko 1993] Kaneko, K. (ed.), Theory and applications of
coupled map lattices. Wiley, 1993.
[Kapral 1993] Kapral, R. Chemical Waves and Coupled Map
Lattices. in Kaneko, K. ed. Theory and Applications of
Coupled Map Lattices, Wiley, 135-168. 1993.
[Kedem and Ishihara 1999] Kedem, G. and Ishihara, Y.
Brute Force Attack on UNIX Passwords with SIMD
Computer. In Proceedings of The 8th USENIX Security
Symposium 1999. 1999.
[Lee, et al. 1993] Lee, K.J., McCormick, W.D., Ouyang, Q.
and Swinn, H.L. Pattern Formation by Interacting Chemical
Fronts. Science, 261. 192-194. 1993.
[Lengyel, et al. 1990] Lengyel, J., Reichert, M., Donald, B.R.
and Greenberg, D.P. Real-Time Robot Motion Planning
Using Rasterizing Computer Graphics Hardware. In
Proceedings of SIGGRAPH 1990, 327-335. 1990.
[Lindholm, et al. 2001] Lindholm, E., Kilgard, M. and
Moreton, H. A User Programmable Vertex Engine. In
Proceedings of SIGGRAPH 2001, ACM Press / ACM
SIGGRAPH, 149-158. 2001.
[Miyazaki, et al. 2001] Miyazaki, R., Yoshida, S., Dobashi,
Y. and Nishita, T. A Method for Modeling Clouds Based on
Atmospheric Fluid Dynamics. In Proceedings of The Ninth
Pacific Conference on Computer Graphics and Applications
2001, IEEE Computer Society Press, 363-372. 2001.
[Nagel and Raschke 1992] Nagel, K. and Raschke, E. Self-organizing
criticality in cloud formation? Physica A, 182.
519-531. 1992.
[Nishimori and Ouchi 1993] Nishimori, H. and Ouchi, N.
Formation of Ripple Patterns and Dunes by Wind-Blown
Sand. Physical Review Letters, 71 1. 197-200. 1993.
[NVIDIA 2002] NVIDIA. NVIDIA OpenGL Extension
Specifications.
http://developer.nvidia.com/view.asp?IO=nvidia_opengl_specs
. 2002.
[NVIDIA 2001a] NVIDIA. NVIDIA OpenGL Game Of Life
Demo.
http://developer.nvidia.com/view.asp?IO=ogl_gameoflife
.
2001a.
[NVIDIA 2001b] NVIDIA. NVIDIA Procedural Texture
Physics Demo.
http://developer.nvidia.com/view.asp?IO=ogl_dynamic_bumpreflection
.
2001b.
[Olano and Lastra 1998] Olano, M. and Lastra, A. A Shading
Language on Graphics Hardware: The PixelFlow Shading
System. In Proceedings of SIGGRAPH 1998, ACM / ACM
Press, 159-168. 1998.
[Pearson 1993] Pearson, J.E. Complex Patterns in a Simple
System. Science, 261. 189-192. 1993.
[Peercy, et al. 2000] Peercy, M.S., Olano, M., Airey, J. and
Ungar, P.J. Interactive Multi-Pass Programmable Shading. In
Proceedings of SIGGRAPH 2000, ACM Press / ACM
SIGGRAPH, 425-432. 2000.
[Potmesil and Hoffert 1989] Potmesil, M. and Hoffert, E.M.
The Pixel Machine: A Parallel Image Computer. In
Proceedings of SIGGRAPH 89 1989, ACM, 69-78. 1989.
[Proudfoot, et al. 2001] Proudfoot, K., Mark, W.R.,
Tzvetkov, S. and Hanrahan, P. A Real-Time Procedural
Shading System for Programmable Graphics Hardware. In
Proceedings of SIGGRAPH 2001, ACM Press / ACM
SIGGRAPH, 159-170. 2001.
[Purcell, et al. 2002] Purcell, T.J., Buck, I., Mark, W.R. and
Hanrahan, P. Ray Tracing on Programmable Graphics
Hardware. In Proceedings of SIGGRAPH 2002, ACM /
ACM Press. 2002.
[Qian, et al. 1996] Qian, Y.H., Succi, S. and Orszag, S.A.
Recent Advances in Lattice Boltzmann Computing. in
Stauffer, D. ed. Annual Reviews of Computational Physics
III, World Scientific, 195-242. 1996.
[Rhoades, et al. 1992] Rhoades, J., Turk, G., Bell, A., State,
A., Neumann, U. and Varshney, A. Real-Time Procedural
Textures. In Proceedings of Symposium on Interactive 3D
Graphics 1992, ACM / ACM Press, 95-100. 1992.
[Spitzer 2002] Spitzer, J. Shading and Game Development
(Presentation on NVIDIA Technology). IBM EDGE
Workshop. 2002.
[Stam 1999] Stam, J. Stable Fluids. In Proceedings of
SIGGRAPH 1999, ACM Press / ACM SIGGRAPH, 121-128
. 1999.
[Toffoli and Margolus 1987] Toffoli, T. and Margolus, N.
Cellular Automata Machines. The MIT Press. 1987.
[Trendall and Steward 2000] Trendall, C. and Steward, A.J.
General Calculations using Graphics Hardware, with
Applications to Interactive Caustics. In Proceedings of
Eurogaphics Workshop on Rendering 2000, Springer, 287-298
. 2000.
[Turing 1952] Turing, A.M. The chemical basis of
morphogenesis. Transactions of the Royal Society of London,
B237. 37-72. 1952.
[Turk 1991] Turk, G. Generating Textures on Arbitrary
Surfaces Using Reaction-Diffusion. In Proceedings of
SIGGRAPH 1991, ACM Press / ACM SIGGRAPH, 289-298
. 1991.
[von Neumann 1966] von Neumann, J. Theory of Self-Reproducing
Automata. University of Illinois Press. 1966.
[Weiskopf, et al. 2001] Weiskopf, D., Hopf, M. and Ertl, T.
Hardware-Accelerated Visualization of Time-Varying 2D
and 3D Vector Fields by Texture Advection via
Programmable Per-Pixel Operations. In Proceedings of
Vision, Modeling, and Visualization 2001, 439-446. 2001.
[Weisstein 1999] Weisstein, E.W. CRC Concise
Encyclopedia of Mathematics. CRC Press. 1999.
[Witkin and Kass 1991] Witkin, A. and Kass, M. Reaction-Diffusion
Textures. In Proceedings of SIGGRAPH 1991,
ACM Press / ACM SIGGRAPH, 299-308. 1991.
[Wolfram 1984] Wolfram, S. Cellular automata as models of
complexity. Nature, 311. 419-424. 1984.
[Yanagita 1992] Yanagita, T. Phenomenology of boiling: A
coupled map lattice model. Chaos, 2 3. 343-350. 1992.
[Yanagita and Kaneko 1993] Yanagita, T. and Kaneko, K.
Coupled map lattice model for convection. Physics Letters A,
175. 415-420. 1993.
[Yanagita and Kaneko 1997] Yanagita, T. and Kaneko, K.
Modeling and Characterization of Cloud Dynamics. Physical
Review Letters, 78 22. 4297-4300. 1997.
Figure 7: A CML boiling simulation running in an
interactive 3D environment (the steam is a particle
system).
Figure 9: A sequence from our 3D version of the Gray-Scott reaction-diffusion model.
Figure 8: A CML convection simulation. The left panel
shows temperature; the right panel shows 2D velocity
encoded in the blue and green color channels.
| Coupled Map Lattice;Visual Simulation;Reaction-Diffusion;dynamic phenomena;Multipass Rendering;simulation;CML;graphic hardware;Graphics Hardware |
149 | Physiological Measures of Presence in Stressful Virtual Environments | A common measure of the quality or effectiveness of a virtual environment (VE) is the amount of presence it evokes in users. Presence is often defined as the sense of being there in a VE. There has been much debate about the best way to measure presence, and presence researchers need, and have sought, a measure that is reliable, valid, sensitive, and objective. We hypothesized that to the degree that a VE seems real, it would evoke physiological responses similar to those evoked by the corresponding real environment, and that greater presence would evoke a greater response. To examine this, we conducted three experiments, the results of which support the use of physiological reaction as a reliable, valid, sensitive, and objective presence measure. The experiments compared participants' physiological reactions to a non-threatening virtual room and their reactions to a stressful virtual height situation. We found that change in heart rate satisfied our requirements for a measure of presence, change in skin conductance did to a lesser extent, and that change in skin temperature did not. Moreover, the results showed that inclusion of a passive haptic element in the VE significantly increased presence and that for presence evoked: 30FPS > 20FPS > 15FPS. | Introduction
Virtual environments (VEs) are the most sophisticated human-computer
interfaces yet developed. The effectiveness of a VE
might be defined in terms of enhancement of task performance,
effectiveness for training, improvement of data comprehension,
etc. A common metric of VE quality is the degree to which the VE
creates in the user the subjective illusion of presence: a sense of
being in the virtual, as opposed to the real, environment. Since
presence is a subjective condition, it has most commonly been
measured by self-reporting, either during the VE experience or
immediately afterwards by questionnaires. There has been
vigorous debate as to how to best measure presence [Barfield et al.
1995; Ellis 1996; Freeman et al. 1998; IJsselsteijn and de Ridder
1998; Lombard and Ditton 1997; Regenbrecht and Schubert 1997;
Schubert et al. 1999; Sheridan 1996; Slater 1999; Witmer and
Singer 1998].
In order to study a VE's effectiveness in evoking presence,
researchers need a well-designed and verified measure of the
phenomena. This paper reports our evaluation of three
physiological measures heart rate, skin conductance, and skin
temperature as alternate operational measures of presence in
stressful VEs. Since the concept and idea of measuring presence
are heavily debated, finding a measure that could find wide
acceptance would be ideal. In that hope, we investigated the
reliability, validity, sensitivity, and objectivity of each
physiological measure.
Figure 1. Side view of the virtual environment. Subjects start
in the Training Room and later enter the Pit Room.
1.2. Physiological Reaction as a Surrogate
Measure of Presence
As VE system and technology designers, we have sought for a
presence measure that is
Reliable: produces repeatable results, both from trial to trial on the same subject and across subjects,
Valid: measures subjective presence, or at least correlates with well-established subjective presence measures,
Sensitive: discriminates among multiple levels of presence, and
Objective: is well shielded from both subject and experimenter bias.
We hypothesize that to the degree that a VE seems real, it will
evoke physiological responses similar to those evoked by the
corresponding real environment, and that greater presence will
evoke a greater response. If so, these responses can serve as
objective surrogate measures of subjective presence.
Of the three physiological measures in our studies, Change
in Heart Rate performs best. It consistently differentiates among
conditions with more sensitivity and more statistical power than
the other physiological measures, and more than most of the self-reported
measures. It also best correlates with the reported
measures.
Figure 2. View of the 20' pit from the wooden ledge.
Change in Skin Temperature is less sensitive, less powerful, and
slower responding than Change in Heart Rate, although its
response curves are similar. It also correlates with reported
measures. Our results and the literature on skin temperature
reactions suggest that Change in Skin Temperature would
differentiate among conditions better if the exposures to the
stimulus were at least 2 minutes [McMurray 1999; Slonim 1974].
Ours averaged 1.5 minutes in each experiment.
Change in
Skin Conductance Level yielded significant
differentiation in some experiments but was not so consistent as
Change in Heart Rate. More investigation is needed to establish
whether it can reliably differentiate among multiple levels of
presence.
Since Change in Heart Rate best followed the hypotheses, the
remainder of this paper will treat chiefly the results for it. For a
full account of all measures, please see [Meehan 2001].
1.3. Our Environment and Measures
We use a derivative of the compelling VE reported by Usoh et al.
[1999]. Figure 1 shows the environment: a Training Room, quite
ordinary, and an adjacent Pit Room, with an unguarded hole in the
floor leading to a room 20 ft. below. On the upper level the Pit
Room is bordered with a 2-foot wide walkway. The 18x32 foot, 2-room
virtual space fits entirely within the working space of our
lab's wide-area ceiling tracker. Users, equipped with a head-tracked
stereoscopic head-mounted display, practice walking about
and picking up and placing objects in the Training Room. Then
they are told to carry an object into the next room and place it at a
designated spot. The door opens, and they walk through it to an
unexpected hazard, a virtual drop of 20 ft. if they move off the
walkway. Below is a furnished Living Room (Figure 2).
Users report feeling frightened. Some report vertigo. Some will
not walk out on the ledge and ask to stop the experiment or demo
at the doorway. A few boldly walk out over the hole, as if there
were a solid glass floor. For most of us, doing that, if we can,
requires conscious mustering of will.
This environment, with its ability to elicit a fear reaction in users,
enables investigation of physiological reaction as a measure of
presence. If such a strongly stress-inducing VE does not produce significant physiological reactions, a less stressful VE won't. This
investigation is a first step. Follow-on research should investigate
whether less stressful environments also elicit statistically
significant physiological reactions.
The remainder of this section discusses the physiological measures we
tested and the reported measures we used to evaluate validity.
1.3.1. The Physiological Measures
As stated above, we investigated three physiological metrics that
measure stress in real environments [Andreassi 1995;
Guyton 1986; Weiderhold et al. 1998]:
Change in heart rate (ΔHeart Rate): the heart beats faster under stress.
Change in skin conductance (ΔSkin Conductance Level): the skin of the palm sweats more under stress, independently of temperature, so its conductance rises.
Change in skin temperature (ΔSkin Temperature): circulation slows in the extremities under stress, causing skin temperature to drop.
Each of these measures was constructed to increase when the physiological reaction to the Pit Room was greater:
ΔHeart Rate = mean HR in Pit Room - mean HR in Training Room
ΔSkin Conductance = mean SC in Pit Room - mean SC in Training Room
ΔSkin Temperature = mean ST in Training Room - mean ST in Pit Room
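For concreteness, a minimal sketch of how these difference scores could be computed from per-room signal samples; the array arguments and function name are hypothetical and are not taken from the authors' analysis code.

```python
import numpy as np

def presence_deltas(hr_pit, hr_train, sc_pit, sc_train, st_pit, st_train):
    """Compute the three difference measures so that each one increases
    when the physiological reaction to the Pit Room is greater.
    Each argument is a 1-D array of samples recorded in one room."""
    delta_hr = np.mean(hr_pit) - np.mean(hr_train)   # heart rate rises under stress
    delta_sc = np.mean(sc_pit) - np.mean(sc_train)   # skin conductance rises under stress
    delta_st = np.mean(st_train) - np.mean(st_pit)   # skin temperature drops under stress
    return delta_hr, delta_sc, delta_st
```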
We first measured heart rate with a convenient finger-mounted
blood-pulse plethysmograph, but the noise generated by the sensor
moving on the finger made the signal unstable and unusable. We
then went to more cumbersome chest-attached three-electrode
electrocardiography (ECG). This gave a good signal. Skin
conductivity and skin temperature were successfully measured on
the fingers. Once connected, users reported forgetting about the physiological sensors; the sensors did not cause breaks in presence during the experiments. Figure 3 shows a subject wearing the
physiological monitoring equipment.
1.3.2. The Reported Measures
Reported Presence. We used the University College London
(UCL) questionnaire [Slater et al. 1995; Usoh et al. 1999]. The
UCL questionnaire contains seven questions that measure presence
(Reported Presence), three questions that measure behavioral
presence (Reported Behavioral Presence), i.e., whether the user acts as if in a similar real environment, and three that measure ease of
locomotion (Ease of Locomotion). Responses for each question
are on a scale of 1 to 7. Reported Ease of Locomotion was
administered for consistency with earlier experiments, but we do
not report on it in this paper.
Figure 3. Subject wearing HMD and physiological monitoring
equipment in the "Pit Room".
Even though each question is rated on a scale of 1-7, Slater et al.
use it only to yield a High-Presence/ Low-Presence result. A
judgment must be made as to the high-low threshold. Slater et al.
have investigated the use of 6 and 7 as "high" responses [≥6] and the use of 5, 6, and 7 as "high" responses [≥5], as well as other constructions: addition of raw scores, and a combination based on principal-components analysis. They found that [≥6] better followed conditions [Slater et al. 1994], and therefore they chose that construction. We found that the [≥5] construction better follows presence conditions but has lower correlations with our physiological measures. Therefore, in order to best follow the original intention of the measures, irrespective of the lower correlations with our measures, we chose the [≥5] construction.
On the study for which data is published, Slater's subjects rarely
(<10%) reported "5" values; over 25% of our subjects did. One
explanation for this difference in subjects' reporting may be that
university students today expect more technically of a VE than
they did several years ago and, therefore, are more likely to report
lower values (5s) even for the most presence-inducing VEs.
Reported Behavioral Presence. Three questions asked subjects if
they behaved as if present when in the VE. The count of high
scores [≥5] on these questions made up the Reported Behavioral
Presence measure.
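As an illustration of the count-of-high-scores construction used for Reported Presence and Reported Behavioral Presence, a minimal sketch; the response list is hypothetical.

```python
def count_high(responses, threshold=5):
    """Count responses scored 'high' on a 1-7 scale.
    threshold=5 gives the [>=5] construction, threshold=6 the [>=6] construction."""
    return sum(1 for r in responses if r >= threshold)

# Example: seven presence questions answered on a 1-7 scale
reported_presence = count_high([6, 5, 7, 4, 5, 3, 6])   # -> 5 'high' answers
```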
                          Multiple Exposures          Passive Haptics              Frame Rate
Presence in VEs           Does presence decrease      Do Passive Haptics           Does higher Frame Rate
                          with exposures?             increase presence?           increase presence?
Reliability of Measures   Are repeated measures       Regardless of condition, will the Pit Room evoke
                          highly correlated?          similar physiological reactions on every exposure?
Validity                  Do results correlate with reported measures?
Sensitivity of Measures   Do measures detect          Do measures distinguish      Do measures distinguish
                          any effect?                 between 2 conditions?        among 4 conditions?
Table 1. Questions investigated in each study.
1.4. Methods and Procedures
1.4.1. Experimental procedures.
We conducted three experiments: Effects of Multiple Exposures on
Presence (Multiple Exposures), Effects of Passive Haptics on
Presence (Passive Haptics), and Effects of Frame Rate on Presence
(Frame Rate). Each of the three studies investigated some
interesting aspect of VEs and the properties of the physiological
measures themselves. Table 1 summarizes all the questions
studied. For all studies we excluded subjects who had previously
experienced VEs more than three times. The experiments were
also limited to subjects who were ambulatory, could use stereopsis
for depth perception, had no history of epilepsy or seizure, were
not overly prone to motion sickness, were in their usual state of
good physical fitness at the time of the experiment, and were
comfortable with the equipment.
Multiple Exposures: 10 subjects (average age 24.4; σ = 8.2; 7 female, 3 male) were trained to pick up books and move about in
the Training Room at which time a physiological baseline was
taken. Subjects then carried a virtual book from the Training
Room and placed it on a virtual chair on the far side of the Pit
Room. After that, they returned to the Training Room. The
subjects performed this task three times per day on four separate
days. We investigated whether the presence-evoking power of a
VE declines with multiple exposures. Heart Rate was not
successfully measured in this study due to problems with the
sensor.
Figure 4. Subject in slippers with toes over 1.5-inch ledge.
Passive Haptics: 52 subjects (average age 21.4; σ = 4.3; 16 female, 36 male) reported on two days. Subjects experienced the
VE with the 1.5-inch wooden ledge on one of their two days. The
1.5-inch height was selected so that the edge-probing foot did not
normally contact the real laboratory floor where the virtual pit was
seen. On their other day, subjects experienced the VE without the
ledge. Subjects were counterbalanced as to the order of
presentation of the physical ledge. Subjects performed all
exposures to the VE wearing only thin sock-like slippers (Figure
4). The task was the same as in the Multiple Exposures study
except subjects were instructed to walk to the edge of the wooden
platform, place their toes over the edge, and count to ten before
they proceeded to the chair on the far side of the room to drop the
book. We investigated whether the 1.5-inch wooden ledge
increased the presence-evoking power of the VE.
Frame Rate: 33 participants (average age 22.3; σ = 3.6; 8 female,
25 male) entered the VE four times on one day and were presented
the same VE with a different frame rate each time. The four frame
rates were 10, 15, 20, and 30 frames-per-second (FPS). Subjects
were counterbalanced as to the order of presentation of the four
frame rates.

                                         All exposures                        First Exposure Only (Between Subjects)
Study               Variable             Mean          P        % > 0   N     Mean          P        % > 0   N
Multiple Exposures  ΔSkin Conductance    2.3 mSiemens  < .001   99%     112   2.9 mSiemens  .002     100%    9
Multiple Exposures  ΔSkin Temperature    0.6 °F        < .001   77%     94    1.2 °F        .015     100%    7
Passive Haptics     ΔHeart Rate          6.3 BPM       < .001   89%     92    6.2 BPM       < .001   85%     46
Passive Haptics     ΔSkin Conductance    4.8 mSiemens  < .001   100%    100   4.7 mSiemens  < .001   100%    50
Passive Haptics     ΔSkin Temperature    1.1 °F        < .001   90%     98    1.1 °F        < .001   94%     49
Frame Rate          ΔHeart Rate          6.3 BPM       < .001   91%     132   8.1 BPM       < .001   91%     33
Frame Rate          ΔSkin Conductance    2.0 mSiemens  < .001   87%     132   2.6 mSiemens  < .001   97%     33
Frame Rate          ΔSkin Temperature    0.8 °F        < .001   100%    132   1.0 °F        < .001   100%    33
Table 2. Summary of means and significance of differences (Δ) between Training Room and Pit Room. The mean, P-value for the one-sample t-test, percentage of times the measure was > 0, and number of samples are shown. The left side shows the means and significances of all exposures. The right side shows these for only subjects' first exposures. The greater mean is shown in bold.

Subjects were trained to pick up and drop blocks in
the Training Room and then carried a red block to the Pit Room
and dropped it on a red X-target on the floor of the Living Room, a
procedural improvement that forced subjects to look down into the
pit. They then plucked from the air two other colored blocks
floating in the Pit Room and dropped each on the correspondingly-colored
Xs on the floor of the Living Room. The X-targets and the
green and blue blocks are visible in Figures 1 and 2. In this study,
we investigated the effect of several different frame rates on
presence and hypothesized that the higher the frame rate, the
greater the presence evoked.
In all three studies, the amount of physical activity (walking,
manipulating objects) was approximately balanced between the Pit
and Training Rooms. This lessened any difference between the
two rooms in physiological reaction due to physical activity.
1.4.2. Statistics
In this paper, we define statistical significance at the 5% level, i.e.
P < 0.050. Findings significant at the 5% level are discussed as
"demonstrated" or "shown". To find the best statistical model for
each measure, we used Stepwise Selection and Elimination as
described by Kleinbaum et al. [1998]. As they suggest, to account
better (statistically) for variation in the dependent variable (e.g., ΔHeart Rate), we included all variables in the statistical models
that were significant at the P < 0.100 level.
The analysis of differences in physiological reaction between the
Pit Room and the Training Room for all studies (Table 2) was
performed with a One-Sample T-Test. The correlations among
measures were performed using the Bivariate Pearson Correlation.
We analyzed order effects and the effects on presence of passive
haptics and frame rate with the Univariate General Linear Model,
using the repeated measure technique described in the SAS 6.0
Manual [SAS 1990]. This technique allows one to investigate the
effect of the condition while taking into account inter-subject
variation, order effects, and the effects of factors that change from
exposure to exposure such as loss of balance on the 1.5-inch ledge.
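A minimal sketch of the core room-difference and correlation computations behind Table 2 and Section 2.2, using SciPy rather than the SAS tools the authors used; all data values below are hypothetical placeholders.

```python
import numpy as np
from scipy import stats

# delta_hr: one value per exposure (mean HR in Pit Room minus mean HR in Training Room)
delta_hr = np.array([6.1, 4.8, 7.9, 5.2, 6.6])       # hypothetical values (BPM)
t, p = stats.ttest_1samp(delta_hr, popmean=0.0)      # one-sample t-test against zero
pct_positive = float(np.mean(delta_hr > 0)) * 100    # the "% > 0" column of Table 2

# correlation of a physiological measure with a reported measure (validity check)
reported = np.array([4, 3, 6, 4, 5])                 # hypothetical counts of "high" answers
corr, p_corr = stats.pearsonr(delta_hr, reported)
```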
Section 2 details our evaluation of physiological measures as a
surrogate for presence. In Section 3, we analyze physiological
reactions as between-subject measures. In Section 4, we
summarize the results as they pertain to interesting aspects of VEs.
2. Physiological measures of presence
In this section, we discuss the reliability, validity, sensitivity, and
objectivity of the physiological measures.
2.1. Reliability
Reliability is "the extent to which the same test applied on different
occasions ... yields the same result" [Sutherland 1996].
Specifically, we wanted to know whether the virtual environment
would consistently evoke similar physiological reactions as the
subject entered and remained in the Pit Room on several occasions.
Inconsistency could manifest itself as either a systematic increase
or decrease in reactions or in uncorrelated measures for repeated
exposure to the same VE. In the Multiple Exposures study the
condition was the same each time, so this was our purest measure
of reliability. We also hypothesized that in the Passive Haptics and
Frame Rate studies, regardless of condition, that the Pit Room
would also evoke similar physiological reactions on every
exposure. We hypothesized that simply being exposed to the Pit
Room would cause a greater physiological reaction than the
difference between "high" and "low" presence conditions.
Therefore, all three studies provide information on reliability.
As we hypothesized, the environment consistently evoked
physiological reactions over multiple exposures to the Pit Room.
When analyzing the data from all exposures, we found there were
significant physiological reactions to the Pit Room: heart rate and
skin conductance were significantly higher and skin temperature
was significantly lower in the Pit Room in all three studies. Heart
rate was higher in the Pit Room for 90% of the exposures to the
VE, skin conductance was higher for nearly 95%, and skin
temperature was lower for 90%. Table 2 shows the mean
difference, t-test, percentage of occurrences where the measure was
above zero, and the total count for each physiological measure for
each study. It shows results both for all exposures taken together,
which is the approach discussed for most of the paper, and for
analysis of the first exposure only, which we discuss in Section 4.
We also wanted to know whether the physiological reactions to the
environment would diminish over multiple exposures. Since our
hypotheses relied on presence in the VE evoking a stress reaction
over multiple exposures (2-12 exposures), we wanted to know
whether physiological reactions to the VE would drop to zero or
become unusably small due to habituation. In fact, ΔSkin Temperature, Reported Presence, Reported Behavioral Presence, and ΔHeart Rate each decreased with multiple exposures in every study (although this effect was not always statistically significant), and ΔSkin Conductance decreased in all but one study. None
decreased to zero, though, even after twelve exposures to the VE.
Table 3 shows the significant order effects.
A decrease in physiological reaction over multiple exposures
would not necessarily weaken validity, since the literature shows
that habituation diminishes the stress reactions to real heights and
other stressors [Abelson and Curtis, 1989; Andreassi 1995]. Since,
however, the reported presence measures, not just the physiological
stress measures, decrease over multiple exposures, the decreases
may not be due to habituation to the stressor; there may also be, as
Heeter hypothesized, a decrease in a VE's ability to evoke
presence as novelty wears off [Heeter 1992].
Orienting Effect. In general, each measure decreased after the
first exposure. Moreover, for each measure except ΔHeart Rate, there was a significant decrease after the first exposure in at least one of the studies (see Table 3). For physiological responses, this is called an orienting effect: a higher physiological reaction when one sees something novel [Andreassi 1995]. Though this term
traditionally refers to physiological reactions, we will also use the
term for the initial spike in the reported measures.
We attempted, with only partial success, to overcome the orienting
effect by exposing subjects to the environment once as part of their
orientation to the experimental setup and prior to the data-gathering
portion of the experiment. In the Passive Haptics and
Frame Rate studies, subjects entered the VE for approximately two
minutes and were shown both virtual rooms before the experiment started. These pre-exposures reduced but did not eliminate the orienting effects.

                    ΔHeart Rate   ΔSkin Conductance   ΔSkin Temperature   Reported Presence   Reported Behavioral Presence
                    (BPM)         (mSiemens)          (°F)                (Count "high")      (Count "high")
Multiple Exposures  NA            -0.7 (1st)          -0.9 (1st)          -0.7 (1st)          -
Passive Haptics     -             -                   -0.8 (1st)          -                   -0.4 (1st)
Frame Rate          -1.0 (Task)   -0.8 (1st)          -0.3 (1st)          -                   -0.2 (Task)
Table 3. Significant order effects for each measure in each study. "(1st)" indicates a decrease after the first exposure only. "(Task)" indicates a decrease over tasks on the same day. There was an order effect for each measure in at least one study. NA is "Not available". Significant results are listed at the P < 0.050 level (bold) and P < 0.100 (normal text). Full details given in [Meehan 2001].
2.2. Validity
Validity is "the extent to which a test or experiment genuinely
measures what it purports to measure" [Sutherland 1996]. The
concept of presence has been operationalized in questionnaires so
the validity of the physiological measures can be established by
investigating how well the physiological reactions correlate with
one or more of the questionnaire-based measures of presence. We
investigated their correlations with two such measures: Reported
Presence and Reported Behavioral Presence.
Reported Presence. Of the physiological measures, ΔHeart Rate correlated best with Reported Presence. There was a significant correlation in the Frame Rate study (corr. = 0.265, P < 0.005) and no correlation (corr. = 0.034, P = 0.743) in the Passive Haptics study. In the Multiple Exposures study, where ΔHeart Rate was not available, ΔSkin Conductance had the highest correlation with Reported Presence (corr. = 0.245, P < 0.010).
Reported Behavioral Presence. ΔHeart Rate had the highest correlation, and a significant one, with Reported Behavioral Presence in the Frame Rate study (corr. = 0.192, P < 0.050), and there was no correlation between the two (corr. = 0.004, P = 0.972) in the Passive Haptics study. In the Multiple Exposures study, where ΔHeart Rate was not measured, ΔSkin Conductance had the highest correlation with Reported Behavioral Presence (corr. = 0.290, P < 0.005).
The correlations of the physiological measures with the reported
measures give some support to their validity. The validity of ΔHeart Rate appears to be better established by its correlation with the well-established reported measures. There was also some support for the validity of ΔSkin Conductance from its correlation with reported measures.
Following hypothesized relationships. According to Singleton,
the validation process includes "examining the theory underlying
the concept being measured," and "the more evidence that supports
the hypothesized relationships [between the measure and the
underlying concept], the greater one's confidence that a particular
operational definition is a valid measure of the concept"
[Singleton et al. 1993]. We hypothesized that presence should
increase with frame rate and with the inclusion of the 1.5-inch
wooden ledge, since each of these conditions provides increased
sensory stimulation fidelity. As presented in the next section, our
physiological measures did increase with frame rate and with
inclusion of the 1.5-inch wooden ledge. This helps validate the
physiological reactions as measures of presence.
2.3. Sensitivity and multi-level sensitivity
Sensitivity is "the likelihood that an effect, if present, will be
detected" [Lipsey 1998]. The fact that the physiological measures
reliably distinguished between subjects' reactions in the Pit Room
versus the Training Room in every study assured us of at least a
minimal sensitivity. For example, heart rate increased an average
across all conditions of 6.3 beats / minute (BPM) in the Pit Room
(P < 0.001) compared to the Training Room in both the Passive
Haptics and Frame Rate studies. See Table 2 for a full account of
sensitivity of physiological measures to the difference between the
two rooms.
Acrophobic patients, when climbing to the second story of a fire
escape (with a handrail), waiting one minute, and looking down,
averaged an increase in heart rate of 13.4 BPM
[Emmelkamp and Felten 1985]. Our subjects were non-phobic,
and our height was virtual; so, we would expect, and did find, our
subjects' heart rate reactions to be lower but in the same direction.
Multi-level sensitivity. For guiding VE technological
development and for better understanding of the psychological
phenomena of VEs, we need a measure that reliably yields a higher
value as a VE is improved along some "goodness" dimension, i.e.,
is sensitive to multiple condition values. We distinguish this from
sensitivity as described above and call this multi-level sensitivity.
The Passive Haptics study provided us some evidence of the
measures' ability to discriminate between two "high presence"
situations. We have informally observed that walking into the Pit
Room causes a strong reaction in users, and this reaction seems
greater in magnitude than the differences in reaction to the Pit
Room between any two experimental conditions (e.g., with and
without the 1.5-inch wooden ledge). Therefore, we expected the
differences in reaction among the conditions to be less than the
differences between the two rooms. For example, in Passive
Haptics, we expected there to be a significant difference in the
physiological measures between the two conditions (with and
without the 1.5-inch wooden ledge), but expected it to be less than
the difference between the Training Room and Pit Room in the
"lower" presence condition (without the 1.5-inch wooden ledge).
For ΔHeart Rate, we did find a significant difference between the
two conditions of 2.7 BPM (P < 0.050), and it was less than the
inter-room difference for the without-ledge condition: 4.9 BPM.
See Figure 5. Figure 6 shows that the differences among the conditions in the Frame Rate study are smaller in magnitude than the differences between the two rooms.
Figure 5. ΔHeart Rate in the Passive Haptics study (change in beats/minute): 4.9 BPM (P < 0.001) without the physical ledge and 7.6 BPM (P < 0.001) with it; difference between conditions 2.7 BPM (P < 0.050).
In the Passive Haptics study, we investigated the multi-level
sensitivity of the measures by testing whether presence was
significantly higher with the 1.5-inch wooden ledge. Presence as
measured by each of ΔHeart Rate (2.7 BPM; P < 0.050), ΔSkin Conductance (0.8 mSiemens; P < 0.050), and Reported Behavioral
Presence (0.5 more "high" responses; P < 0.005) was significantly
higher with the wooden ledge. Reported Presence had a strong
trend in the same direction (0.5 more "high" responses; P = 0.060).
In the Frame Rate study, we investigated the multi-level sensitivity
of the measures by testing whether presence increased significantly
as graphic frame update rates increased. We hypothesized that
physiological reactions would increase monotonically with frame
rates of 10, 15, 20, and 30 FPS. They did not do exactly that (see
Figure 6). During the 10 FPS condition, there was an anomalous
reaction for all of the physiological measures and for Reported
Behavioral Presence. That is, at 10 FPS, subjects had higher
physiological reaction and reported more behavioral presence. We
believe that this reaction at 10 FPS was due to discomfort, added
lag, and reduced temporal fidelity while in the ostensibly
dangerous situation of walking next to a 20-foot pit
[Meehan 2001].
We also observed that subjects often lost their balance while trying
to inch to the edge of the wooden platform at this low frame rate;
their heart rate jumped an average of 3.5 BPM each time they lost
their balance (P < 0.050). Statistically controlling for these Loss of
Balance incidents improved the significance of the statistical model for ΔHeart Rate and brought the patterns of responses closer to the
hypothesized monotonic increase in presence with frame rate but
did not completely account for the increased physiological reaction
at 10 FPS. Loss of Balance was not significant in any other model.
Figure 6. ΔHeart Rate (change in beats/minute), after correcting for Loss of Balance, at 10, 15, 20, and 30 frames per second.
Beyond 10 FPS, ΔHeart Rate followed the hypothesis. After we statistically controlled for Loss of Balance, ΔHeart Rate significantly increased between 15 FPS and 30 FPS (3.2 BPM; P < 0.005) and between 15 FPS and 20 FPS (2.4 BPM; P < 0.050). There was also a non-significant increase between 20 FPS and 30 FPS (0.7 BPM; P = 0.483) and a non-significant decrease between 10 FPS and 15 FPS (1.6 BPM; P = 0.134). Reported Presence and Reported Behavioral Presence also increased with frame rate from
15-20-30 FPS, but with less distinguishing power.
These findings support the multi-level sensitivity of ΔHeart Rate.
2.4. Objectivity
The measure properties of reliability, validity, and multi-level
sensitivity are established quantitatively. Objectivity can only be
argued logically. We argue that physiological measures are
inherently better shielded from both subject bias and experimenter
bias than are either reported measures or measures based on
behavior observations. Reported measures are liable to subject bias: the subject reporting what he believes the experimenter wants. Post-experiment questionnaires are also vulnerable to
inaccurate recollection and to modification of impressions garnered
early in a run by impressions from later. Having subjects report
during the session, whether by voice report or by hand-held
instrument, intrudes on the very presence illusion one is trying to
measure. Behavioral measures, while not intrusive, are subject to
bias on the part of the experimenters who score the behaviors.
Physiological measures, on the other hand, are much harder for
subjects to affect, especially with no biofeedback. These measures
are not liable to experimenter bias, if instructions given to the
participants are properly limited and uniform. We read instructions
from a script in the Multiple Exposures study. We improved our
procedure in the later Passive Haptics and Frame Rate studies by
playing instructions from a compact disk player located in the real
laboratory and represented by a virtual radio in the VE.
2.5. Summary and discussion
The data presented here show that physiological reactions can be
used as reliable, valid, multi-level sensitive, and objective
measures of presence in stressful VEs. Of the physiological measures, ΔHeart Rate performed the best. There was also some support for ΔSkin Conductance.
ΔHeart Rate significantly differentiated between the Training
Room and the Pit Room, and although this reaction faded over
multiple exposures, it never decreased to zero. It correlated with
the well-established reported measure, the UCL questionnaire. It
distinguished between the presence and absence of passive haptics
and among frame rates at and above 15 FPS. As we argued above,
it is objective. In total, it satisfies all of the requirements for a
reliable, valid, multi-level sensitive, and objective measure of
presence in a stressful VE.
ΔSkin Conductance has some, but not all, of the properties we
desire in a measure of presence. In particular, it did not
differentiate among frame rates. We do not have a theory as to
why.
Although ΔHeart Rate satisfied the requirements for a presence measure for our VE, which evokes a strong reaction, it may not for
less stressful VEs. To determine whether physiological reaction
can more generally measure presence, a wider range of VEs must
be tested, including less stressful, non-stressful, and relaxing
environments. Investigation is currently under way to look at
physiological reaction in relaxing 3D Television environments
[Dillon et al. 2001].
The height reaction elicited by our VE could be due to vertigo,
fear, or other innate or learned response. The reactions are well
known in the literature and manifest as increased heart rate and
skin conductance and decreased skin temperature [Andreassi 1995;
Guyton 1986]. We hypothesized that the more present a user feels
in our stressful environment, the more physiological reaction the
user will exhibit. What causes this higher presence and higher
physiological reaction? Is it due to a more realistic flow of visual
information? Is it due to more coherence between the visual and
haptic information? Is it due to the improved visual realism? All
of these are likely to improve presence. We cannot, however,
answer these questions definitively. We can say, though, that we
have empirically shown that physiological reaction and reported
presence are both higher when we present a "higher presence" VE.
Whatever it is that causes the higher reported presence and
physiological reaction, it causes more as we improve the VE.
An additional desirable aspect of a measure is ease of use in the
experimental setting. We did not record the time needed for each
measure, but after running many subjects we can say with some
confidence that use of the physiological monitoring and of the
presence questionnaire each added approximately the same amount
of time to the experiment. It took about five minutes per exposure
to put on and take off the physiological sensors. It took about an
extra minute at the beginning and end of each set of exposures to
put on and take off the ECG sensor; it was left on between
exposures on the same day. It took subjects about five minutes to
fill out the UCL Presence Questionnaire. It took some training for
experimenters to learn the proper placement of the physiological
equipment on the hands and chest of the subject; thirty minutes
would probably be sufficient.
Another aspect of ease of use is the amount of difficulty
participants have with the measure and to what extent the measure,
if concurrent with an experimental task, interferes with the task.
No subjects reported difficulties with the questionnaires. Only
about one in ten subjects reported noticing the physiological
monitoring equipment on the hands during the VE exposures. Our
experiment, though, was designed to use only the right hand,
keeping the sensor-laden left hand free from necessary activity.
No subjects reported noticing the ECG sensor once it was attached
to the chest. In fact, many subjects reported forgetting about the
ECG electrodes when prompted to take them off at the end of the
day. There are groups investigating less cumbersome equipment,
which would probably improve ease of use, including a
physiological monitoring system that subjects wear like a shirt
[Cowings et al. 2001]. Overall, questionnaires and physiological
monitoring were both easy to use and non-intrusive.
3. Physiological reactions as between-subjects measures
We conducted all of the studies as within-subjects to avoid the
variance due to natural human differences. That is, each subject
experienced all of the conditions for the study in which she
participated. This allowed us to look at relative differences in
subject reaction among conditions and to overcome the differences
among subjects in reporting and physiological reaction.
The UCL questionnaire has been used successfully between-subjects
[Usoh et al. 1999]. We suspected, however, that
physiological reaction would not perform as well if taken between-subjects. We expected the variance among subjects would mask,
at least in part, the differences in physiological reaction evoked by
the different conditions. We investigated this hypothesis by analyzing the data using only the first task for each subject (eliminating order effects) and treating the reduced data sets as
between-subjects experiments. That is, we treat each experiment
as if only the first task for each subject was run. This means that
the analysis uses only 10 data points (10 subjects first exposure
only) for the Multiple Exposures study, 52 data points for the
Passive Haptics study, and 33 data points for the Frame Rate study.
Reliability between-subjects: Physiological reaction in the Pit
Room. Even between subjects, we expected that there would be a
consistent physiological reaction to the Pit Room, since we
expected such a reaction for every exposure to the VE. We
expected the significance to be lower, however, because of the
reduced size of the data set. We found exactly that. The right half
of Table 2 shows the values of the physiological measures
averaged across conditions for the between-subjects analysis. As
compared to the full data set, the between-subjects data have lower
significance values, but subjects still have strong physiological
reactions to the Pit Room. Table 2 demonstrates that the
physiological orienting effects caused the averages for the first
exposures to be higher than for the full data set.
Validity between-subjects: Correlation with established
measures. We expected correlations with the reported measures
to be lower when taken between subjects since there were fewer
data points and individual differences in physiological reaction and
reporting would confound the correlations. This was the case. No
physiological measure correlated significantly with any reported
measure when analyzing between-subjects.
Multi-level sensitivity between-subjects: Differentiating among
presence conditions. We expected inter-subject variation in
physiological reaction to mask the differences in physiological
reactions evoked by the presence conditions (e.g., various frame
rates). Contrary to this expectation, however, we found strong
trends in the physiological measures among conditions in both the
Passive Haptics and Frame Rate studies. (The condition was not
varied in the Multiple Exposures study.)
In the Passive Haptics study, ΔHeart Rate and ΔSkin Conductance both varied in the expected direction, non-significantly (3.3 BPM, P = 0.097; 1.0 mSiemens, P = 0.137, respectively).
In the Frame Rate study, ΔHeart Rate followed hypothesized patterns, but ΔSkin Conductance did not. After the anomalous reaction at 10 FPS (as in the full data set; compare Figures 6 and 7), ΔHeart Rate differentiated among presence conditions: at 30 FPS it
was higher than at 15 FPS, and this difference was nearly
significant (7.2 BPM; P = 0.054).
Overall, ΔHeart Rate shows promise as a between-subjects measure of presence. Though it did not correlate well with the reported measures (between-subjects), it did differentiate among the conditions with some statistical power in Passive Haptics and Frame Rate. ΔSkin Conductance did not show as much promise as
a between-subjects measure. For more discussion of physiological
reactions as between-subjects measures of presence, see
[Meehan 2001].
Figure 7. Between-subjects analysis: ΔHeart Rate (change in beats/minute) at 10, 15, 20, and 30 frames per second.
4. VE Effectiveness results
Above, we described the experiments as they related to the testing of the physiological presence measures; below, we discuss each experiment with respect to the aspect of VEs it investigated.
Effect of Multiple Exposures on Presence. As described in
Section 1.4.1, ten users went through the same VE twelve times (over four days) in order to study whether the presence-inducing power of a VE declines, or becomes unusably small, over multiple
exposures. We did find significant decreases in each presence
measure (reported and physiological) in either this experiment or
one of the subsequent two experiments (see Table 3). However,
none of the measures decreased to zero nor did any become
unusably small. The findings support our hypothesis that all
presence measures decrease over multiple exposures to the same
VE, but not to zero.
Effect of Passive Haptics on Presence. Our hypothesis was that
supplementing a visual-aural VE with even rudimentary, low-fidelity
passive haptics cues significantly increases presence. This
experiment was only one of a set of studies investigating the
passive haptics hypothesis. The detailed design, results, and
discussion for the set are reported elsewhere [Insko 2001].
We found significant support for the hypothesis in that, with the inclusion of the 1.5-inch ledge, presence as measured by ΔHeart Rate, Reported Behavioral Presence, and ΔSkin Conductance was
significantly higher at the P < 0.05 level. Reported Presence also
had a strong trend (P < 0.10) in the same direction.
Effect of Frame Rate on Presence. Our hypothesis was that as
frame rate increases from 10, 15, 20, 30 frames/second, presence
increases. For frame rates of 15 frames/second and above, the
hypothesis was largely confirmed. It was confirmed with statistical
significance for 15 to 20 FPS and 15 to 30 FPS. The difference from 20 to 30 FPS, though not statistically significant, was in the same direction. 10
FPS gave anomalous results on all measures except Reported
Presence, which increased monotonically with frame rate with no
statistical significance.
Future Work
Given a compelling VE and a sensitive, quantitative presence
measure, the obvious strategy is to degrade quantitative VE quality
parameters in order to answer the questions: What makes a VE
compelling? What are the combinations of minimum system
characteristics to achieve this?
For example, we would like to study the effect of
Latency
Self-avatar fidelity
Aural localization
Visual Detail
Lighting Realism
Realistic physics in interactions with objects
Interactions with other people or agents
Then we hope to begin to establish trade-offs for presence evoked:
Is it more important to have latency below 50 ms or frame rate
above 20 FPS?
Additionally, we must eliminate the cables that tether subjects to
the monitoring, tracking, and rendering equipment. Our subjects
reported this encumbrance as the greatest cause of breaks in
presence.
Acknowledgements
We would like to thank the University of North Carolina (UNC)
Graduate School, the Link Foundation, and the National Institutes
of Health National Center for Research Resources (Grant Number
P41 RR 02170) for funding this project. We would like to thank
the members of the Effective Virtual Environments group, the
UNC Computer Science Department, and Dr. McMurray of the
UNC Applied Physiology Department. Without their hard work,
none of this research would have been possible. We would like to
thank Drs. Slater, Usoh, and Steed of the University College of
London who built much of the foundation for this work. We
would also like to thank the reviewers for their thoughtful
comments and suggestions.
References
Abelson, J. L. and G. C. Curtis (1989). Cardiac and neuroendocrine
responses to exposure therapy in height phobics. Behavior Research and
Therapy, 27(5): 561-567.
Andreassi, J. L. (1995). Psychophysiology: Human behavior and
physiological response. Hillsdale, N.J., Lawrence Erlbaum Associates.
Barfield, W., T. Sheridan, D. Zeltzer and M. Slater (1995). Presence and
performance within virtual environments. In W. Barfield and T.
Furness, Eds., Virtual environments and advanced interface design.
London, Oxford University Press.
Cowings, P., S. Jensen, D. Bergner and W. Toscano (2001). A lightweight
ambulatory physiological monitoring system. NASA Ames, California.
Dillon, C., E. Keogh, J. Freeman and J. Davidoff (2001). Presence: Is your
heart in it? 4th Int. Wkshp. on Presence, Philadelphia.
Ellis, S. R. (1996). Presence of mind: A reaction to Thomas Sheridan's
"Further musings on the psychophysics of presence". Presence:
Teleoperators and Virtual Environments, 5(2): 247-259.
Emmelkamp, P. and M. Felten (1985). The process of exposure in vivo:
cognitive and physiological changes during treatment of acrophobia.
Behavior Research and Therapy, 23(2): 219.
Freeman, J., S. E. Avons, D. Pearson, D. Harrison and N. Lodge (1998).
Behavioral realism as a metric of presence. 1st Int. Wkshp. on Presence.
Guyton, A. C. (1986). Basic characteristics of the sympathetic and
parasympathetic function. In Textbook of Medical Physiology, 688-697.
Philadelphia, W.B. Saunders Company.
Heeter, C. (1992). Being there: The subjective experience of presence.
Presence: Teleoperators and Virtual Environments, 1: 262-271.
IJsselsteijn, W. A. and H. d. Ridder (1998). Measuring temporal variations
in presence. 1st Int. Wkshp. on Presence.
Insko, B. (2001). Passive haptics significantly enhance virtual
environments, Doctoral Dissertation. Computer Science. University of
North Carolina, Chapel Hill, NC, USA.
Kleinbaum, D., L. Kupper, K. Muller and A. Nizam (1998). Applied
regression analysis and other multivariate methods.
Lipsey, M. W. (1998). Design sensitivity: Statistical power for applied
experimental research. In L. Brickman and D. J. Rog, Eds., Handbook
of applied social research methods, 39-68. Thousand Oaks, California,
Sage Publications, Inc.
Lombard, M. and T. Ditton (1997). At the heart of it all: The concept of
presence. Journal of Computer Mediated Communication, 3(2).
McMurray, D. R. (1999). Director of Applied Physiology lab, University of
North Carolina. Personal Communication.
Meehan, M. (2001). Physiological reaction as an objective measure of
presence in virtual environments. Doctoral Dissertation. Computer
Science. University of North Carolina, Chapel Hill, NC, USA.
Regenbrecht, H. T. and T. W. Schubert (1997). Measuring presence in
virtual environments. In Proc. of Human Computer Interface
International, San Francisco.
SAS (1990). SAS/ STAT User's Guide, Version 6, Fourth Edition. Cary,
NC, USA, SAS Institute Inc.
Schubert, T., F. Friedmann and H. Regenbrecht (1999). Embodied presence
in virtual environments. In R. Paton and I. Neilson, Eds., Visual
Representations and Interpretations. London, Springer-Verlag.
Sheridan, T. B. (1996). Further musings on the psychophysics of presence.
Presence: Teleoperators and Virtual Environments, 5(2): 241-246.
Singleton, R. A., B. C. Straits and M. M. Straits (1993). Approaches to
Social Research. New York, Oxford University Press.
Slater, M., M. Usoh and A. Steed (1994). Depth of presence in virtual
environments. Presence: Teleoperators and Virtual Environments, 3(2):
130-144.
Slater, M., M. Usoh and A. Steed (1995). Taking steps: The influence of a
walking technique on presence in virtual reality. ACM Transactions on
Computer Human Interaction (TOCHI), 2(3): 201-219.
Slater, M. (1999). Measuring Presence: A Response to the Witmer and
Singer Presence Questionnaire. Presence: Teleoperators and Virtual
Environments, 8(5): 560-565.
Slonim, N. B., Ed. (1974). Environmental Physiology. Saint Louis. The C.
V. Mosby Company.
Sutherland, S. (1996). The international dictionary of psychology. New
York, The Crossroads Publishing Company.
Usoh, M., K. Arthur, M. Whitton, R. Bastos, A. Steed, M. Slater and F.
Brooks (1999). Walking > walking-in-place > flying in virtual
environments. In Proc. of ACM SIGGRAPH 99. ACM Press/ ACM
SIGGRAPH.
Weiderhold, B. K., R. Gervirtz and M. D. Wiederhold (1998). Fear of
flying: A case report using virtual reality therapy with physiological
monitoring. CyberPsychology and Behavior, 1(2): 97-104.
Witmer, B. G. and M. J. Singer (1998). Measuring presence in virtual
environments: A presence questionnaire. Presence: Teleoperators and
Virtual Environments, 7(3): 225-240.
| presence;Haptics;measurement;Frame Rate;virtual environment;Presence;Physiology |
15 | A New Statistical Formula for Chinese Text Segmentation Incorporating Contextual Information | A new statistical formula for identifying 2-character words in Chinese text, called the contextual information formula, was developed empirically by performing stepwise logistic regression using a sample of sentences that had been manually segmented. Contextual information in the form of the frequency of characters that are adjacent to the bigram being processed as well as the weighted document frequency of the overlapping bigrams were found to be significant factors for predicting the probablity that the bigram constitutes a word. Local information (the number of times the bigram occurs in the document being segmented) and the position of the bigram in the sentence were not found to be useful in determining words. The contextual information formula was found to be significantly and substantially better than the mutual information formula in identifying 2-character words. The method can also be used for identifying multi-word terms in English text. | INTRODUCTION
Chinese text is different from English text in that there is no
explicit word boundary. In English text, words are separated by
spaces. Chinese text (as well as text of other Oriental languages)
is made up of ideographic characters, and a word can comprise
one, two or more such characters, without explicit indication
where one word ends and another begins.
This has implications for natural language processing and
information retrieval with Chinese text. Text processing
techniques that have been developed for Western languages deal
with words as meaningful text units and assume that words are
easy to identify. These techniques may not work well for Chinese
text without some adjustments. To apply these techniques to
Chinese text, automatic methods for identifying word boundaries
accurately have to be developed. The process of identifying word
boundaries has been referred to as text segmentation or, more
accurately, word segmentation.
Several techniques have been developed for Chinese text
segmentation. They can be divided into:
1.
statistical methods, based on statistical properties and
frequencies of characters and character strings in a corpus
(e.g. [13] and [16]).
2.
dictionary-based methods, often complemented with
grammar rules. This approach uses a dictionary of words to
identify word boundaries. Grammar rules are often used to
resolve conflicts (choose between alternative segmentations)
and to improve the segmentation (e.g. [4], [8], [19] and [20]).
3.
syntax-based methods, which integrate the word
segmentation process with syntactic parsing or part-of-speech
tagging (e.g. [1]).
4.
conceptual methods, that make use of some kind of semantic
processing to extract information and store it in a knowledge
representation scheme. Domain knowledge is used for
disambiguation (e.g. [9]).
Many researchers use a combination of methods (e.g. [14]).
The objective of this study was to empirically develop a
statistical formula for Chinese text segmentation. Researchers
have used different statistical methods in segmentation, most of
which were based on theoretical considerations or adopted from
other fields. In this study, we developed a statistical formula
empirically by performing stepwise logistic regression using a
sample of sentences that had been manually segmented. This
paper reports the new formula developed for identifying 2-character
words, and the effectiveness of this formula compared
with the mutual information formula.
This study has the following novel aspects:
The statistical formula was derived empirically using
regression analysis.
The manual segmentation was performed to identify
meaningful
words rather than simple words.
Meaningful
words include phrasal words and multi-word
terms.
In addition to the relative frequencies of bigrams and
characters often used in other studies, our study also
investigated the use of document frequencies and weighted
document frequencies. Weighted document frequencies are
similar to document frequencies but each document is
weighted by the square of the number of times the character
or bigram occurs in the document.
Contextual information was included in the study. To predict
whether the bigram BC in the character string
A B C D
constitutes a word, we investigated whether the
frequencies for AB, CD, A and D should be included in the
formula.
Local frequencies were included in the study. We
investigated character and bigram frequencies within the
document in which the sentence occurs (i.e. the number of
times the character or bigram appears in the document being
segmented).
We investigated whether the position of the bigram (at the
beginning of the sentence, before a punctuation mark, or after
a punctuation mark) had a significant effect.
We developed a segmentation algorithm to apply the
statistical formula to segment sentences and resolve conflicts.
In this study, our objective was to segment text into
meaningful words
rather than
simple words. A simple word is the smallest independent unit of a sentence that has
meaning on its own. A meaningful word can be a simple word or
a compound word comprising 2 or more simple words
depending on the context. In many cases, the meaning of a
compound word is more than just a combination of the meanings
of the constituent simple words, i.e. some meaning is lost when
the compound word is segmented into simple words.
Furthermore, some phrases are used so often that native speakers
perceive them and use them as a unit. Admittedly, there is some
subjectivity in the manual segmentation of text. But the fact that
statistical models can be developed to predict the manually
segmented words substantially better than chance indicates some
level of consistency in the manual segmentation.
The problem of identifying meaningful words is not limited to
Chinese and oriental languages. Identifying multi-word terms is
also a problem in text processing with English and other Western
languages, and researchers have used the mutual information
formula and other statistical approaches for identifying such
terms (e.g. [3], [6] and [7]).
PREVIOUS STUDIES
There are few studies using a purely statistical approach to
Chinese text segmentation. One statistical formula that has been
used by other researchers (e.g. [11] and [16]) is the mutual
information formula. Given a character string A B C D, the mutual information for the bigram BC is given by the formula:
MI(BC) = log2 [freq(BC) / (freq(B) * freq(C))] = log2 freq(BC) - log2 freq(B) - log2 freq(C)
where freq refers to the relative frequency of the character or
bigram in the corpus (i.e. the number of times the character or
bigram occurs in the corpus divided by the number of characters
in the corpus).
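A minimal sketch of computing this mutual information score for every bigram in a corpus; the corpus representation (a list of document strings) and the function name are our assumptions, not the authors' code.

```python
import math
from collections import Counter

def mutual_information(corpus_docs):
    """Map each bigram BC to log2( freq(BC) / (freq(B) * freq(C)) ),
    where freq is the relative frequency in the corpus."""
    char_counts, bigram_counts, n_chars = Counter(), Counter(), 0
    for doc in corpus_docs:
        n_chars += len(doc)
        char_counts.update(doc)
        bigram_counts.update(doc[i:i + 2] for i in range(len(doc) - 1))
    mi = {}
    for bg, c in bigram_counts.items():
        f_bc = c / n_chars
        f_b = char_counts[bg[0]] / n_chars
        f_c = char_counts[bg[1]] / n_chars
        mi[bg] = math.log2(f_bc / (f_b * f_c))
    return mi
```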
Mutual information is a measure of how strongly the two
characters are associated, and can be used as a measure of how
likely the pair of characters constitutes a word. Sproat & Shih
[16] obtained recall and precision values of 94% using mutual
information to identify words. This study probably segmented
text into simple words rather than meaningful words. In our
study, text was segmented into meaningful words and we
obtained much poorer results for the mutual information
formula.
Lua [12] and Lua & Gan [13] applied information theory to the
problem of Chinese text segmentation. They calculated the
information content of characters and words using the
information entropy formula I = -log2 P, where P is the
probability of occurrence of the character or word. If the
information content of a character string is less than the sum of
the information content of the constituent characters, then the
character string is likely to constitute a word. The formula for
calculating this
loss
of information content when a word is
formed is identical to the mutual information formula. Lua &
Gan [13] obtained an accuracy of 99% (measured in terms of the
number of errors per 100 characters).
Tung & Lee [18] also used information entropy to identify
unknown words in a corpus. However, instead of calculating the
entropy value for the character string that is hypothesized to be a
word (i.e. the candidate word), they identified all the characters
that occurred to the left of the candidate word in the corpus. For
each left character, they calculated the probability and entropy
value for that character given that it occurs to the left of the
candidate word. The same is done for the characters to the right
of the candidate word. If the sum of the entropy values for the
left characters and the sum of the entropy values for the right
characters are both high, then the candidate word is considered
likely to be a word. In other words, a character string is likely to
be a word if it has several different characters to the left and to
the right of it in the corpus, and none of the left and right
characters predominate (i.e. not strongly associated with the
character string).
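A rough sketch of the left/right (boundary) entropy idea described above; the function and variable names are ours, not Tung & Lee's implementation.

```python
import math
from collections import Counter

def boundary_entropy(neighbor_chars):
    """Entropy of the characters observed adjacent to a candidate word in the
    corpus. High entropy on both sides suggests the candidate stands alone as
    a word rather than being bound to a particular neighboring character."""
    counts = Counter(neighbor_chars)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# left_chars / right_chars: every character seen immediately before / after the
# candidate word in the corpus (hypothetical inputs); a candidate is judged
# word-like when both boundary_entropy(left_chars) and boundary_entropy(right_chars) are high.
```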
Ogawa & Matsuda [15] developed a statistical method to
segment Japanese text. Instead of attempting to identify words
directly, they developed a formula to estimate the probability that
a bigram straddles a word boundary. They referred to this as the
segmentation probability. This was complemented with some
syntactic information about which class of characters could be
combined with which other class.
All the above mathematical formulas used for identifying words
and word boundaries were developed based on theoretical
considerations and not derived empirically.
Other researchers have developed statistical methods to find the
best segmentation for the whole sentence rather than focusing on
identifying individual words. Sproat et al. [17] developed a
stochastic finite state model for segmenting text. In their model,
a word dictionary is represented as a weighted finite state
transducer. Each weight represents the estimated cost of the
word (calculated using the negative log probability). Basically,
the system selects the sentence segmentation that has the
smallest total cost. Chang & Chen [1] developed a method for
word segmentation and part-of-speech tagging based on a first-order
hidden Markov model.
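A minimal sketch of the smallest-total-cost idea behind Sproat et al.'s approach, written here as a simple dynamic program over word costs (negative log probabilities); the 4-character word limit, floor probability, and function name are our assumptions and not details from [17].

```python
import math

def min_cost_segmentation(sentence, word_probs, floor=1e-8):
    """Choose the segmentation with the smallest total cost, where the cost of a
    word is its negative log probability. Unknown single characters get a small
    floor probability so that every sentence can be segmented."""
    n = len(sentence)
    best = [0.0] + [float("inf")] * n       # best[i]: min cost of segmenting sentence[:i]
    back = [0] * (n + 1)                    # back[i]: start index of the last word
    for i in range(1, n + 1):
        for j in range(max(0, i - 4), i):   # candidate words of up to 4 characters
            w = sentence[j:i]
            p = word_probs.get(w, floor if len(w) == 1 else None)
            if p is None:
                continue
            cost = best[j] - math.log(p)
            if cost < best[i]:
                best[i], back[i] = cost, j
    words, i = [], n                        # recover the word boundaries
    while i > 0:
        words.append(sentence[back[i]:i])
        i = back[i]
    return list(reversed(words))
```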
RESEARCH METHOD
The purpose of this study was to empirically develop a statistical
formula for identifying 2-character words as well as to
investigate the usefulness of various factors for identifying the
words. A sample of 400 sentences was randomly selected from 2
months (August and September 1995) of news articles from the
Xin Hua News Agency, comprising around 2.3 million characters.
The sample sentences were manually segmented. The
segmentation rules described in [10] were followed fairly closely.
More details of the manual segmentation process, especially with
regard to identifying meaningful words, will be given in [5].
300 sentences were used for model building, i.e. using regression
analysis to develop a statistical formula. 100 sentences were set
aside for model validation to evaluate the formula developed in
the regression analysis. The sample sentences were broken up
into overlapping bigrams. In the regression analysis, the
dependent variable was whether a bigram was a two-character
word according to the manual segmentation. The independent
variables were various corpus statistics derived from the corpus
(2 months of news articles).
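A sketch of how a manually segmented sentence might be turned into labeled overlapping bigrams for the regression; the input format (a list of segmented words) is an assumption, not the paper's data format.

```python
def labeled_bigrams(segmented_sentence):
    """segmented_sentence: a manually segmented sentence as a list of words,
    e.g. ['AB', 'C', 'DE']. Returns (bigram, label) pairs for every overlapping
    bigram of the raw sentence, with label 1 iff the bigram is a 2-character word."""
    text = "".join(segmented_sentence)
    word_starts = {}                       # start index -> word beginning there
    pos = 0
    for w in segmented_sentence:
        word_starts[pos] = w
        pos += len(w)
    pairs = []
    for i in range(len(text) - 1):
        bg = text[i:i + 2]
        label = 1 if word_starts.get(i) == bg else 0
        pairs.append((bg, label))
    return pairs
```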
The types of frequency information investigated were:
1. Relative frequency of individual characters and bigrams
(character pairs) in the corpus, i.e. the number of times the
character or bigram occurs in the corpus divided by the total
number of characters in the corpus.
2. Document frequency of characters and bigrams, i.e. the
number of documents in the corpus containing the character
or bigram divided by the total number of documents in the
corpus.
3. Weighted document frequency of characters and bigrams. To
calculate the weighted document frequency of a character
string, each document containing the character string is
assigned a score equal to the square of the number of times
the character string occurs in the document. The scores for all
the documents containing the character string are then
summed and divided by the total number of documents in the
corpus to obtain the weighted document frequency for the
character string. The rationale is that if a character string
occurs several times within the same document, this is
stronger evidence that the character string constitutes a word,
than if the character string occurs once in several documents.
Two or more characters can occur together by chance in
several different documents. It is less likely for two
characters to occur together several times within the same
document by chance.
4. Local frequency in the form of within-document frequency of
characters and bigrams, i.e. the number of times the character
or bigram occurs in the document being segmented.
5. Contextual information. Frequency information of characters adjacent to a bigram is used to help determine whether the bigram is a word. For the character string A B C D, to determine whether the bigram BC is a word, frequency information for the adjacent characters A and D, as well as for the overlapping bigrams AB and CD, was considered.
6. Positional information. We studied whether the position of a
character string (at the beginning, middle or end of a
sentence) gave some indication of whether the character
string was a word.
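The frequency statistics in items 1-4 above can all be computed in a single pass over the corpus. The sketch below is our illustration (not code from the study) and assumes each document is available as a plain string of characters with markup and whitespace removed:

from collections import Counter

def corpus_statistics(documents):
    # documents: list of strings, one per news article.
    freq = Counter()        # corpus occurrence counts per character / bigram
    docfreq = Counter()     # number of documents containing the unit
    docfreq_wt = Counter()  # sum over documents of (within-document count)^2
    total_chars = 0
    for doc in documents:
        total_chars += len(doc)
        units = Counter()
        for i, ch in enumerate(doc):
            units[ch] += 1
            if i + 1 < len(doc):
                units[doc[i:i + 2]] += 1
        for unit, n in units.items():
            freq[unit] += n
            docfreq[unit] += 1
            docfreq_wt[unit] += n * n
    n_docs = float(len(documents))
    rel_freq = {u: n / total_chars for u, n in freq.items()}
    doc_freq = {u: n / n_docs for u, n in docfreq.items()}
    weighted_doc_freq = {u: n / n_docs for u, n in docfreq_wt.items()}
    return rel_freq, doc_freq, weighted_doc_freq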
The statistical model was developed using forward stepwise
logistic regression, using the Proc Logistic function in the SAS
v.6.12 statistical package for Windows. Logistic regression is an
appropriate regression technique when the dependent variable is
binary valued (takes the value 0 or 1). The formula developed
using logistic regression predicts the probability (more
accurately, the log of the odds) that a bigram is a meaningful
word.
In the stepwise regression, the threshold for a variable to enter
the model was set at the 0.001 significance level and the
threshold for retaining a variable in the model was set at 0.01. In
addition, preference was given to relative frequencies and local
frequencies because they are easier to calculate than document
frequencies and weighted document frequencies. Also, relative
frequencies are commonly used in previous studies.
Furthermore, a variable was entered in a model only if it gave a
noticeable improvement to the effectiveness of the model. During
regression analysis, the effectiveness of the model was estimated
using the measure of concordance that was automatically output
by the SAS statistical program. A variable was accepted into the
model only if the measure of concordance improved by at least
2% when the variable was entered into the model.
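The study fitted the model with the Proc Logistic function in SAS. Purely as an illustration of the selection procedure just described, the sketch below reproduces the forward step in Python using statsmodels and scikit-learn; the function name and data layout are our own. The concordance reported by SAS corresponds to the c statistic (the area under the ROC curve); the 0.01 retention test and the preference given to cheaper variables are omitted for brevity.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def forward_stepwise_logit(X, y, candidates, enter_p=0.001, min_c_gain=0.02):
    # X: dict of variable name -> 1-D numpy array; y: 0/1 array (bigram is a word).
    selected, best_c = [], 0.5
    while True:
        best = None
        for name in candidates:
            if name in selected:
                continue
            cols = sm.add_constant(np.column_stack([X[v] for v in selected + [name]]))
            fit = sm.Logit(y, cols).fit(disp=0)
            p_new = fit.pvalues[-1]                    # p-value of the newly entered variable
            c_stat = roc_auc_score(y, fit.predict(cols))
            if p_new <= enter_p and c_stat >= best_c + min_c_gain:
                if best is None or c_stat > best[1]:
                    best = (name, c_stat)
        if best is None:
            return selected
        selected.append(best[0])
        best_c = best[1]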
We evaluated the accuracy of the segmentation using measures of recall and precision, defined in this context as follows:

Recall = (No. of 2-character words identified in the automatic segmentation that are correct) / (No. of 2-character words identified in the manual segmentation)

Precision = (No. of 2-character words identified in the automatic segmentation that are correct) / (No. of 2-character words identified in the automatic segmentation)
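In code form, with each segmentation represented as a set of identified 2-character word occurrences (a minimal sketch; the occurrence identifiers are assumed to encode sentence and position):

def recall_precision(auto_words, manual_words):
    # auto_words / manual_words: sets of (sentence_id, position) pairs marking
    # the 2-character words found by the automatic and manual segmentations.
    correct = len(auto_words & manual_words)
    recall = correct / len(manual_words) if manual_words else 0.0
    precision = correct / len(auto_words) if auto_words else 0.0
    return recall, precision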
STATISTICAL FORMULAS DEVELOPED
The formula that was developed for 2-character words is as
follows. Given a character string A B C D, the association strength for bigram BC is:

Assoc(BC) = 0.35 * log2 freq(BC) + 0.37 * log2 freq(A) + 0.32 * log2 freq(D) - 0.36 * log2 docfreq_wt(AB) - 0.29 * log2 docfreq_wt(CD) + 5.91

where freq refers to the relative frequency in the corpus and docfreq_wt refers to the weighted document frequency. We refer to
this formula as the contextual information formula. More details
of the regression model are given in Table 1.
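Read operationally, the formula scores the bigram BC inside the window A B C D from precomputed corpus statistics (relative frequencies and weighted document frequencies keyed by character string). The paper does not spell out in this section how zero frequencies are handled, for example when A or D is suppressed at a punctuation boundary, so the sketch below simply clamps them to a small epsilon; that choice is ours.

import math

def contextual_information(freq, docfreq_wt, a, b, c, d, eps=1e-12):
    # freq: relative corpus frequency of characters and bigrams.
    # docfreq_wt: weighted document frequency of bigrams.
    # a and d may be passed as '' at sentence or punctuation boundaries; the
    # eps floor then stands in for the paper's zero-frequency convention.
    lg = lambda key, table: math.log2(max(table.get(key, 0.0), eps))
    return (0.35 * lg(b + c, freq)
            + 0.37 * lg(a, freq)
            + 0.32 * lg(d, freq)
            - 0.36 * lg(a + b, docfreq_wt)
            - 0.29 * lg(c + d, docfreq_wt)
            + 5.91)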
The formula indicates that contextual information is helpful in
identifying word boundaries. A in the formula refers to the
character preceding the bigram that is being processed, whereas
D is the character following the bigram. The formula indicates
that if the character preceding and the character following the
bigram have high relative frequencies, then the bigram is more
likely to be a word.
Contextual information involving the weighted document
frequency was also found to be significant. The formula indicates
that if the overlapping bigrams AB and CD have high weighted
document frequencies, then the bigram BC is less likely to be a
word. We tried replacing the weighted document frequencies
with the unweighted document frequencies as well as the relative
frequencies. These were found to give a lower concordance score.
Even with docfreq(AB) and docfreq(CD) in the model, docfreq_wt(AB) and docfreq_wt(CD) were found to improve the model
significantly. However, local frequencies were surprisingly not
found to be useful in predicting 2-character words.
We investigated whether the position of the bigram in the
sentence was a significant factor. We included a variable to
indicate whether the bigram occurred just after a punctuation
mark or at the beginning of the sentence, and another variable to
indicate whether the bigram occurred just before a punctuation
mark or at the end of a sentence. The interactions between each of the position variables and the various relative frequencies were not significant. However, it was found that whether or not
the bigram was at the end of a sentence or just before a
punctuation mark was a significant factor. Bigrams at the end of
a sentence or just before a punctuation mark tend to be words.
However, since this factor did not improve the concordance score
by 2%, the effect was deemed too small to be included in the
model.
It should be noted that the contextual information used in the
study already incorporates some positional information. The
frequency of character A (the character preceding the bigram)
was given the value 0 if the bigram was preceded by a
punctuation mark or was at the beginning of a sentence.
Similarly, the frequency of character D (the character following
the bigram) was given the value 0 if the bigram preceded a
punctuation mark.
We also investigated whether the model would be different for
high and low frequency words. We included in the regression
analysis the interaction between the relative frequency of the
bigram and the other relative frequencies. The interaction terms
were not found to be significant. Finally, it is noted that the
coefficients for the various factors are nearly the same, hovering
around 0.34.
4.2 Improved Mutual Information Formula
In this study, the contextual information formula (CIF) was
evaluated by comparing it with the mutual information formula
(MIF). We wanted to find out whether the segmentation results
using the CIF were better than the segmentation results using the
MIF.
In the CIF model, the coefficients of the variables were
determined using regression analysis. If CIF was found to give
better results than MIF, it could be because the coefficients for
the variables in CIF had been determined empirically and not
because of the types of variables in the formula. To reject this
explanation, regression analysis was used to determine the
coefficients for the factors in the mutual information formula.
We refer to this new version of the formula as the improved
mutual information formula.
Given a character string A B C D, the improved mutual information formula is:

Improved MI(BC) = 0.39 * log2 freq(BC) - 0.28 * log2 freq(B) - 0.23 * log2 freq(C) - 0.32
The coefficients are all close to 0.3. The formula is thus quite
similar to the mutual information formula, except for a
multiplier of 0.3.
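For comparison, the two baseline scores can be written the same way. The first function is the standard pointwise mutual information, which is how we read the paper's MIF; the second applies the regression coefficients just reported. Both are sketches, with zero frequencies clamped to a small epsilon (our assumption).

import math

def mutual_information(freq, b, c, eps=1e-12):
    # Pointwise mutual information of the bigram BC.
    p_bc = max(freq.get(b + c, 0.0), eps)
    p_b = max(freq.get(b, 0.0), eps)
    p_c = max(freq.get(c, 0.0), eps)
    return math.log2(p_bc / (p_b * p_c))

def improved_mutual_information(freq, b, c, eps=1e-12):
    # Regression-weighted variant with the coefficients reported above.
    lg = lambda key: math.log2(max(freq.get(key, 0.0), eps))
    return 0.39 * lg(b + c) - 0.28 * lg(b) - 0.23 * lg(c) - 0.32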
SEGMENTATION ALGORITHMS
The automatic segmentation process has the following steps:
1. The statistical formula is used to calculate a score for each bigram to indicate its association strength (i.e., how likely the bigram is to be a word).
2. A threshold value is then set and used to decide which bigrams are words. If a bigram obtains a score above the threshold value, then it is selected as a word. Different threshold values can be used, depending on whether the user prefers high recall or high precision.
3. A segmentation algorithm is used to resolve conflicts. If two overlapping bigrams both have association scores above the threshold value, then there is a conflict or ambiguity. The frequency of such conflicts rises as the threshold value is lowered. The segmentation algorithm resolves the conflict and selects one of the bigrams as a word.

Variable              DF   Parameter Estimate   Standard Error   Wald Chi-Square   Pr > Chi-Square   Standardized Estimate
INTERCPT               1         5.9144              0.1719          1184.0532           0.0001                .
Log freq(BC)           1         0.3502              0.0106          1088.7291           0.0001            0.638740
Log freq(A)            1         0.3730              0.0113          1092.1382           0.0001            0.709621
Log freq(D)            1         0.3171              0.0107           886.4446           0.0001            0.607326
Log docfreq_wt(AB)     1        -0.3580              0.0111          1034.0948           0.0001           -0.800520
Log docfreq_wt(CD)     1        -0.2867              0.0104           754.2276           0.0001           -0.635704
Note: freq refers to the relative frequency, and docfreq_wt refers to the weighted document frequency.
Association of Predicted Probabilities and Observed Responses:
Concordant = 90.1%   Somers' D = 0.803
Discordant = 9.8%    Gamma = 0.803
Tied = 0.1%          Tau-a = 0.295
(23875432 pairs)     c = 0.901
Table 1. Final regression model for 2-character words
One simple segmentation algorithm is the forward match algorithm. Consider the sentence A B C D E. The segmentation process proceeds from the beginning of the sentence to the end. First the bigram AB is considered. If its association score is above the threshold, then AB is taken as a word, and the bigram CD is considered next. If the association score of AB is below the threshold, the character A is taken as a 1-character word, and the bigram BC is considered next. In effect, if the association scores of both AB and BC are above the threshold, the forward match algorithm selects AB as a word and not BC.
The forward match method for resolving ambiguity is somewhat
arbitrary and not satisfactory. When overlapping bigrams exceed
the threshold value, it simply decides in favour of the earlier
bigram. Another segmentation algorithm was developed in this
study which we refer to as the comparative forward match
algorithm. This has an additional step:
If two overlapping bigrams AB and BC both have scores above the threshold value, then their scores are compared. If AB has a
higher value, then it is selected as a word, and the program
next considers the bigrams CD and DE. On the other hand, if
AB has a lower value, then character A is selected as a 1-character
word, and the program next considers bigrams BC
and CD.
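Both algorithms can be expressed as a single greedy scan with an optional comparison step. The sketch below is our rendering of the procedures described above; score(i) is assumed to return the association score of the bigram starting at character position i of the sentence.

def segment_two_char_words(sentence, score, threshold, comparative=True):
    # Greedy left-to-right segmentation into 2-character words; characters not
    # covered by a selected bigram become 1-character words.
    # comparative=False gives the plain forward match algorithm; with
    # comparative=True, when two overlapping bigrams both clear the threshold
    # the higher-scoring one wins.
    words, i = [], 0
    while i < len(sentence) - 1:
        s_here = score(i)
        if s_here >= threshold:
            if comparative and i + 2 < len(sentence) and score(i + 1) > s_here:
                words.append(sentence[i])          # A stays a 1-character word
                i += 1                             # next consider BC
            else:
                words.append(sentence[i:i + 2])    # AB selected as a word
                i += 2                             # next consider CD
        else:
            words.append(sentence[i])
            i += 1
    if i == len(sentence) - 1:
        words.append(sentence[-1])
    return words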
The comparative forward match method (CFM) was compared
with the forward match method (FM) by applying them to the 3
statistical formulas (the contextual information formula, the
mutual information formula and the improved mutual
information formula). One way to compare the effectiveness of
the 2 segmentation algorithms is by comparing their precision
figures at the same recall levels. The precision figures for
selected recall levels are given in Table 2. The results are based
on the sample of 300 sentences.
The comparative forward match algorithm gave better results for
the mutual information and improved mutual information
formulas especially at low threshold values when a large
number of conflicts are likely. Furthermore, for the forward
match method, the recall didn't go substantially higher than
80% even at low threshold values.
For the contextual information formula, the comparative forward
match method did not perform better than forward match, except
at very low threshold values when the recall was above 90%.
This was expected because the contextual information formula
already incorporates information about neighboring characters
within the formula. The formula gave very few conflicting
segmentations. There were very few cases of overlapping
bigrams both having association scores above the threshold
except when threshold values were below -1.5.
EVALUATION
In this section we compare the effectiveness of the contextual
information formula with the mutual information formula and
the improved mutual information formula using the 100
sentences that had been set aside for evaluation purposes. For the
contextual information formula, the forward match segmentation
algorithm was used. The comparative forward match algorithm
was used for the mutual information and the improved mutual
information formulas.
The three statistical formulas were compared by comparing their precision figures at 4 recall levels: 60%, 70%, 80% and 90%.
For each of the three statistical formulas, we identified the
threshold values that would give a recall of 60%, 70%, 80% and
90%. We then determined the precision values at these threshold
values to find out whether the contextual information formula
gave better precision than the other two formulas at 60%, 70%,
80% and 90% recall. These recall levels were selected because a
recall of 50% or less is probably unacceptable for most
applications.
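Operationally, finding the threshold for a target recall level amounts to sweeping the threshold downward over the ranked bigram scores. The sketch below is ours; in the actual evaluation the selected bigrams are additionally filtered by the segmentation algorithm, which is omitted here.

def precision_at_recall(scored_bigrams, manual_words, target_recall):
    # scored_bigrams: list of (bigram_id, score) for every candidate bigram in
    # the test sentences; manual_words: set of bigram_ids marked as 2-character
    # words by the manual segmentation.
    ranked = sorted(scored_bigrams, key=lambda x: x[1], reverse=True)
    selected, correct = 0, 0
    for bigram_id, score in ranked:
        selected += 1
        if bigram_id in manual_words:
            correct += 1
        if correct / len(manual_words) >= target_recall:
            return correct / selected, score       # precision, threshold reached
    return (correct / selected if selected else 0.0), None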
The precision figures for the 4 recall levels are given in Table 3.
The recall-precision graphs for the 3 formulas are given in Fig. 1.
The contextual information formula substantially outperforms the mutual information and the improved mutual information formulas. At the 90% recall level, the contextual information formula was better by about 4%. At the 60% recall level, it outperformed the mutual information formula by 14% (giving a relative improvement of 23%). The results also indicate that the improved mutual information formula does not perform better than the mutual information formula.

                    Precision
Recall    Comparative Forward Match    Forward Match    Improvement
Mutual Information
90%             51%                          -                -
80%             52%                         47%              5%
70%             53%                         51%              2%
60%             54%                         52%              2%
Improved Mutual Information
90%             51%                          -                -
80%             53%                         46%              7%
70%             54%                         52%              2%
60%             55%                         54%              1%
Contextual Information Formula
90%             55%                         54%              1%
80%             62%                         62%              0%
70%             65%                         65%              0%
60%             68%                         68%              0%
Table 2. Recall and precision values for the comparative forward match segmentation algorithm vs. forward match

Recall    Mutual Information    Improved Mutual Information    Contextual Information
90%           57% (0.0)                 57% (-2.5)                   61% (-1.5)
80%           59% (3.7)                 59% (-1.5)                   66% (-0.8)
70%           59% (4.7)                 60% (-1.0)                   70% (-0.3)
60%           60% (5.6)                 62% (-0.7)                   74% (0.0)
Cells show precision; * threshold values are given in parentheses.
Table 3. Recall and precision for three statistical formulas
6.2 Statistical Test of Significance
In order to perform a statistical test, recall and precision figures
were calculated for each of the 100 sentences used in the
evaluation. The average recall and the average precision across
the 100 sentences were then calculated for the three statistical
formulas. In the previous section, recall and precision were
calculated for all the 100 sentences combined. Here, recall and
precision were obtained for individual sentences and then the
average across the 100 sentences was calculated. The average
precision for 60%, 70%, 80% and 90% average recall are given
in Table 4.
For each recall level, an analysis of variance with repeated
measures was carried out to find out whether the differences in
precision were significant. Pairwise comparisons using Tukey s
HSD test was also carried out. The contextual information
formula was significantly better (
=0.001) than the mutual
information and the improved mutual information formulas at all
4 recall levels. The improved mutual information formula was
not found to be significantly better than mutual information.
ANALYSIS OF ERRORS
The errors that arose from using the contextual information
formula were analyzed to gain insights into the weaknesses of
the model and how the model can be improved. There are two
types of errors: errors of commission and errors of omission.
Errors of commission are bigrams that are identified by the
automatic segmentation to be words when in fact they are not
(according to the manual segmentation). Errors of omission are
bigrams that are not identified by the automatic segmentation to
be words but in fact they are.
The errors depend of course on the threshold values used. A high
threshold (e.g. 1.0) emphasizes precision and a low threshold
(e.g. -1.0) emphasizes recall. 50 sentences were selected from
the 100 sample sentences to find the distribution of errors at
different regions of threshold values.
Association score > 1.0 (definite errors): will; through; telegraph [on the] day [31 July]
Association score between -1.0 and 1.0 (borderline errors): still; to; will be; people etc.; I want
  Person's name: ( ) Wan Wen Ju
  Place name: ( ) a village name in China; ( ) Canada
  Name of an organization/institution: ( ) Xin Hua Agency; ( ) The State Department
Table 6. Bigrams incorrectly identified as words
Fig. 1. Recall-precision graph for the three statistical models (precision, %, plotted against recall, %, for the contextual information, mutual information and improved mutual information formulas).
Association score > 1.0 (definite errors): ( ) university (agricultural university); ( ) geology (geologic age); ( ) plant (upland plant); ( ) sovereignty (sovereign state)
Association score between -1.0 and 1.0 (borderline errors): ( ) statistics (statistical data); ( ) calamity (natural calamity); ( ) resources (manpower resources); ( ) professor (associate professor); ( ) poor (pauperization); ( ) fourteen (the 14th day); ( ) twenty (twenty pieces)
Table 5. Simple words that are part of a longer meaningful word
Avg Recall    Mutual Information    Improved Mutual Information    Contextual Information
90%               57% (1.0)                 58% (-2.3)                   61% (-1.5)
80%               60% (3.8)                 60% (-1.4)                   67% (-0.7)
70%               59% (4.8)                 60% (-1.0)                   70% (-0.3)
60%               60% (5.6)                 63% (-0.6)                   73% (0.0)
Cells show the average precision; * threshold values are given in parentheses.
Table 4. Average recall and average precision for the three statistical formulas
We divide the errors of commission (bigrams that are incorrectly
identified as words by the automatic segmentation) into 2 groups:
1. Definite errors: bigrams with association scores above 1.0 but are not words
2. Borderline errors: bigrams with association scores between -1.0 and 1.0 and are not words
We also divide the errors of omission (bigrams that are words but are not identified by the automatic segmentation) into 2 groups:
1. Definite errors: bigrams with association scores below -1.0 but are words
2. Borderline errors: bigrams with association scores between -1.0 and 1.0 and are words.
7.1 Errors of Commission
Errors of commission can be divided into 2 types:
1. The bigram is a simple word that is part of a longer meaningful word.
2. The bigram is not a word (neither a simple word nor a meaningful word).
Errors of the first type are illustrated in Table 5. The words within parentheses are actually meaningful words but segmented
as simple words (words on the left). The words lose part of the
meaning when segmented as simple words. These errors
occurred mostly with 3 or 4-character meaningful words.
Errors of the second type are illustrated in Table 6. Many of the
errors are caused by incorrectly linking a character with a
function word or pronoun. Some of the errors can easily be
removed by using a list of function words and pronouns to
identify these characters.
7.2 Errors of Omission
Examples of definite errors of omission (bigrams with
association scores below -1.0 but are words) are given in Table
7. Most of the errors are rare words and time words. Some are
ancient names, rare and unknown place names, as well as
technical terms. Since our corpus comprises general news
articles, these types of words are not frequent in the corpus. Time
words like dates usually have low association values because
they change every day! These errors can be reduced by
incorporating a separate algorithm for recognizing them.
The proportions of errors of the various types are given in Table 8.
CONCLUSION
A new statistical formula for identifying 2-character words in
Chinese text, called the contextual information formula, was
developed empirically using regression analysis. The focus was
on identifying meaningful words (including multi-word terms
and idioms) rather than simple words. The formula was found to
give significantly and substantially better results than the mutual
information formula.
Contextual information in the form of the frequency of characters
that are adjacent to the bigram being processed as well as the
weighted document frequency of the overlapping bigrams were
found to be significant factors for predicting the probability that
the bigram constitutes a word. Local information (e.g. the
number of times the bigram occurs in the document being
segmented) and the position of the bigram in the sentence were
not found to be useful in determining words.
Of the bigrams that the formula erroneously identified as words,
about 80% of them were actually simple words. Of the rest,
many involved incorrect linking with a function word. Of the
words that the formula failed to identify as words, more than a
third of them were rare words or time words. The proportion of
rare words increased as the threshold value used was lowered.
These rare words cannot be identified using statistical
techniques.
Association score between -1.0 and -2.0: the northern section of a construction project; fragments of ancient books
Association score < -2.0: September; 3rd day; (name of a district in China); (name of an institution); the Book of Changes
Table 7. 2-character words with association score below -1.0

Errors of commission (association score > 1.0; no. of errors = 34): simple words 82.3%; not words 17.7%
Borderline cases (association score -1.0 to 1.0; no. of cases = 210): simple words 55.2%; not words 20.5%; meaningful words 24.3%
Errors of omission (association score < -1.0):
  association score -1.0 to -2.0 (no. of errors = 43): rare words & time words 23.2%; others 76.8%
  association score < -2.0 (no. of errors = 22): rare words & time words 63.6%; others 36.4%
Table 8. Proportion of errors of different types

This study investigated a purely statistical approach to text segmentation. The advantage of the statistical approach is that it
can be applied to any domain, provided that the document
collection is sufficiently large to provide frequency information.
A domain-specific dictionary of words is not required. In fact, the
statistical formula can be used to generate a shortlist of candidate
words for such a dictionary. On the other hand, the statistical
method cannot identify rare words and proper names. It is also
fooled by combinations of function words that occur frequently
and by function words that co-occur with other words.
It is well-known that a combination of methods is needed to give
the best segmentation results. The segmentation quality in this
study can be improved by using a list of function words and
segmenting the function words as single character words. A
dictionary of common and well-known names (including names
of persons, places, institutions, government bodies and classic
books) could be used by the system to identify proper names that
occur infrequently in the corpus. Chang et al. [2] developed a
method for recognizing proper nouns using a dictionary of family
names in combination with a statistical method for identifying
the end of the name. An algorithm for identifying time and dates
would also be helpful. It is not clear whether syntactic processing
can be used to improve the segmentation results substantially.
Our current work includes developing statistical formulas for
identifying 3 and 4-character words, as well as investigating
whether the statistical formula developed here can be used with
other corpora. The approach adopted in this study can also be
used to develop statistical models for identifying multi-word
terms in English text. It would be interesting to see whether the
regression model developed for English text is similar to the one
developed in this study for Chinese text. Frantzi, Ananiadou &
Tsujii [7], using a different statistical approach, found that
contextual information could be used to improve the
identification of multi-word terms in English text.
REFERENCES
[1] Chang, C.-H., and Chen, C.-D. A study of integrating
Chinese word segmentation and part-of-speech tagging.
Communications of COLIPS, 3, 1 (1993), 69-77.
[2] Chang, J.-S., Chen, S.-D., Ker, S.-J., Chen, Y., and Liu, J.S.
A multiple-corpus approach to recognition of proper names
in Chinese texts. Computer Processing of Chinese and
Oriental Languages, 8, 1 (June 1994), 75-85.
[3] Church, K.W., and Hanks, P. Word association norms,
mutual information and lexicography. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics (Vancouver, June 1989), 76-83.
[4] Dai, J.C., and Lee, H.J. A generalized unification-based LR
parser for Chinese. Computer Processing of Chinese and
Oriental Languages, 8, 1 (1994), 1-18.
[5] Dai, Y. Developing a new statistical method for Chinese
text segmentation. (Master's thesis in preparation)
[6] Damerau, F.J. Generating and evaluating domain-oriented
multi-word terms from texts. Information Processing &
Management, 29, 4 (1993), 433-447.
[7] Frantzi, K.T., Ananiadou, S., and Tsujii, J. The C-value/NC-value method of automatic recognition for multi-word terms. In C. Nikolaou and C. Stephanidis (eds.), Research and Advanced Technology for Digital Libraries, 2nd European Conference, ECDL'98 (Heraklion, Crete, September 1998), Springer-Verlag, 585-604.
[8] Liang, N.Y. The knowledge of Chinese words segmentation
[in Chinese]. Journal of Chinese Information Processing, 4,
2 (1990), 42-49.
[9] Liu, I.M. Descriptive-unit analysis of sentences: Toward a
model natural language processing. Computer Processing of
Chinese & Oriental Languages, 4, 4 (1990), 314-355.
[10] Liu, Y., Tan, Q., and Shen, X.K. Xin xi chu li yong xian dai
han yu fen ci gui fan ji zi dong fen ci fang fa [Modern Chinese Word Segmentation Rules and Automatic Word Segmentation Methods for Information Processing]. Qing Hua University Press, Beijing, 1994.
[11] Lua, K.T. Experiments on the use of bigram mutual
information in Chinese natural language processing.
Presented at the 1995 International Conference on Computer
Processing of Oriental Languages (ICCPOL) (Hawaii,
November 1995). Available: http://137.132.89.143/luakt/
publication.html
[12] Lua, K.T. From character to word - An application of
information theory. Computer Processing of Chinese &
Oriental Languages, 4, 4 (1990), 304-312.
[13] Lua, K.T., and Gan, G.W. An application of information
theory in Chinese word segmentation. Computer Processing
of Chinese & Oriental Languages, 8, 1 (1994), 115-124.
[14] Nie, J.Y., Hannan, M.L., and Jin, W.Y. Unknown word
detection and segmentation of Chinese using statistical and
heuristic knowledge. Communications of COLIPS, 5, 1&2
(1995), 47-57.
[15] Ogawa, Y., and Matsuda, T. Overlapping statistical word
indexing: A new indexing method for Japanese text. In
Proceedings of the 20th Annual International ACM SIGIR
Conference on Research and Development in Information
Retrieval (Philadelphia, July 1997), ACM, 226-234.
[16] Sproat, R., and Shih, C.L. A statistical method for finding
word boundaries in Chinese text. Computer Processing of
Chinese & Oriental Languages, 4, 4 (1990), 336-351.
[17] Sproat, R., Shih, C., Gale, W., and Chang, N. A stochastic
finite-state word-segmentation algorithm for Chinese.
Computational Lingustics, 22, 3 (1996), 377-404.
[18] Tung, C.-H., and Lee, H.-J. Identification of unknown words
from a corpus. Computer Processing of Chinese and
Oriental Languages, 8 (Supplement, Dec. 1994), 131-145.
[19] Wu, Z., and Tseng, G. ACTS: An automatic Chinese text
segmentation system for full text retrieval. Journal of the
American Society for Information Science, 46, 2 (1995), 83-96.
[20] Yeh, C.L., and Lee, H.J. Rule-based word identification for
mandarin Chinese sentences: A unification approach.
Computer Processing of Chinese and Oriental Languages, 5,
2 (1991), 97-118.
89 | logistic regression;statistical formula;word boundary identification;Chinese text segmentation;word boundary;natural language processing;mutual information;regression model;contextual information;multi-word terms |
150 | Preventing Attribute Information Leakage in Automated Trust Negotiation | Automated trust negotiation is an approach which establishes trust between strangers through the bilateral, iterative disclosure of digital credentials. Sensitive credentials are protected by access control policies which may also be communicated to the other party. Ideally, sensitive information should not be known by others unless its access control policy has been satisfied. However, due to bilateral information exchange, information may flow to others in a variety of forms, many of which cannot be protected by access control policies alone. In particular, sensitive information may be inferred by observing negotiation participants' behavior even when access control policies are strictly enforced. In this paper, we propose a general framework for the safety of trust negotiation systems. Compared to the existing safety model, our framework focuses on the actual information gain during trust negotiation instead of the exchanged messages. Thus, it directly reflects the essence of safety in sensitive information protection. Based on the proposed framework, we develop policy databases as a mechanism to help prevent unauthorized information inferences during trust negotiation. We show that policy databases achieve the same protection of sensitive information as existing solutions without imposing additional complications to the interaction between negotiation participants or restricting users' autonomy in defining their own policies. | INTRODUCTION
Automated trust negotiation (ATN) is an approach to
access control and authentication in open, flexible systems
such as the Internet. ATN enables open computing by assigning an access control policy to each resource that is to
be made available to entities from different domains. An
access control policy describes the attributes of the entities
allowed to access that resource, in contrast to the traditional
approach of listing their identities. To satisfy an access control
policy, a user has to demonstrate that they have the
attributes named in the policy through the use of digital
credentials. Since one's attributes may also be sensitive, the
disclosure of digital credentials is also protected by access
control policies.
A trust negotiation is triggered when one party requests
access to a resource owned by another party. Since each
party may have policies that the other needs to satisfy, trust
is established incrementally through bilateral disclosures of
credentials and requests for credentials, a characteristic that
distinguishes trust negotiation from other trust establishment
approaches [2, 11].
Access control policies play a central role in protecting
privacy during trust negotiation. Ideally, an entity's sensitive
information should not be known by others unless they
have satisfied the corresponding access control policy. However
, depending on the way two parties interact with each
other, one's private information may flow to others in various
forms, which are not always controlled by access control
policies. In particular, the different behaviors of a negotiation
participant may be exploited to infer sensitive information
, even if credentials containing that information are
never directly disclosed.
For example, suppose a resource's policy requires Alice to
prove a sensitive attribute such as employment by the CIA.
If Alice has this attribute, then she likely protects it with an
access control policy. Thus, as a response, Alice will ask the
resource provider to satisfy her policy. On the other hand,
if Alice does not have the attribute, then a natural response
would be for her to terminate the negotiation since there is
no way that she can access the resource. Thus, merely from
Alice's response, the resource provider may infer with high
confidence whether or not Alice is working for the CIA, even
though her access control policy is strictly enforced.
The problem of unauthorized information flow in ATN has
been noted by several groups of researchers [20, 22, 27]. A
variety of approaches have been proposed, which mainly fall
into two categories. Approaches in the first category try to
"break" the correlation between different information. Intuitively
, if the disclosed policy for an attribute is independent
from the possession of the attribute, then the above inference
is impossible. A representative approach in this category is
by Seamons et al. [20], where an entity possessing a sensitive credential always responds with a cover policy of false
to pretend the opposite. Only when the actual policy is satisfied
by the credentials disclosed by the opponent will the
entity disclose the credential. Clearly, since the disclosed
policy is always false, it is not correlated to the possession
of the credential. One obvious problem with this approach,
however, is that a potentially successful negotiation may fail
because an entity pretends to not have the credential.
Approaches in the second category aim to make the correlation
between different information "safe", i.e., when an
opponent is able to infer some sensitive information through
the correlation, it is already entitled to know that information
. For example, Winsborough and Li [23] proposed the
use of acknowledgement policies ("Ack policies" for short) as
a solution. Their approach is based on the principle "discuss
sensitive topics only with appropriate parties". Therefore,
besides an access control policy P, Alice also associates an Ack policy P_Ack with a sensitive attribute A. Intuitively, P_Ack determines when Alice can tell others whether or not she has attribute A. During a negotiation, when the attribute is requested, the Ack policy P_Ack is first sent back as a reply. Only when P_Ack is satisfied by the other party will Alice disclose whether or not she has A, and may then ask the other party to satisfy the access control policy P.
In order to prevent additional correlation introduced by Ack
policies, it is required that all entities use the same Ack policy
to protect a given attribute regardless of whether or not
they have A. In [23], Winsborough and Li also formally defined
the safety requirements in trust negotiation based on
Ack policies.
Though the approach of Ack policies can provide protection
against unauthorized inferences, it has a significant disadvantage
. One benefit of automated trust negotiation is
that it gives each entity the autonomy to determine the appropriate
protection for its own resources and credentials.
The perceived sensitivity of possessing an attribute may be
very different for different entities. For example, some may
consider the possession of a certificate showing eligibility for
food stamps highly sensitive, and thus would like to have a
very strict Ack policy for it. Some others may not care as
much, and have a less strict Ack policy, because they are
more concerned with their ability to get services than their
privacy. The Ack Policy system, however, requires that all
entities use the same Ack policy for a given attribute, which
deprives entities of the autonomy to make their own decisions
. This will inevitably be over-protective for some and
under-protective for others. And either situation will result
in users preferring not to participate in the system.
In this paper, we first propose a general framework for safe
information flow in automated trust negotiation. Compared
with that proposed by Winsborough and Li, our framework
focuses on modeling the actual information gain caused by
information flow instead of the messages exchanged. Therefore
it directly reflects the essence of safety in sensitive information
protection. Based on this framework, we propose
policy databases as a solution to the above problem. Policy
databases not only prevent unauthorized inferences as described
above but also preserve users' autonomy in deciding
their own policies. In order to do this, we focus on severing
the correlation between attributes and policies by introducing
randomness, rather than adding additional layers or
fixed policies as in the Ack Policy system. In our approach,
there is a central database of policies for each possession
sensitive attribute. Users who possess the attribute submit
their policies to the database anonymously. Users who do
not possess the attribute can then draw a policy at random
from the database. The result of this process is that the
distributions of policies for a given possession sensitive attribute
are identical for users who have the attribute and
users who do not. Thus, an opponent cannot infer whether
or not users possess the attribute by looking at their policies.
The rest of the paper is organized as follows. In section
2, we propose a formal definition of safety for automated
trust negotiation. In section 3, we discuss the specifics of
our approach, including what assumptions underlie it, how
well it satisfies our safety principle, both theoretically and
in practical situations, and what practical concerns to implementing
it exist. Closely related work to this paper is
reported in section 4. We conclude this paper in section 5.
SAFETY IN TRUST NEGOTIATION
In [23], Winsborough and Li put forth several definitions
of safety in trust negotiation based on an underlying notion
of indistinguishability. The essence of indistinguishability is
that if an opponent is given the opportunity to interact with
a user in two states corresponding to two different potential
sets of attributes, the opponent cannot detect a difference
in those sets of attributes based on the messages sent. In
the definition of deterministic indistinguishability, the messages
sent in the two states must be precisely the same. In
the definition of probabilistic indistinguishability, they must
have precisely the same distribution.
These definitions, however, are overly strict. To determine
whether or not a given user has a credential, it is not sufficient
for an opponent to know that the user acts differently
depending on whether or not that user has the credential:
the opponent also has to be able to figure out which behavior
corresponds to having the credential and which corresponds
to lacking the credential. Otherwise, the opponent has not
actually gained any information about the user.
Example 1. Suppose we have a system in which there is only one attribute and two policies, p_1 and p_2. Half of the users use p_1 when they have the attribute and p_2 when they do not. The other half of the users use p_2 when they have the attribute and p_1 when they do not. Every user's messages would be distinguishable under the definition of indistinguishability presented in [23] because for each user the distribution of messages is different. However, if a fraction r of the users have the attribute and a fraction 1 - r do not, then (1/2)r + (1/2)(1 - r) = 1/2 of the users display policy p_1 and the other half of the users display policy p_2. As such, the number of users displaying p_1 or p_2 does not change as r changes. Hence, they are independent. Since the policy displayed is independent of the attribute when users are viewed as a whole, seeing either policy does not reveal any information about whether or not the user in question has the attribute.
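A short simulation (our illustration, not part of the paper) makes the arithmetic of Example 1 concrete: whichever policy an opponent observes, the fraction of its users holding the attribute stays at the a priori rate r.

import random

def simulate_example_1(n_users=100000, r=0.3, seed=0):
    # Half of the users pair p1 with possession and p2 with non-possession;
    # the other half do the opposite. A fraction r of users have the attribute.
    rng = random.Random(seed)
    shown = {("p1", True): 0, ("p1", False): 0, ("p2", True): 0, ("p2", False): 0}
    for _ in range(n_users):
        has_attr = rng.random() < r
        group_one = rng.random() < 0.5             # which half of the population
        if group_one:
            policy = "p1" if has_attr else "p2"
        else:
            policy = "p2" if has_attr else "p1"
        shown[(policy, has_attr)] += 1
    for policy in ("p1", "p2"):
        with_attr = shown[(policy, True)]
        total = with_attr + shown[(policy, False)]
        print(policy, "displayed by", total, "users;",
              "fraction possessing the attribute:", round(with_attr / total, 3))

simulate_example_1()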
As such, Winsborough and Li's definitions of indistinguishability
restrict a number of valid systems where a given
user will act differently in the two cases, but an opponent
cannot actually distinguish which case is which. In fact,
their definitions allow only systems greatly similar to the
Ack Policy system that they proposed in [22]. Instead we
propose a definition of safety based directly on information
gain instead of the message exchange sequences between the
two parties.
Before we formally define safety, we first discuss what
safety means informally. In any trust negotiation system,
there is some set of objects which are protected by policies
. Usually this includes credentials, information about
attribute possession, and sometimes even some of the policies
in the system. All of these can be structured as digital
information, and the aim of the system is to disclose that
information only to appropriate parties.
The straight-forward part of the idea of safety is that an
object's value should not be revealed unless its policy has
been satisfied. However, we do not want to simply avoid
an object's value being known with complete certainty, but
also the value being guessed with significant likelihood.
As such, we can define the change in safety as the change
in the probability of guessing the value of an object.
If there are two secrets, s_1 and s_2, we can define the conditional safety of s_1 upon the disclosure of s_2 as the conditional probability of guessing s_1 given s_2. Thus, we define
absolute safety in a system as being the property that no
disclosure of objects whose policies have been satisfied results
in any change in the probability of guessing the value
of any object whose policy has not been satisfied regardless
of what inferences might be possible.
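In our own notation (the formal machinery follows below), the conditional safety of a secret s_1 upon the disclosure of s_2 is Pr[guess s_1 | s_2 disclosed], and absolute safety is the requirement

\[ \Pr[\,\text{guess } s_1 \mid \text{authorized disclosures}\,] \;=\; \Pr[\,\text{guess } s_1\,] \quad \text{for every object } s_1 \text{ whose policy is not satisfied.} \]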
There exists a simple system which can satisfy this level of
safety, which is the all-or-nothing system, a system in which
all of every user's objects are required to be protected by
a single policy which is the same for all users. Clearly in
such a system there are only two states, all objects revealed
or no objects revealed. As such, there can be no inferences
between objects which are revealed and objects which are
not. This system, however, has undesirable properties which
outweigh its safety guarantees, namely the lack of autonomy,
flexibility, and fine-grained access control. Because of the
necessity of protecting against every possible inference which
could occur, it is likely that any system which achieves ideal
safety would be similarly inflexible.
Since there have been no practical systems proposed which
meet the ideal safety condition, describing ideal safety is not
sufficient unto itself. We wish to explore not just ideal safety,
but also safety relative to certain types of attacks. This will
help us develop a more complete view of safety in the likely
event that no useful system which is ideally safe is found.
If a system does not have ideal safety, then there must
be some inferences which can cause a leakage of information
between revealed objects and protected objects. But
this does not mean that every single object revealed leaks
information about every protected object. As such, we can
potentially describe what sort of inferences a system does
protect against. For example, Ack Policy systems are motivated
by a desire to prevent inferences from a policy to the
possession of the attribute that it protects. Inferences from
one attribute to another are not prevented by such a system
(for example, users who are AARP members are more likely
to be retired than ones who are not). Hence, it is desirable
to describe what it means for a system to be safe relative to
certain types of inferences.
Next we present a formal framework to model safety in
trust negotiation. The formalism which we are using in this
paper is based on that used by Winsborough and Li, but is
substantially revised.
2.0.1 Trust Negotiation Systems
A Trust Negotiation System is comprised of the following
elements:
A finite set, K, of principals, uniquely identified by a randomly
chosen public key, Pub_k. Each principal knows the
associated private key, and can produce a proof of identity.
A finite set, T , of attributes. An attribute is something
which each user either possesses or lacks. An example would
be status as a licensed driver or enrollment at a university.
A set, G, of configurations, each of which is a subset of T .
If a principal k is in a configuration g ∈ G, then k possesses
the attributes in g and no other attributes.
A set, P, of possible policies, each of which is a logical proposition
comprised of a combination of and, or, and attributes
in T . We define an attribute in a policy to be true with respect
to a principal k if k has that attribute. We consider
all logically equivalent policies to be the same policy.
Objects. Every principal k has objects which may be protected
which include the following:
- A set, S, of services provided by a principal. Every principal
offers some set of services to all other principals. These
services are each protected by some policy, as we will describe
later. A simple service which each principal offers is
a proof of attribute possession. If another principal satisfies
the appropriate policy, the principal will offer some proof
that he holds the attribute. This service is denoted s_t for any attribute t ∈ T.
- A set, A, of attribute status objects. Since the set of all
attributes is already known, we want to protect the information
about whether or not a given user has an attribute.
As such, we formally define A as a set of boolean-valued random variables, a_t. The value of a_t for a principal k, which we denote a_t(k), is defined to be true if k possesses t ∈ T and false otherwise. Thus A = {a_t | t ∈ T}.
- A set, Q, of policy mapping objects. A system may desire to protect an object's policy either because of correlations between policies and sensitive attributes or because in some systems the policies themselves may be considered sensitive. Similar to attribute status objects, we do not protect a policy, per se, but instead the pairing of a policy with what it is protecting. As such, each policy mapping object is a random variable q_o with range P, where o is an object. The value of q_o for a given principal k, denoted q_o(k), is the policy that k has chosen to protect object o.
Every system should define which objects are protected. It
is expected that all systems should protect the services, S,
and the attribute status objects, A. In some systems, there
will also be policies which protect policies. Thus protected
objects may also include a subset of Q. We call the set of
protected objects O, where O ⊆ S ∪ A ∪ Q. If an object is
not protected, this is equivalent to it having a policy equal
to true.
For convenience, we define Q_X to be the members of Q which are policies protecting members of X, where X is a set of objects. Formally, Q_X = {q_o ∈ Q | o ∈ X}.
Some subset of the information objects are considered to
be sensitive objects. These are the objects about which we
want an opponent to gain no information unless they have
satisfied that object's policy. Full information about any
object, sensitive or insensitive, is not released by the system
until its policy has been satisfied, but it is acceptable for
inferences to cause the leakage of information which is not
considered sensitive.
A set, N , of negotiation strategies. A negotiation strategy
is the means that a principal uses to interact with other
principals. Established strategies include the eager strategy
[24] and the trust-target graph strategy [22].
A negotiation
strategy, n, is defined as an interactive, deterministic,
Turing-equivalent computational machine augmented by a
random tape. The random tape serves as a random oracle
which allows us to discuss randomized strategies.
A negotiation strategy takes as initial input the public knowledge
needed to operate in a system, the principal's attributes,
its services, and the policies which protect its objects. It
then produces additional inputs and outputs by interacting
with other strategies. It can output policies, credentials,
and any additional information which is useful. We do not
further define the specifics of the information communicated
between strategies except to note that all the strategies in a
system should have compatible input and output protocols.
We refrain from further specifics of strategies since they are
not required in our later discussion.
An adversary, M , is defined as a set of principals coordinating
to discover the value of sensitive information objects
belonging to some k ∉ M.
Preventing this discovery is
the security goal of a trust negotiation system. We assume
that adversaries may only interact with principals through
trust negotiation and are limited to proving possession of
attributes which they actually possess. In other words, the
trust negotiation system provides a means of proof which is
resistant to attempts at forgery.
A set, I, of all inferences. Each inference is a minimal subset
of information objects such that the joint distribution of the
set differs from the product of the individual distributions
of the items in the set. (A system need not define the particulars of inferences, but should discuss what sort of inferences it can deal with, and hence what sort of inferences are assumed to exist.)
These then allow a partitioning, C, of the information objects
into inference components. We define a relation ∼ such that o_1 ∼ o_2 iff there exists i ∈ I such that o_1, o_2 ∈ i. C is the transitive closure of ∼.
In general, we assume that all of the information objects
in our framework are static. We do not model changes in a
principal's attribute status or policies. If such is necessary,
the model would need to be adapted.
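To make the bookkeeping concrete, one possible in-memory representation of these objects is sketched below; the field names are our own invention and not part of the paper's formalism.

from dataclasses import dataclass, field
from typing import Callable, Dict, FrozenSet

Policy = Callable[[FrozenSet[str]], bool]   # maps a set of proven attributes to allow/deny

@dataclass
class Principal:
    public_key: str
    attributes: FrozenSet[str]                                          # the configuration g in G
    service_policies: Dict[str, Policy] = field(default_factory=dict)   # policy protecting s_t
    status_policies: Dict[str, Policy] = field(default_factory=dict)    # policy protecting a_t (q_{a_t})

    def attribute_status(self, t: str) -> bool:                         # the value a_t(k)
        return t in self.attributes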
It should also be noted that there is an additional constraint
on policies that protect policies which we have not
described.
This is because in most systems there is a way
to gain information about what a policy is, which is to satisfy
it. When a policy is satisfied, this generally results in
some service being rendered or information being released.
As such, this will let the other party know that they have
satisfied the policy for that object. Therefore, the effective
policy protecting a policy status object must be the logical
or of the policy in the policy status object and the policy
which protects it.
2.0.2 The Ack Policy System
To help illustrate the model, let us describe how the Ack
Policy system maps onto the model. The mapping of opponents, the sets of principals, attributes, configurations, and
policies in the Ack Policy system is straightforward.
In an Ack Policy system, any mutually compatible set of
negotiation strategies is acceptable. There are policies for
protecting services, protecting attribute status objects, and
protecting policies which protect attribute proving services.
As such, the set of protected objects is O = S ∪ A ∪ Q_S.
According to the definition of the Ack Policy system, for a given attribute, the policy that protects the proof service for that attribute is protected by the same policy that protects the attribute status object. Formally, for all t ∈ T and all k ∈ K, q_{a_t}(k) = q_{q_{s_t}}(k). Further, the Ack policy for an attribute is required to be the same for all principals. Thus we know that for all t ∈ T there exists p ∈ P such that for all k ∈ K, q_{a_t}(k) = p.
Two basic assumptions about the set of inferences, I, exist
in Ack Policy systems, which also lead us to conclusions
about the inference components, C. It is assumed that inferences
between the policy which protects the attribute proving
service, q_{s_t}(k), and the attribute status object, a_t(k),
exist. As such, those two objects should always be in the
same inference component. Because Ack Policies are uniform
for all principals, they are uncorrelated to any other
information object and they cannot be part of any inference.
Hence, each Ack Policy is in an inference component of its
own.
2.0.3 Safety in Trust Negotiation Systems
In order to formally define safety in trust negotiation, we
need to define the specifics of the opponent. We need to
model the potential capabilities of an opponent and the information
initially available to the opponent. Obviously, no
system is safe against an opponent with unlimited capabilities
or unlimited knowledge.
As such, we restrict the opponent to having some tactic
for forming trust negotiation messages, processing responses
to those messages, and, finally, forming a guess about the
values of unrevealed information objects. We model the tactic
as an interactive, deterministic, Turing-equivalent computational
machine. This model is a very powerful model,
and we argue that it describes any reasonable opponent.
This model, however, restricts the opponent to calculating
things which are computable from its input and implies that
the opponent behaves in a deterministic fashion.
The input available to the machine at the start is the
knowledge available to the opponent before any trust negotiation
has taken place. What this knowledge is varies
depending on the particulars of a trust negotiation system.
However, in every system this should include the knowledge
available to the principals who are a part of the opponent
, such as their public and private keys and their credentials
. And it should also include public information such
as how the system works, the public keys of the attribute
authorities, and other information that every user knows.
In most systems, information about the distribution of attributes
and credentials and knowledge of inference rules
should also be considered as public information.
All responses
from principals in different configurations become
available as input to the tactic as they are made. The tactic
must output both a sequence of responses and, at the end,
guesses about the unknown objects of all users.
We observe that an opponent will have probabilistic knowledge
about information objects in a system. Initially, the
probabilities will be based only on publicly available knowledge, so we can use the publicly available knowledge to describe
the a priori probabilities.
For instance, in most systems, it would be reasonable to
assume that the opponent will have knowledge of the odds
that any particular member of the population has a given attribute. Thus, if a fraction h_t of the population is expected to possess attribute t ∈ T, the opponent should begin with an assumption that some given principal has an h_t chance of having attribute t. Hence, h_t represents the a priori probability of any given principal possessing t. Note that we
assume that the opponent only knows the odds of a given
principal having an attribute, but does not know for certain
that a fixed percentage of the users have a given attribute.
As such, knowledge about the value of an object belonging
to some set of users does not imply any knowledge about
the value of objects belonging to some other user.
Definition 1. A trust negotiation system is safe relative
to a set of possible inferences if for all allowed mappings between
principals and configurations there exists no opponent
which can guess the value of sensitive information objects
whose security policies have not been met with odds better
than the a priori odds over all principals which are not in
the opponent, over all values of all random tapes, and over
all mappings between public key values and principals.
Definition 1 differs from Winsborough and Li's definitions
in several ways. The first is that it is concerned with multiple
users. It both requires that the opponent form guesses
over all users and allows the opponent to interact with all
users. Instead of simply having a sequence of messages sent
to a single principal, the tactic we have defined may interact
with a variety of users, analyzing incoming messages, and
then use them to form new messages. It is allowed to talk
to the users in any order and to interleave communications
with multiple users, thus it is more general than those in [23].
The second is that we are concerned only with the information
which the opponent can glean from the communication,
not the distribution of the communication itself. As such,
our definition more clearly reflects the fundamental idea of
safety.
We next introduce a theorem which will be helpful in proving
the safety of systems.
Theorem 1. There exists no opponent which can beat the
a priori odds of guessing the value of an object, o, given
only information about objects which are not in the same
inference component as o, over all principals not in M whose policy for o cannot be satisfied by M, over all random tapes,
and over all mappings between public keys and principals.
The formal proof for this theorem can be found in Appendix
A. Intuitively, since the opponent only gains information
about objects not correlated to o, its guess of the
value of o is not affected.
With Theorem 1 in hand, let us take a brief moment to prove the safety of Ack Policy systems under our framework.
Specifically, we examine Ack Policy systems in which the
distribution of strategies is independent of the distributions
of attributes, an assumption implicitly made in [23]. In Ack
Policy systems the Ack Policy is a policy which protects
two objects in our model: an attribute's status object and
its policy for that attribute's proof service. Ack Policies are
required to be uniform for all users, which ensures that they
are independent of all objects.
Ack Policy systems are designed to prevent inferences
from an attribute's policy to an attribute's status for attributes
which are sensitive. So, let us assume an appropriate
set of inference components in order to prove that Ack
Policy systems are safe relative to that form of inference.
As we described earlier, each attribute status object should
be in the same inference component with the policy which
protects that attribute's proof service, and the Ack policy
for each attribute should be in its own inference component.
The Ack Policy system also assumes that different attributes
are independent of each other. As such, each attribute status
object should be in a different inference component.
This set of inference components excludes all other possible
types of inferences. The set of sensitive objects is the
set of attribute status objects whose value is true. Due to Theorem 1, we know then that no opponent will be able to gain any information based on objects in different inference components. So the only potential source of inference for whether or not a given attribute's status object, a_t, has a value of true or false is the policy protecting the attribute proof service, s_t.
However, we know that the same policy, P, protects both of these objects. As such, unauthorized inference between them is impossible without satisfying P (see Footnote 2 below). Thus, the odds for a_t do not change. Therefore, the Ack Policy system is secure against inferences from an attribute's policy to its attribute status.
POLICY DATABASE
We propose a new trust negotiation system designed to
be safe under the definition we proposed, but to also allow
the users who have sensitive attributes complete freedom to
determine their own policies. It also does not rely on any
particular strategy being used. Potentially, a combination of
strategies could even be used so long as the strategy chosen
is not in any way correlated to the attributes possessed.
This system is based on the observation that there is more
than one way to deal with a correlation. A simple ideal system
which prevents the inference from policies to attribute
possession information is to have each user draw a random
policy. This system obviously does not allow users the freedom
to create their own policies. Instead, we propose a system in which the policies look as if they were chosen at random even though they are not.
This system is similar to the existing trust negotiation
systems except for the addition of a new element: the policy
database. The policy database is a database run by a trusted
third party which collects anonymized information about the
policies which are in use.

Footnote 2: Except that one of these is a policy mapping object which
is being protected by a policy. As such, we have to keep
in mind that there exists a possibility that the opponent
could gain information about the policy without satisfying
it. Specifically, the opponent can figure out what attributes
do not satisfy it by proving that he possesses those
attributes. However, in an Ack Policy system, the policy
protecting an attribute proof object of an attribute which a
user does not hold is always false. No opponent can distinguish
between two policies which they cannot satisfy since
all they know is that they have failed to satisfy them. And
we are unconcerned with policies which they have satisfied.
Thus, we know that the opponent cannot gain any useful information
about the policies which they have not satisfied,
and hence cannot beat the a priori odds for those policies.
In the policy database system, a user who has a given sensitive attribute chooses their own
policy and submits it anonymously to the policy database
for that attribute. The policy database uses pseudonymous
certificates to verify that users who submit policies actually
have the attribute, in a manner that will be discussed later
in section 3.2. Then users who do not have the attribute
will pull policies at random from the database to use as
their own. The contents of the policy database are public,
so any user who wishes to can draw a random policy from
the database.
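To make the mechanics concrete, the following minimal sketch (in Python; the function and parameter names are ours and not part of any proposed interface) shows how a user's advertised policy for a single possession-sensitive attribute would be chosen under this scheme:

import random

def choose_policy(has_attribute, own_policy, database_policies):
    """Return the policy a user advertises for one sensitive attribute.

    has_attribute     -- whether this user actually holds the attribute
    own_policy        -- the policy the user would write for themselves
    database_policies -- public list of policies submitted (anonymously)
                         by users who do hold the attribute
    """
    if has_attribute:
        # Holders keep their own policy (and submit it to the database).
        return own_policy
    # Non-holders camouflage themselves with a randomly drawn holder policy.
    return random.choice(database_policies)

Because drawn policies come from the same pool that holders submit to, both groups advertise (approximately) the same distribution of policies, which is the property analyzed in Section 3.1.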
In our system, each user uses a single policy to protect all
the information objects associated with an attribute. They
neither acknowledge that they have the attribute nor prove
that they do until the policy has been satisfied. This means
that users are allowed to have policies which protect attributes
which they do not hold. The policy in our system
may be seen as the combination of the Ack policy and a
traditional access control policy for attribute proofs.
The goal of this system is to ensure that the policy is in
a separate inference component from the attribute status
object, thus guaranteeing that inferences between policies
and attribute status objects cannot be made.
This system is workable for the following reasons. We know that policies cannot require the lack of an attribute; thus, users who do not have a given attribute will never suffer from their policy for that attribute being too strong. Changes
in the policy which protects an attribute that they do not
have may vary the length of the trust negotiation, but it
will never cause them to be unable to complete transactions
which they would otherwise be able to complete. Also, we
deal only with possession sensitive attributes. We do not
deal with attributes where it is at all sensitive to lack them.
As such, users who do not have the attribute cannot have
their policies be too weak. Since there is no penalty for those
users for their policies being either too weak or too strong,
they can have whatever policy is most helpful for helping
disguise the users who do possess the attribute.
This also means that users who do not have the attribute
do not need to trust the policy database since no policy
which the database gives them would be unacceptable to
them. Users who have the attribute, however, do need to
trust that the policy database will actually randomly distribute
policies to help camouflage their policies. They do
not, however, need to trust the policy database to act appropriately
with their sensitive information because all information
is anonymized.
3.1 Safety of the Approach of Policy Databases
Let us describe the Policy Database system in terms of
our model. Again the opponent and the sets of principals,
attributes, configurations, and policies need no special comment
.
Because we only have policies protecting the services and attribute status objects, the set of protected objects is O = S ∪ A. Also, each attribute proving service and attribute status object are protected by the same policy: ∀t ∈ T, ∀k ∈ K, q_{a_t}(k) = q_{s_t}(k).
This system is only designed to deal with inferences from
policies to attribute possession, so we assume that every attribute
status object is in a different inference component.
If the policies do actually appear to be completely random,
then policies and attribute status objects should be in separate
inference components as well.
The obvious question is whether Policy Database systems actually guarantee that this occurs. The answer is that they do not guarantee it for any finite number of users, because the distribution of policies among users who lack an attribute is unlikely to match the distribution among those who hold it exactly. This is largely due to a combination of rounding effects and the natural unevenness of random selection. However, as the number of users in the system approaches infinity, the system approaches this condition.
In an ideal system, the distribution of policies would be
completely random. If an opponent observes that some number
of principals had a given policy for some attribute, this
would give them no information about whether or not any
of those users had the attribute. However, in the Policy
Database system, every policy which is held is known to be
held by at least one user who has the attribute. As such, we
need to worry about how even the distributions of different
policies are.
We can describe and quantify the difference which exists
between a real implementation of our system and the ideal.
There are two reasons for a difference to exist. The first is
difference due to distributions being discrete. For example,
let us say that there are five users in our system, two of whom have some attribute and three who do not. Let us
also say that the two users with the attribute each have
different policies. For the distributions to be identical, each
of those policies would need to be selected by one and a
half of the remaining three users. This, obviously, cannot
happen. We refer to this difference as rounding error.
The second is difference due to the natural unevenness of
random selection.
The distributions tend towards evenness
as the number of samples increases, but with any finite
number of users, the distributions are quite likely to vary
some from the ideal.
These differences can both be quantified the same way:
as a difference between the expected number of principals
who have a policy and the actual number. If the opponent
knows that one half of the principals have an attribute and
one half do not, and they observe that among four users,
there are two policies, one of which is held by three users
and the other by one user, then they can know that the user
with the unique policy holds the attribute. In general, any
time the number of users who share a policy is less than the
expectation, it is more likely that a user who has that policy
also has the attribute. Information is leaked whenever there is a difference between the expected number of principals who have a policy and the actual number of principals who have that policy, in proportion to the ratio between the two.
Theorem 2. The limit of the difference between the expected
number of principals who have a policy and the actual
number of principals who have the policy as the number of
users goes to infinity is 0.
The proof of Theorem 2 can be found in Appendix B. The
intuition behind it is that as the number of samples grows
very large, the actual distribution approaches the ideal distribution
and the rounding errors shrink towards zero.
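The following small simulation is our own illustration of this convergence; the attribute fraction h and the holders' policy distribution are arbitrary assumptions. It measures the average relative gap between the expected and actual number of users holding each policy as the population grows:

import random
from collections import Counter

def relative_policy_gap(n, h, policy_weights, seed=0):
    """Average |actual - expected| / expected over policies, for n users.

    h              -- fraction of users who hold the attribute
    policy_weights -- assumed distribution of policies among attribute holders
    """
    rng = random.Random(seed)
    policies = list(policy_weights)
    weights = [policy_weights[p] for p in policies]

    holders = round(n * h)
    # Holders pick their own policies; non-holders draw uniformly from the
    # policies that holders submitted to the database.
    submitted = rng.choices(policies, weights=weights, k=holders)
    drawn = [rng.choice(submitted) for _ in range(n - holders)]

    counts = Counter(submitted) + Counter(drawn)
    gaps = [abs(counts[p] - n * policy_weights[p]) / (n * policy_weights[p])
            for p in policies]
    return sum(gaps) / len(gaps)

weights = {"strict": 0.5, "medium": 0.3, "lenient": 0.2}
for n in (100, 10_000, 1_000_000):
    print(n, round(relative_policy_gap(n, h=0.3, policy_weights=weights), 4))

The printed gaps shrink roughly like 1/sqrt(n), in line with Theorem 2.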
3.2 Attacks and Countermeasures
Until now, we have only proven things about a system
which is assumed to be in some instantaneous unchanging
state. In the real world we have to deal with issues related
to how policies change over time and multiple interactions.
Therefore, we also want the policy which a given user randomly
selects from the database to be persistent. Otherwise
an adversary would simply make multiple requests to the
same user over time and see if the policy changed. If it did,
especially if it changed erratically, it would indicate that the
user was repeatedly drawing random policies. Instead, the
user should hold some value which designates which policy
the user has.
An obvious answer would be to have the user hold onto
the policy itself, but this would open the user up to a new attack
. If users lacking a given attribute simply grabbed onto
a policy and never changed it, this itself could be a tell. If
there were some event which occurred which made having a
given attribute suddenly more sensitive than it used to be,
then rational users who have the attribute would increase
the stringency of their policies.
For example, if a country
undertook an action which was politically unpopular on
a global scale, holders of passports issued by that country
would likely consider that more sensitive information now
and would increase their policies appropriately. The result
would then be that the average policy for people who had
cached a previously fetched policy would then be less stringent
than those who were making their own policies.
Instead of a permanent policy, it would be more sensible for a principal to receive a cookie which lets it retrieve the current policy of a particular principal, so that when principals who possess the attribute change their policies, principals who do not possess it will change theirs as well.
We also need to guard against stacking the deck. Obviously
we can restrict the database to users who actually have
the attribute by requiring the presentation of a pseudonymous
certificate [6, 7, 8, 9, 10, 18] which proves that they
have the attribute. However, we also need to assure that a
legitimate attribute holder cannot submit multiple policies
in order to skew the set of policies. To this end, we require
that each policy be submitted initially with a one-time-show
pseudonymous credential [8]. The attribute authorities can
be restricted so that they will only issue each user a single
one-time-show pseudonymous credential for each Policy
Database use. Then we can accept the policy, knowing it to
come from a unique user who holds the attribute, and issue
them a secret key which they can later use to verify that
they were the submitter of a given policy and to replace it
with an updated policy.
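A minimal sketch of the database-side bookkeeping just described (Python; the credential check is reduced to a boolean flag, and the class and method names are illustrative rather than a specification):

import secrets

class PolicyDatabase:
    """Toy policy database for a single possession-sensitive attribute."""

    def __init__(self):
        self.policies = {}   # submission id -> current policy
        self.keys = {}       # secret update key -> submission id

    def submit(self, policy, credential_is_valid_one_time_show):
        """Accept a policy from a verified, anonymous attribute holder.

        The caller is assumed to have presented a one-time-show pseudonymous
        credential; here that verification is reduced to a boolean.  Returns
        a secret key the submitter can later use to update its policy.
        """
        if not credential_is_valid_one_time_show:
            raise PermissionError("one-time-show credential required")
        sid = len(self.policies)
        self.policies[sid] = policy
        key = secrets.token_hex(16)
        self.keys[key] = sid
        return key

    def update(self, key, new_policy):
        """Replace a previously submitted policy, proving ownership via the key."""
        self.policies[self.keys[key]] = new_policy

    def all_policies(self):
        """The database contents are public; anyone may read and draw from them."""
        return list(self.policies.values())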
This does not prevent a user who has the attribute from
submitting a single false policy, perhaps one which is distinctly
different from normal policies. The result would be
that users who draw that policy would be known to not have
the attribute. However, under the assumptions of our system
, not having the attribute is not sensitive, so this does
not compromise safety.
3.3 Limitations
We assume that for the attribute being protected, it is not
significantly sensitive to lack the attribute. This assumption
means that our system likely cannot be used in practice to
protect all attributes. Most notably it fails when lacking an
attribute implies having or being highly likely to have some
other attribute. For example, not having a valid passport
probably means that you are a permanent resident of the
country you are currently in (although users could also be illegal immigrants or citizens of a defunct nation).
It also fails when the lack of an attribute is more sensitive
than having it. For instance, few people are going to wish
to prevent people from knowing that they have graduated
from high school, but many would consider their lack of a
high school graduation attribute to be sensitive. However,
we argue that no system can adequately handle such a case
because those who do have the attribute would likely be
unwilling to accept any system which would result in them
having to not disclose the attribute when it was useful for
them to do so. And if they still easily disclose their attribute,
then it becomes impossible for those without to disguise
their lack.
Similarly to the Ack Policy system, policy databases also
do not generally handle any form of probabilistic inference
rule between attributes. The existence of such a rule would
likely imply certain relationships between policies which most
users would enforce. If the possession of a city library card
suggested with strong probability that the user was a city
resident, then perhaps all users who have both would have
a policy protecting their library card which is stricter than
the policy protecting their city residency. However, as there
is variety in the policies of individuals, a user could pick a
random pair of policies which did not have this property.
That would then be a sure tell that he did not actually have
both of those attributes.
Another drawback of the system is that it requires a policy
database service be available on-line.
This decreases
the decentralized nature of trust negotiation. However, our
approach is still less centralized than Ack Policies, which
require that users cooperate to determine a universally accepted
Ack policy. Moreover, this centralization could be reduced by decentralizing the database itself. Although we
discuss the database as if it were a single monolithic entity,
it could be made of a number of different entities acting
together. The only requirement is that it accepts policies
from unique users who have the attribute and distributes
them randomly.
RELATED WORK
The framework of automated trust negotiation was first
proposed by Winsborough et al.
[24].
Since then, great
efforts have been put forward to address challenges in a variety
of aspects of trust negotiation.
An introduction to
trust negotiation and related trust management issues can
be found in [25]. As described in detail there, a number of
trust negotiation systems and supporting middleware have
been proposed and/or implemented in a variety of contexts
(e.g., [3, 4, 11, 12, 14, 17, 19]). Information leakage during
trust negotiation is studied in [13, 5, 15, 20, 21, 22,
23]. The work by Winsborough and Li has been discussed
in detail in previous sections. Next, we discuss several other
approaches.
In [20], non-response is proposed as a way to protect
possession-sensitive attributes.
The basic idea is to have
Alice, the owner of a sensitive attribute, act as if she does
not have the attribute. Only later when the other party
accidentally satisfies her policy for that attribute will Alice
disclose that attribute. This approach is easy to deploy in
trust negotiation. But clearly it will often cause a potentially
successful negotiation to fail because of Alice's conservative
response.
Yu and Winslett [26] introduce a technique called policy
migration to mitigate the problem of unauthorized inference.
In policy migration, Alice dynamically integrates her policies for sensitive attributes with those of other attributes, so
that she does not need to explicitly disclose policies for sensitive
attributes. Meanwhile, policy migration makes sure
that "migrated" policies are logically equivalent to original
policies, and thus guarantees the success of the negotiation
whenever possible. On the other hand, policy migration is
not a universal solution, in the sense that it may not be applicable
to all the possible configurations of a negotiation.
Further, it is subject to a variety of attacks. In other words,
it only seeks to make unauthorized inference harder instead
of preventing it completely.
Most existing trust negotiation frameworks [16, 17, 28]
assume that the appropriate access control policies can be
shown to Bob when he requests access to Alice's resource.
However, realistic access control policies also tend to contain
sensitive information, because the details of Alice's policy
for the disclosure of a credential C tends to give hints about
C's contents. More generally, a company's internal and external
policies are part of its corporate assets, and it will
not wish to indiscriminately broadcast its policies in their
entirety. Several schemes have been proposed to protect the
disclosure of sensitive policies. In [4], Bonatti and Samarati suggest dividing a policy into two parts: prerequisite rules and requisite rules. The constraints in a requisite rule will not be disclosed until those in the prerequisite rules are satisfied.
In [19], Seamons et al. proposed organizing a policy into a
directed graph so that constraints in a policy can be disclosed
gradually. In [26], access control policies are treated as first-class resources and thus can be protected in the same manner as services and credentials.
Recently, much work has been done on mutual authentication
and authorization through the use of cryptographic
techniques that offer improved privacy guarantees. For example
, Balfanz et al. [1] designed a secret-handshake scheme
where two parties reveal their memberships in a group to
each other if and only if they belong to the same group. Li
et al. [15] proposed a mutual signature verification scheme
to solve the problem of cyclic policy interdependency in trust
negotiation. Under their scheme, Alice can see the content
of Bob's credential signed by a certification authority CA
only if she herself has a valid certificate also signed by CA
and containing the content she sent to Bob earlier. A similar
idea was independently explored by researchers [5, 13]
to handle more complex access control policies. Note that
approaches based on cryptographic techniques usually impose
more constraints on access control policies. Therefore,
policy databases are complementary to the above work.
CONCLUSION AND FUTURE WORK
In this paper, we have proposed a general framework for
safety in automated trust negotiation. The framework is
based strictly on information gain, instead of on communication
. It thus more directly reflects the essence of safe information
flow in trust negotiation. We have also shown that
Ack policy systems are safe under our framework. Based
on the framework, we have presented policy databases, a
new, safe trust negotiation system. Compared with existing
systems, policy databases do not introduce extra layers of
policies or other complications to the negotiation between
users. Further, policy databases preserve users' autonomy
in defining their own policies instead of imposing uniform
policies across all users. Therefore they are more flexible
and easier to deploy than other systems.
Further, we have discussed a number of practical issues
which would be involved in implementing our system. In
the future, we plan to address how our system can be used
in the presence of delegated credentials. And we plan to
attempt to broaden the system to account for probabilistic
inference rules which are publicly known.
Acknowledgments This research was sponsored by NSF
through IIS CyberTrust grant number 0430166 (NCSU). We
also thank anonymous reviewers for their helpful comments.
REFERENCES
[1] D. Balfanz, G. Durfee, N. Shankar, D. Smetters, J. Staddon,
and H. Wong. Secret Handshakes from Pairing-Based Key
Agreements. In IEEE Symposium on Security and Privacy,
Berkeley, CA, May 2003.
[2] M. Blaze, J. Feigenbaum, J. Ioannidis, and A. Keromytis. The
KeyNote Trust Management System Version 2. In Internet
Draft RFC 2704, September 1999.
[3] M. Blaze, J. Feigenbaum, and A. D. Keromytis. KeyNote:
Trust Management for Public-Key Infrastructures. In Security
Protocols Workshop, Cambridge, UK, 1998.
[4] P. Bonatti and P. Samarati. Regulating Service Access and
Information Release on the Web. In Conference on Computer
and Communications Security, Athens, November 2000.
[5] R.W. Bradshaw, J.E. Holt, and K.E. Seamons. Concealing
Complex Policies in Hidden Credentials. In ACM Conference
on Computer and Communications Security, Washington,
DC, October 2004.
[6] S. Brands. Rethinking Public Key Infrastructures and Digital
Certificates: Building in Privacy. The MIT Press, 2000.
[7] J. Camenisch and E.V. Herreweghen. Design and
Implementation of the Idemix Anonymous Credential System.
In ACM Conference on Computer and Communications
Security, Washington D.C., November 2002.
[8] J. Camenisch and A. Lysyanskaya. Efficient Non-Transferable
Anonymous Multi-Show Credential System with Optional
Anonymity Revocation. In EUROCRYPT 2001, volume 2045
of Lecture Notes in Computer Science. Springer, 2001.
[9] D. Chaum. Security without Identification: Transactions
Systems to Make Big Brother Obsolete. Communications of
the ACM, 24(2), 1985.
[10] I.B. Damgård. Payment Systems and Credential Mechanism
with Provable Security Against Abuse by Individuals. In
CRYPTO'88, volume 403 of Lecture Notes in Computer
Science. Springer, 1990.
[11] A. Herzberg, J. Mihaeli, Y. Mass, D. Naor, and Y. Ravid.
Access Control Meets Public Key Infrastructure, Or: Assigning
Roles to Strangers. In IEEE Symposium on Security and
Privacy, Oakland, CA, May 2000.
[12] A. Hess, J. Jacobson, H. Mills, R. Wamsley, K. Seamons, and
B. Smith. Advanced Client/Server Authentication in TLS. In
Network and Distributed System Security Symposium, San
Diego, CA, February 2002.
[13] J. Holt, R. Bradshaw, K.E. Seamons, and H. Orman. Hidden
Credentials. In ACM Workshop on Privacy in the Electronic
Society, Washington, DC, October 2003.
[14] W. Johnson, S. Mudumbai, and M. Thompson. Authorization
and Attribute Certificates for Widely Distributed Access
Control. In IEEE International Workshop on Enabling
Technologies: Infrastructure for Collaborative Enterprises,
1998.
[15] N. Li, W. Du, and D. Boneh. Oblivious Signature-Based
Envelope. In ACM Symposium on Principles of Distributed
Computing, New York City, NY, July 2003.
[16] N. Li, J.C. Mitchell, and W. Winsborough. Design of a
Role-based Trust-management Framework. In IEEE
Symposium on Security and Privacy, Berkeley, California,
May 2002.
[17] N. Li, W. Winsborough, and J.C. Mitchell. Distributed
Credential Chain Discovery in Trust Management. Journal of
Computer Security, 11(1), February 2003.
[18] A. Lysyanskaya, R. Rivest, A. Sahai, and S. Wolf. Pseudonym
Systems. In Selected Areas in Cryptography, 1999, volume
1758 of Lecture Notes in Computer Science. Springer, 2000.
[19] K. Seamons, M. Winslett, and T. Yu. Limiting the Disclosure
of Access Control Policies during Automated Trust
Negotiation. In Network and Distributed System Security
Symposium, San Diego, CA, February 2001.
[20] K. Seamons, M. Winslett, T. Yu, L. Yu, and R. Jarvis.
Protecting Privacy during On-line Trust Negotiation. In 2nd
Workshop on Privacy Enhancing Technologies, San Francisco,
CA, April 2002.
[21] W. Winsborough and N. Li. Protecting Sensitive Attributes in
Automated Trust Negotiation. In ACM Workshop on Privacy
in the Electronic Society, Washington, DC, November 2002.
[22] W. Winsborough and N. Li. Towards Practical Automated
Trust Negotiation. In 3rd International Workshop on Policies
for Distributed Systems and Networks, Monterey, California,
June 2002.
[23] W. Winsborough and N. Li. Safety in Automated Trust
Negotiation. In IEEE Symposium on Security and Privacy,
Oakland, CA, May 2004.
[24] W. Winsborough, K. Seamons, and V. Jones. Automated Trust
Negotiation. In DARPA Information Survivability Conference
and Exposition, Hilton Head Island, SC, January 2000.
[25] M. Winslett, T. Yu, K.E. Seamons, A. Hess, J. Jarvis,
B. Smith, and L. Yu. Negotiating Trust on the Web. IEEE
Internet Computing, special issue on trust management, 6(6),
November 2002.
[26] T. Yu and M. Winslett. A Unified Scheme for Resource
Protection in Automated Trust Negotiation. In IEEE
Symposium on Security and Privacy, Oakland, CA, May 2003.
[27] T. Yu and M. Winslett. Policy Migration for Sensitive
Credentials in Trust Negotiation. In ACM Workshop on
Privacy in the Electronic Society, Washington, DC, October
2003.
[28] T. Yu, M. Winslett, and K. Seamons. Supporting Structured
Credentials and Sensitive Policies through Interoperable
Strategies in Automated Trust Negotiation. ACM Transactions
on Information and System Security, 6(1), February 2003.
APPENDIX
A. PROOF OF THEOREM 1
Our goal is to prove the following theorem:
There exists no opponent which can beat the a
priori odds of guessing the value of an object, o
given only information about objects which are
not in the same inference component as o, over
all principals not in M and whose policy for o M
cannot satisfy, over all random tapes, and over all
mappings of public key values to principals.
Now it follows that if the opponent can beat the a priori
odds of guessing the value of an object, o, then the opponent
can beat the a priori odds of guessing the parity of o. Hence,
if no opponent can beat the a priori odds of guessing the
parity of an object, then none can beat the odds of guessing
the value of the object.
Lemma 1. There exists no opponent which can beat the
a priori odds of guessing the parity of an object, o given
only information about objects which are not in the same
inference component as o, over all principals not in M and
whose policy for o M cannot satisfy, over all random tapes,
and over all mappings of public key values to principals.
To prove this, we begin with an assumption that there
exists some tactic which can successfully guess the parity of
o with odds better than the a priori odds for at least some
public key mappings. We are going to prove that any such
tactic cannot beat the a priori odds on average across all
mappings because there must be more mappings where it
fails to beat the a priori odds than where it beats them.
Just to be clear, the tactic is allowed to interact with
principals whose policy for o it can satisfy. It just does not
get to guess about the value of o for those principals, as it
is entitled to beat the a priori odds for them. Hence, doing
so is not considered a leakage in the system.
Because the tactic is a deterministic Turing-equivalent
computational machine, when it outputs its final guesses,
it must output them in some order. We will define n to
be the number of users, |K|. We will number the series of principals k_1, k_2, ..., k_n. Without loss of generality, we can
assume that every principal's strategy's random tape has
some fixed value, resulting in them behaving in a strictly
deterministic manner. Therefore, as the tactic and strategies
are deterministic, the only remaining variable is the
mapping of public keys to principals.
Next we will fix the sequence of public keys.
Because
public keys are randomly chosen to begin with, and we are
varying over the set of all public-key to user mappings, we
can do this without loss of generality. The order in which
guesses are made must in some way depend only on the a
priori knowledge, the public keys, and the communications
which the tactic has with the strategies. So, if all of these
things are kept constant, the guesses will not change.
Let us suppose that a fraction h of the population whose policy for o has not been satisfied has one parity value, and a fraction 1 − h of the population has the other. Without loss of generality, we assume that h ≥ 1 − h. We determine h by calculating the relative a priori probabilities given the distribution of the values of the object.
The a priori probability of successfully guessing which parity a given user's object has is h. Now, if there exists some order of interaction, i, which beats the a priori odds, then its number of correct guesses must be expressible as hn + ε for some ε > 0. We can break the set of users whose policies for o M cannot meet down into a group of sets according to the values of the objects which are in inference components other than the one which contains o. We will define a set of sets, VG, such that each vg ∈ VG is a set of users all of which have the same values for all objects in all inference components other than the one which contains o.
Now, let us consider the possibility of rearranging the public keys of members of this group. Because the strategies in use are defined to be deterministic with respect to the policies governing the attributes which distinguish the two configurations, and because the opponent is defined to be deterministic, it follows that if we were to rearrange users' public keys from the original mapping to create a new mapping, the communication would be the same in both. Since the communication would be the same, the tactic would make the same guesses relative to the order of users, because it is a deterministic machine and must produce the same output given the same input. The end result is that switching two users who are members of the same value group will result in the guesses of the parity of those two users switching as well.
We can then consider the set of all arrangements of public
keys formed by switching principals around within their
value groups, which we shall call I. So the question at hand,
then, is whether or not the expected value of ε across all
members of I is positive. If we can demonstrate that it is
not, then no successful opponent can exist.
Here we introduce another lemma. Proof of this lemma is
now sufficient to establish our earlier lemma.
Lemma 2. The expected value of ε across all public key mappings is less than or equal to zero.
If we have some quantity of extra correct guesses, ε, for some public key mapping i, then these guesses must be distributed over some set of value groups. If ε is to be positive on average, then at least some value groups must average a number of correct guesses above the a priori probability over all arrangements in I.
Let us assume that we have one such group vg. Because the distributions of values of items in other inference components are defined to be precisely independent of o, we can know that in each group, there must be a fraction h of the members which have one parity and 1 − h which have the other. So, in vg there will be x = h|vg| principals with the first parity and y = (1 − h)|vg| principals with the second, and the a priori expected number of correct guesses would be x.
If, for some mapping, i, the tactic is successful, then there must be some number of correct guesses x + ε where ε > 0. We also know that ε ≤ y simply because the tactic is limited in total correct guesses to |vg| = x + y. As the number of correct guesses is x + ε, it must follow that the number of incorrect guesses is y − ε.
Further, we need to note that the tactic must make some
quantity of first parity guesses and some quantity of second
parity guesses. Obviously, these quantities need to add up
to |vg|, but need not match up with x and y. Every extra
first parity or second parity guess guarantees at least one
mistake, but even with several mistakes, it is quite possible
to beat the a priori odds for some arrangements. So we
define x + c to be the number of first parity guesses and
y - c to be the number of second parity guesses.
Now, we know that each increase of one in |c| guarantees at least one wrong guess, so we have a bound of ε + |c| ≤ y. Further, we know that since c is fixed (as it is not dependent on the arrangement, only the guesses, which are unchanging), the only way to gain a wrong guess is to swap a first parity principal with a second parity principal, which must necessarily create two wrong guesses. So we can quantify the number of wrong first parity guesses and the number of wrong second parity guesses using the terms we have set up. Specifically, there must be ½(y − ε + c) incorrect first parity guesses, and ½(y − ε − c) incorrect second parity guesses.
Now we can determine the number of arrangements of principals which will create x + ε correct guesses. Specifically, we look at the total number of principals which are first parity and choose a way to arrange them to match up with incorrect second parity guesses, and we look at the total number of principals which are second parity and choose a way to arrange them to match up with incorrect first parity guesses. Then we multiply that by the number of permutations of first parity principals and the number of permutations of second parity principals. And we arrive at

$$\binom{x}{\frac{1}{2}(y-\epsilon-c)}\binom{y}{\frac{1}{2}(y-\epsilon+c)}\,x!\,y!.$$
Now, similarly, we can calculate the number of arrangements which will result in x − ε correct answers. And if for all ε there are at least as many arrangements which produce x − ε correct answers as produce x + ε of them, then the average of ε cannot exceed 0. Now, if there are x − ε correct answers, then there must be y + ε incorrect ones. And we can use the same reasoning to establish that there must be ½(y + ε + c) incorrect first parity guesses and ½(y + ε − c) incorrect second parity guesses, and hence

$$\binom{x}{\frac{1}{2}(y+\epsilon-c)}\binom{y}{\frac{1}{2}(y+\epsilon+c)}\,x!\,y!$$

arrangements which result in x − ε correct guesses.
So if we can prove that this is no less
than the previous quantity then our proof will be complete.
$$\binom{x}{\frac{1}{2}(y-\epsilon-c)}\binom{y}{\frac{1}{2}(y-\epsilon+c)}\,x!\,y! \;\le\; \binom{x}{\frac{1}{2}(y+\epsilon-c)}\binom{y}{\frac{1}{2}(y+\epsilon+c)}\,x!\,y!$$

$$\iff \binom{x}{\frac{1}{2}(y-\epsilon-c)}\binom{y}{\frac{1}{2}(y-\epsilon+c)} \;\le\; \binom{x}{\frac{1}{2}(y+\epsilon-c)}\binom{y}{\frac{1}{2}(y+\epsilon+c)}$$

$$\iff \frac{x!}{(\frac{1}{2}(y-\epsilon-c))!\,(x-\frac{1}{2}(y-\epsilon-c))!}\cdot\frac{y!}{(\frac{1}{2}(y-\epsilon+c))!\,(y-\frac{1}{2}(y-\epsilon+c))!} \;\le\; \frac{x!}{(\frac{1}{2}(y+\epsilon-c))!\,(x-\frac{1}{2}(y+\epsilon-c))!}\cdot\frac{y!}{(\frac{1}{2}(y+\epsilon+c))!\,(y-\frac{1}{2}(y+\epsilon+c))!}$$

$$\iff \frac{1}{(\frac{1}{2}(y-\epsilon-c))!\,(x-\frac{1}{2}(y-\epsilon-c))!}\cdot\frac{1}{(\frac{1}{2}(y-\epsilon+c))!\,(\frac{1}{2}(y+\epsilon-c))!} \;\le\; \frac{1}{(\frac{1}{2}(y+\epsilon-c))!\,(x-\frac{1}{2}(y+\epsilon-c))!}\cdot\frac{1}{(\frac{1}{2}(y+\epsilon+c))!\,(\frac{1}{2}(y-\epsilon-c))!}$$

$$\iff \frac{1}{(x-\frac{1}{2}(y-\epsilon-c))!\,(\frac{1}{2}(y-\epsilon+c))!} \;\le\; \frac{1}{(x-\frac{1}{2}(y+\epsilon-c))!\,(\frac{1}{2}(y+\epsilon+c))!}$$

$$\iff \Bigl(x-\tfrac{1}{2}(y-\epsilon-c)\Bigr)!\,\Bigl(\tfrac{1}{2}(y-\epsilon+c)\Bigr)! \;\ge\; \Bigl(x-\tfrac{1}{2}(y+\epsilon-c)\Bigr)!\,\Bigl(\tfrac{1}{2}(y+\epsilon+c)\Bigr)!$$

$$\iff \frac{(x-\frac{1}{2}(y-\epsilon-c))!}{(x-\frac{1}{2}(y+\epsilon-c))!} \;\ge\; \frac{(\frac{1}{2}(y+\epsilon+c))!}{(\frac{1}{2}(y-\epsilon+c))!}$$

$$\iff \frac{(x-\frac{1}{2}(y-\epsilon-c))!}{(x-\frac{1}{2}(y-\epsilon-c)-\epsilon)!} \;\ge\; \frac{(\frac{1}{2}(y+\epsilon+c))!}{(\frac{1}{2}(y+\epsilon+c)-\epsilon)!}$$

We define a function f(a, k) = a!/(a − k)!, i.e., the product of the k consecutive integers counting down from a. Obviously, a ≥ b implies f(a, k) ≥ f(b, k) whenever b ≥ k ≥ 0. Then we can rewrite the last inequality as f(x − ½(y − ε − c), ε) ≥ f(½(y + ε + c), ε). Noting that ε ≥ 0, and that ε + |c| ≤ y gives ε − c ≤ y, hence y + c + ε ≥ 2ε, hence ½(y + ε + c) ≥ ε, this is implied by

$$x-\tfrac{1}{2}(y-\epsilon-c) \ge \tfrac{1}{2}(y+\epsilon+c) \;\Longleftarrow\; x-\tfrac{1}{2}y \ge \tfrac{1}{2}y \;\Longleftarrow\; x \ge y \;\Longleftarrow\; h|vg| \ge (1-h)|vg| \;\Longleftarrow\; h \ge (1-h),$$

which we know to be true from our assumption at the start of the proof.
So we have proven Lemma 2, and this completes the proof.
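As a numerical sanity check on the counting argument above (our own addition, not part of the original proof), the short sketch below computes both arrangement counts for a few small, arbitrarily chosen parameter settings with x ≥ y and confirms that arrangements yielding x + ε correct guesses never outnumber those yielding x − ε:

from math import comb, factorial

def arrangements(x, y, eps, c, sign):
    """Count key arrangements giving x + sign*eps correct guesses in a value group.

    x, y -- numbers of first- and second-parity principals (x = h|vg|, y = (1-h)|vg|)
    c    -- excess of first-parity guesses over x, fixed by the tactic's output
    sign -- +1 for x + eps correct guesses, -1 for x - eps
    """
    wrong = y - sign * eps                 # total number of incorrect guesses
    if (wrong + c) % 2 != 0:
        raise ValueError("y - eps and c must have the same parity")
    wrong_first = (wrong + c) // 2         # incorrect first-parity guesses
    wrong_second = (wrong - c) // 2        # incorrect second-parity guesses
    if wrong_first < 0 or wrong_second < 0:
        return 0
    # Choose which principals land on the wrong guesses, then permute freely
    # within each parity class: C(x, .) * C(y, .) * x! * y!
    return comb(x, wrong_second) * comb(y, wrong_first) * factorial(x) * factorial(y)

# x >= y corresponds to the assumption h >= 1 - h made at the start of the proof.
for (x, y, eps, c) in [(6, 4, 2, 0), (7, 5, 2, 1), (8, 8, 2, 2)]:
    plus = arrangements(x, y, eps, c, +1)
    minus = arrangements(x, y, eps, c, -1)
    print((x, y, eps, c), plus <= minus)   # prints True in every case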
B. PROOF OF THEOREM 2
We define n to be the number of users, |K|. Because we assume that this system is in a fixed state, every user k is in some configuration g_k. Now let us examine some particular attribute, t. We know that a fraction h of users have that attribute and 1 − h do not. Let us define a set of policies L = {p | ∃t ∈ T, ∃k ∈ K, ∃q_{s_t} ∈ Q such that p = q_{s_t}(k)}. We also need to know the fraction of users who have each policy in L. As the number of users grows towards infinity, the number of possible policies stays finite, so multiple users with the attribute will wind up sharing the same policy. For every member l ∈ L, we define f_l to be the fraction of users with attribute t who have policy l, so that Σ_{l∈L} f_l = 1. We assume that as n approaches infinity, f_l approaches some fixed quantity f̂_l for every l ∈ L. Essentially, what we are assuming is that there is a fixed fraction of users with the attribute who will choose any given policy. The particular number will vary at any given time, but over time, we will approach this fraction. We should then know that for some particular policy l, the odds of a user without the attribute drawing policy l are also f_l, because policies are handed out with the same distribution with which they are submitted.
The distribution which describes how many users we are actually going to have with this policy is a binomial distribution. The variance of this binomial distribution is σ² = n(1 − h)f_l(1 − f_l). The difference between the actual and the ideal is the square root of the variance divided by the expected number of users who have a given policy, which is nf_l. Hence, the expected difference between our practical system and the ideal system is

$$\frac{\sqrt{n(1-h)f_l(1-f_l)}}{n f_l} \;=\; \sqrt{\frac{(1-h)(1-f_l)}{n f_l}}.$$
1 − h is a constant term, and f_l will approach f̂_l, which is a fixed quantity. So

$$\lim_{n\to\infty} \sqrt{\frac{(1-h)(1-f_l)}{n f_l}} = 0,$$

and we have proven that our system approaches the ideal as the number of users goes to infinity.
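For concreteness, a quick evaluation of the expression derived above (with illustrative values h = 0.3 and f_l = 0.25 that we choose arbitrarily) shows the expected relative deviation vanishing as n grows:

from math import sqrt

def relative_deviation(n, h, f_l):
    """Standard deviation of the policy count divided by its expectation,
    i.e. sqrt((1 - h) * (1 - f_l) / (n * f_l))."""
    return sqrt((1 - h) * (1 - f_l) / (n * f_l))

for n in (100, 10_000, 1_000_000):
    print(n, round(relative_deviation(n, h=0.3, f_l=0.25), 5))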
| Privacy;Trust Negotiation;Attribute-based Access Control |
151 | Probabilistic Author-Topic Models for Information Discovery | We propose a new unsupervised learning technique for extracting information from large text collections. We model documents as if they were generated by a two-stage stochastic process. Each author is represented by a probability distribution over topics, and each topic is represented as a probability distribution over words for that topic. The words in a multi-author paper are assumed to be the result of a mixture of each authors' topic mixture. The topic-word and author-topic distributions are learned from data in an unsupervised manner using a Markov chain Monte Carlo algorithm . We apply the methodology to a large corpus of 160,000 abstracts and 85,000 authors from the well-known CiteSeer digital library, and learn a model with 300 topics. We discuss in detail the interpretation of the results discovered by the system including specific topic and author models, ranking of authors by topic and topics by author, significant trends in the computer science literature between 1990 and 2002, parsing of abstracts by topics and authors and detection of unusual papers by specific authors. An online query interface to the model is also discussed that allows interactive exploration of author-topic models for corpora such as CiteSeer. | INTRODUCTION
With the advent of the Web and various specialized digital
libraries, the automatic extraction of useful information
from text has become an increasingly important research
area in data mining. In this paper we discuss a new algorithm that both extracts the topics expressed in large text document collections and models how the authors of documents use those topics. The methodology is illustrated using
a sample of 160,000 abstracts and 80,000 authors from the
well-known CiteSeer digital library of computer science research
papers (Lawrence, Giles, and Bollacker, 1999). The
algorithm uses a probabilistic model that represents topics
as probability distributions over words and documents
as being composed of multiple topics. A novel feature of
our model is the inclusion of author models, in which authors
are modeled as probability distributions over topics.
The author-topic models can be used to support a variety
of interactive and exploratory queries on the set of documents
and authors, including analysis of topic trends over
time, finding the authors who are most likely to write on a
given topic, and finding the most unusual paper written by
a given author. Bayesian unsupervised learning is used to
fit the model to a document collection.
Supervised learning techniques for automated categorization of documents into known classes or topics have received considerable attention in recent years (e.g., Yang, 1998).
For many document collections, however, neither predefined
topics nor labeled documents may be available. Furthermore
, there is considerable motivation to uncover hidden
topic structure in large corpora, particularly in rapidly changing
fields such as computer science and biology, where predefined
topic categories may not accurately reflect rapidly
evolving content.
Automatic extraction of topics from text, via unsupervised
learning, has been addressed in prior work using a
number of different approaches. One general approach is
to represent the high-dimensional term vectors in a lower-dimensional
space. Local regions in the lower-dimensional
space can then be associated with specific topics. For example
, the WEBSOM system (Lagus et al. 1999) uses nonlinear
dimensionality reduction via self-organizing maps to
represent term vectors in a two-dimensional layout. Linear
projection techniques, such as latent semantic indexing
(LSI), are also widely used (Berry, Dumais, and O' Brien,
1995). For example, Deerwester et al. (1990), while not
using the term "topics" per se, state:
Roughly speaking, these factors may be thought
of as artificial concepts; they represent extracted
common meaning components of many different
words and documents.
A somewhat different approach is to cluster the documents
into groups containing similar semantic content, using
any of a variety of well-known document clustering techniques
(e.g., Cutting et al., 1992; McCallum, Nigam, and
Ungar, 2000; Popescul et al., 2000). Each cluster of documents
can then be associated with a latent topic (e.g., as
represented by the mean term vector for documents in the
cluster). While clustering can provide useful broad information
about topics, clusters are inherently limited by the fact
that each document is (typically) only associated with one
cluster. This is often at odds with the multi-topic nature of
text documents in many contexts. In particular, combinations
of diverse topics within a single document are difficult
to represent. For example, this present paper contains at
least two significantly different topics: document modeling
and Bayesian estimation. For this reason, other representations
(such as those discussed below) that allow documents
to be composed of multiple topics generally provide better
models for sets of documents (e.g., better out of sample predictions
, Blei, Ng, and Jordan (2003)).
Hofmann (1999) introduced the aspect model (also referred
to as probabilistic LSI, or pLSI) as a probabilistic
alternative to projection and clustering methods. In pLSI,
topics are modeled as multinomial probability distributions
over words, and documents are assumed to be generated
by the activation of multiple topics. While the pLSI model
produced impressive results on a number of text document
problems such as information retrieval, the parameterization
of the model was susceptible to overfitting and did not provide
a straightforward way to make inferences about new
documents not seen in the training data. Blei, Ng, and
Jordan (2003) addressed these limitations by proposing a
more general Bayesian probabilistic topic model called latent
Dirichlet allocation (LDA). The parameters of the LDA
model (the topic-word and document-topic distributions)
are estimated using an approximation technique known as
variational EM, since standard estimation methods are intractable
. Griffiths and Steyvers (2004) showed how Gibbs
sampling, a Markov chain Monte Carlo technique, could be
applied in this model, and illustrated this approach using 11
years of abstract data from the Proceedings of the National
Academy of Sciences.
Our focus here is to extend the probabilistic topic models
to include authorship information. Joint author-topic
modeling has received little or no attention as far as we
are aware. The areas of stylometry, authorship attribution,
and forensic linguistics focus on the problem of identifying
which author wrote a given piece of text. For example,
Mosteller and Wallace (1964) used Bayesian techniques to
infer whether Hamilton or Madison was the more likely author
of disputed Federalist papers. More recent work of a
similar nature includes authorship analysis of a purported
poem by Shakespeare (Thisted and Efron, 1987), identifying
authors of software programs (Gray, Sallis, and MacDonell,
1997), and the use of techniques such as support vector machines
(Diederich et al., 2003) for author identification.
These author identification methods emphasize the use of
distinctive stylistic features (such as sentence length) that
characterize a specific author. In contrast, the models we
present here focus on extracting the general semantic content
of a document, rather than the stylistic details of how
it was written. For example, in our model we omit common
"stop" words since they are generally irrelevant to the topic
of the document--however, the distributions of stop words
can be quite useful in stylometry. While "topic" information
could be usefully combined with stylistic features for author
classification we do not pursue this idea in this particular
paper.
Graph-based and network-based models are also frequently
used as a basis for representation and analysis of relations
among scientific authors. For example, Newman (2001),
Mutschke (2003) and Erten et al. (2003) use methods from
bibliometrics, social networks, and graph theory to analyze
and visualize co-author and citation relations in the
scientific literature. Kautz, Selman, and Shah (1997) developed
the interactive ReferralWeb system for exploring
networks of computer scientists working in artificial intelligence
and information retrieval, and White and Smyth
(2003) used PageRank-style ranking algorithms to analyze
co-author graphs. In all of this work only the network connectivity information is used--the text information from the
underlying documents is not used in modeling. Thus, while
the grouping of authors via these network models can implicitly
provide indications of latent topics, there is no explicit
representation of the topics in terms of the text content (the
words) of the documents.
The novelty of the work described in this paper lies in
the proposal of a probabilistic model that represents both
authors and topics, and the application of this model to a
large well-known document corpus in computer science. As
we will show later in the paper, the model provides a general
framework for exploration, discovery, and query-answering
in the context of the relationships of author and topics for
large document collections.
The outline of the paper is as follows: in Section 2 we describe
the author-topic model and outline how the parameters
of the model (the topic-word distributions and author-topic
distributions) can be learned from training data consisting
of documents with known authors. Section 3 illustrates
the application of the model to a large collection of
abstracts from the CiteSeer system, with examples of specific
topics and specific author models that are learned by
the algorithm. In Section 4 we illustrate a number of applications
of the model, including the characterization of topic
trends over time (which provides some interesting insights
on the direction of research in computer science), and the
characterization of which papers are most typical and least
typical for a given author. An online query interface to the
system is described in Section 5, allowing users to query the
model over the Web--an interesting feature of the model is
the coupling of Bayesian sampling and relational database
technology to answer queries in real-time. Section 6 contains
a brief discussion of future directions and concluding
comments.
AN OVERVIEW OF THE AUTHOR-TOPIC MODEL
The author-topic model reduces the process of writing a
scientific document to a simple series of probabilistic steps.
The model not only discovers what topics are expressed in a
document, but also which authors are associated with each
topic. To simplify the representation of documents, we use
a bag of words assumption that reduces each document to a
vector of counts, where each vector element corresponds to the number of times a term appears in the document.

[Figure 1: The graphical model for the author-topic model using plate notation. The diagram shows, for each document, the observed words w and the observed set of coauthors A_d, together with the latent author assignment x and topic assignment z for each word token; plates indicate repetition over the D documents, the N_d words in a document, the K authors, and the T topics. Given the set of coauthors, the generative steps are: 1. choose an author; 2. choose a topic given the author; 3. choose a word given the topic.]
Each author is associated with a multinomial distribution
over topics. A document with multiple authors has a distribution
over topics that is a mixture of the distributions
associated with the authors. When generating a document,
an author is chosen at random for each individual word in
the document. This author picks a topic from his or her
multinomial distribution over topics, and then samples a
word from the multinomial distribution over words associated
with that topic. This process is repeated for all words
in the document.
In the model, the authors produce words from a set of
T
topics. When T is kept small relative to the
number of authors and vocabulary size, the author-topic
model applies a form of dimensionality reduction to documents
; topics are learned which capture the variability in
word choice across a large set of documents and authors.
In our simulations, we use 300 topics (see Rosen-Zvi et al.
(2004) for an exploration of different numbers of topics).
Figure 1 illustrates the generative process with a graphical
model using plate notation. For readers not familiar
with plate notation, shaded and unshaded variables indicate
observed and latent variables respectively. An arrow indicates a conditional dependency between variables, and plates (the boxes in the figure) indicate repeated sampling, with the number of repetitions given by the variable in the bottom of the plate (see Buntine (1994) for an introduction). In the
author-topic model, observed variables not only include the words w in a document but also the set of coauthors A_d on each document d. Currently, the model does not specify the
generative process of how authors choose to collaborate. Instead
, we assume the model is provided with the authorship
information on every document in the collection.
Each author (from a set of K authors) is associated with a multinomial distribution over topics, represented by θ. Each topic is associated with a multinomial distribution over words, represented by φ. The multinomial distributions θ and φ have symmetric Dirichlet priors with hyperparameters α and β (see Rosen-Zvi et al. (2004) for details). For each word in the document, we sample an author x uniformly from A_d, then sample a topic z from the multinomial distribution associated with author x, and sample a word w from the multinomial topic distribution associated with topic z. This sampling process is repeated N_d times to form document d.
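To make this generative process explicit, here is a short Python/NumPy sketch of our own; the toy dimensions are assumptions, and theta and phi follow the θ and φ notation above:

import numpy as np

def generate_document(coauthors, theta, phi, n_words, rng):
    """Generate one document's word tokens under the author-topic model.

    coauthors -- list of author indices A_d for this document
    theta     -- K x T matrix; theta[a] is author a's distribution over topics
    phi       -- T x V matrix; phi[t] is topic t's distribution over words
    n_words   -- number of word tokens N_d to generate
    rng       -- a numpy random Generator
    """
    words = []
    for _ in range(n_words):
        author = rng.choice(coauthors)                 # step 1: author, uniformly
        topic = rng.choice(len(phi), p=theta[author])  # step 2: topic given author
        word = rng.choice(phi.shape[1], p=phi[topic])  # step 3: word given topic
        words.append(int(word))
    return words

rng = np.random.default_rng(0)
K, T, V = 3, 4, 10                         # toy numbers of authors, topics, words
theta = rng.dirichlet(np.ones(T), size=K)  # symmetric Dirichlet, as in the model
phi = rng.dirichlet(np.ones(V), size=T)
print(generate_document(coauthors=[0, 2], theta=theta, phi=phi, n_words=8, rng=rng))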
2.2 Bayesian Estimation of the Model Parameters
The author-topic model includes two sets of unknown parameters--the K author-topic distributions θ, and the T topic distributions φ--as well as the latent variables corresponding to the assignments of individual words to topics z and authors x. The Expectation-Maximization (EM) algorithm is a standard technique for estimating parameters in models with latent variables, finding a mode of the posterior distribution over parameters. However, when applied to probabilistic topic models (Hofmann, 1999), this approach is susceptible to local maxima and computationally inefficient (see Blei, Ng, and Jordan, 2003). We pursue an alternative parameter estimation strategy, outlined by Griffiths and Steyvers (2004), using Gibbs sampling, a Markov chain Monte Carlo algorithm, to sample from the posterior distribution over parameters. Instead of estimating the model parameters directly, we evaluate the posterior distribution on just x and z and then use the results to infer θ and φ.
For each word, the topic and author assignment are sampled from:

$$P(z_i = j, x_i = k \mid w_i = m, z_{-i}, x_{-i}) \;\propto\; \frac{C^{WT}_{mj} + \beta}{\sum_{m'} C^{WT}_{m'j} + V\beta}\;\frac{C^{AT}_{kj} + \alpha}{\sum_{j'} C^{AT}_{kj'} + T\alpha} \qquad (1)$$
where z_i = j and x_i = k represent the assignments of the ith word in a document to topic j and author k respectively, w_i = m represents the observation that the ith word is the mth word in the lexicon, and z_{-i}, x_{-i} represent all topic and author assignments not including the ith word. Furthermore, C^{WT}_{mj} is the number of times word m is assigned to topic j, not including the current instance, C^{AT}_{kj} is the number of times author k is assigned to topic j, not including the current instance, and V is the size of the lexicon.
During parameter estimation, the algorithm only needs to keep track of a V × T (word by topic) count matrix, and a K × T (author by topic) count matrix, both of which can be represented efficiently in sparse format. From these count matrices, we can easily estimate the topic-word distributions φ and author-topic distributions θ by:

$$\phi_{mj} = \frac{C^{WT}_{mj} + \beta}{\sum_{m'} C^{WT}_{m'j} + V\beta} \qquad (2)$$

$$\theta_{kj} = \frac{C^{AT}_{kj} + \alpha}{\sum_{j'} C^{AT}_{kj'} + T\alpha} \qquad (3)$$

where φ_{mj} is the probability of using word m in topic j, and θ_{kj} is the probability of using topic j by author k. These values correspond to the predictive distributions over new words w and new topics z conditioned on w and z.
We start the algorithm by assigning words to random topics
and authors (from the set of authors on the document).
Each Gibbs sample then constitutes applying Equation (1)
to every word token in the document collection. This sampling
process is repeated for I iterations. In this paper we
primarily focus on results based on a single sample so that
specific topics can be identified and interpreted--in tasks involving
prediction of words and authors one can average over
topics and use multiple samples when doing so (Rosen-Zvi
et al., 2004).
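The sampling update in Equation (1) and the estimates in Equations (2) and (3) translate fairly directly into code. The following compact Python/NumPy sketch is our own illustrative implementation for toy inputs; it omits the sparse count representations and other efficiency considerations that a corpus the size of CiteSeer would require:

import numpy as np

def gibbs_author_topic(docs, authors, K, T, V, alpha, beta, iters, seed=0):
    """Collapsed Gibbs sampling for the author-topic model (toy version).

    docs    -- list of documents, each a list of word indices in [0, V)
    authors -- list of coauthor lists, one per document (author ids in [0, K))
    Returns point estimates (phi, theta) computed from the final sample.
    """
    rng = np.random.default_rng(seed)
    CWT = np.zeros((V, T))                  # word-topic count matrix
    CAT = np.zeros((K, T))                  # author-topic count matrix
    assign = []                             # (author, topic) for every token

    # Random initialization of author and topic assignments.
    for d, doc in enumerate(docs):
        assign.append([])
        for w in doc:
            a, t = int(rng.choice(authors[d])), int(rng.integers(T))
            CWT[w, t] += 1
            CAT[a, t] += 1
            assign[d].append((a, t))

    for _ in range(iters):
        for d, doc in enumerate(docs):
            A_d = authors[d]
            for i, w in enumerate(doc):
                a, t = assign[d][i]
                CWT[w, t] -= 1              # remove the current token's counts
                CAT[a, t] -= 1
                # Equation (1), evaluated jointly for every (author, topic) pair.
                p_word = (CWT[w] + beta) / (CWT.sum(axis=0) + V * beta)
                p_auth = (CAT[A_d] + alpha) / (
                    CAT[A_d].sum(axis=1, keepdims=True) + T * alpha)
                p = (p_auth * p_word).ravel()
                idx = int(rng.choice(p.size, p=p / p.sum()))
                a, t = A_d[idx // T], idx % T
                CWT[w, t] += 1
                CAT[a, t] += 1
                assign[d][i] = (a, t)

    phi = (CWT + beta) / (CWT.sum(axis=0) + V * beta)                     # Equation (2)
    theta = (CAT + alpha) / (CAT.sum(axis=1, keepdims=True) + T * alpha)  # Equation (3)
    return phi, theta

docs = [[0, 1, 2, 2], [2, 3, 3, 1]]        # two tiny documents over a 4-word lexicon
authors = [[0], [0, 1]]
phi, theta = gibbs_author_topic(docs, authors, K=2, T=2, V=4,
                                alpha=0.1, beta=0.01, iters=50)

As in the text, a single final sample is used here to produce the point estimates; averaging over multiple samples would be done in the same way.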
[Figure 2: Eight example topics extracted from the CiteSeer database. Each is illustrated with the 10 most likely words and authors with corresponding probabilities. The topics shown are Topics 29, 58, 298, and 139 (top row) and Topics 52, 95, 293, and 68 (bottom row); their most likely words begin with PATTERNS/PATTERN/MATCHING, USER/INTERFACE/USERS, MAGNETIC/STARS/SOLAR, and METHODS/METHOD/TECHNIQUES in the top row, and DATA/MINING/DISCOVERY, PROBABILISTIC/BAYESIAN/PROBABILITY, RETRIEVAL/INFORMATION/TEXT, and QUERY/QUERIES/DATABASE in the bottom row.]
[Figure 3: The four most similar topics to the topics in the bottom row of
Figure 2, obtained from a different Markov chain run. The panels are labeled
Topics 276, 158, 213, and 15; their top words are DATA, PROBABILISTIC,
RETRIEVAL, and QUERY.]
AUTHOR-TOPICS FOR CITESEER
Our collection of CiteSeer abstracts contains D = 162,489
abstracts with K = 85,465 authors. We preprocessed the
text by removing all punctuation and common stop words.
This led to a vocabulary size of V = 30,799, and a total of
11,685,514 word tokens.
There is inevitably some noise in data of this form given
that many of the fields (paper title, author names, year, abstract
) were extracted automatically by CiteSeer from PDF
or postscript or other document formats. We chose the simple
convention of identifying authors by their first initial and
second name, e.g., A Einstein, given that multiple first initials
or fully spelled first names were only available for a relatively
small fraction of papers. This means of course that for
some very common names (e.g., J Wang or J Smith) there
will be multiple actual individuals represented by a single
name in the model. This is a known limitation of working
with this type of data (e.g., see Newman (2001) for further
discussion). There are algorithmic techniques that could
be used to automatically resolve these identity problems; however, in this paper,
we do not pursue these options and instead, for simplicity, work with the first-initial/last-name
representation of individual authors.
In our simulations, the number of topics T was fixed at
300 and the smoothing parameters α and β (Figure 1) were
set at 0.16 and 0.01 respectively. We ran 5 independent
Gibbs sampling chains for 2000 iterations each. On a 2GHz
PC workstation, each iteration took 400 seconds, leading to
a total run time on the order of several days per chain.
3.2 Author-Topic and Topic-Word Models for
the CiteSeer Database
We now discuss the author-topic and topic-word distributions
learned from the CiteSeer data. Figure 2 illustrates
eight different topics (out of 300), obtained at the 2000th
iteration of a particular Gibbs sampler run.
Each table in Figure 2 shows the 10 words that are most
likely to be produced if that topic is activated, and the 10
authors who are most likely to have produced a word if it is
known to have come from that topic. The words associated
with each topic are quite intuitive and, indeed, quite precise
in the sense of conveying a semantic summary of a particular
field of research. The authors associated with each topic
are also quite representative--note that the top 10 authors
associated with a topic by the model are not necessarily the
most well-known authors in that area, but rather are the
authors who tend to produce the most words for that topic
(in the CiteSeer abstracts).
The first 3 topics at the top of Figure 2, topics #163, #87
and #20 show examples of 3 quite specific and precise topics
on string matching, human-computer interaction, and astronomy
respectively. The bottom four topics (#205, #209,
#289, and #10) are examples of topics with direct relevance
to data mining--namely data mining itself, probabilistic
learning, information retrieval, and database querying and
indexing. The model includes several other topics related
to data mining, such as predictive modeling and neural networks
, as well as topics that span the full range of research
areas encompassed by documents in CiteSeer. The full list is
available at http://www.datalab.uci.edu/author-topic.
Topic #273 (top right Figure 2) provides an example of a
topic that is not directly related to a specific research area.
A fraction of topics, perhaps 10 to 20%, are devoted to "non-research
-specific" topics, the "glue" that makes up our research
papers, including general terminology for describing
methods and experiments, funding acknowledgments and
parts of addresses (which inadvertently crept into the abstracts), and so forth.
We found that the topics obtained from different Gibbs
sampling runs were quite stable. For example, Figure 3
shows the 4 most similar topics to the topics in the bottom
row of Figure 2, but from a different run. There is
some variability in terms of ranking of specific words and
authors for each topic, and in the exact values of the associated
probabilities, but overall the topics match very closely.
APPLICATIONS OF THE AUTHOR-TOPIC MODEL TO CITESEER
Of the original 162,489 abstracts in our data set, estimated
years of publication were provided by CiteSeer for 130,545 of
these abstracts. There is a steady (and well-known) increase
year by year in the number of online documents through the
1990's. From 1999 through 2002, however, the number of
documents for which the year is known drops off sharply--the
years 2001 and 2002 in particular are under-represented
in this set. This is due to the fact that it is easier for CiteSeer to
determine the date of publication of older documents, e.g.,
by using citations to these documents.
We used the yearly data to analyze trends in topics over
time. Using the same 300 topic model described earlier, the
documents were partitioned by year, and for each year all
of the words were assigned to their most likely topic using
the model. The fraction of words assigned to each topic for
a given year was then calculated for each of the 300 topics
and for each year from 1990 to 2002.
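As an illustration of this bookkeeping, the sketch below computes per-year topic fractions, assuming each word token has already been assigned its most likely topic by the model; the data-structure and function names are ours, not the authors'.

```python
from collections import Counter, defaultdict

def topic_fractions_by_year(token_topics, doc_years):
    """Fraction of words assigned to each topic, per year.

    token_topics: doc_id -> list of topic ids (one per word token in the document)
    doc_years:    doc_id -> estimated year of publication (may be missing)
    Returns: year -> {topic id: fraction of that year's tokens assigned to the topic}.
    """
    topic_counts = defaultdict(Counter)
    token_totals = Counter()
    for doc_id, topics in token_topics.items():
        year = doc_years.get(doc_id)
        if year is None:              # documents with unknown year are skipped
            continue
        topic_counts[year].update(topics)
        token_totals[year] += len(topics)
    return {year: {t: c / token_totals[year] for t, c in counts.items()}
            for year, counts in topic_counts.items()}
```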
These fractions provide interesting and useful indicators of
relative topic popularity in the research literature in recent
years. Figure 4 shows the results of plotting several different
topics. Each topic is indicated in the legend by the five
most probable words in the topic. The top left plot shows
a steady increase (roughly three-fold) in machine learning
and data mining topics. The top right plot shows a "tale of
two topics": an increase in information-retrieval coupled to
an apparent decrease in natural language processing.
On the second row, on the left we see a steady decrease in
two "classical" computer science topics, operating systems
and programming languages. On the right, however, we see
the reverse behavior, namely a corresponding substantial
growth in Web-related topics.
In the third row, the left plot illustrates trends within
database research: a decrease in the transaction and concurrency-related
topic, query-related research holding steady over time,
and a slow but steady increase in integration-related database
research. The plot on the right in the third row illustrates
the changing fortunes of security-related research--a decline
in the early 90's but then a seemingly dramatic upward trend
starting around 1995.
The lower left plot on the bottom row illustrates the somewhat
noisy trends of three topics that were "hot" in the
1990's: neural networks exhibits a steady decline since the
early 1990's (as machine learning has moved on to areas such
as support vector machines), genetic algorithms appears to
be relatively stable, and wavelets may have peaked in the
1994-98 time period.
Finally, as with any large data set there are always some
surprises in store. The final figure on the bottom right shows
two somewhat unexpected "topics". The first topic consists
entirely of French words (in fact the model discovered 3 such
French language topics). The apparent peaking of French
words in the mid-1990s is likely to be an artifact of how CiteSeer
preprocesses data rather than any indication of French
research productivity. The lower curve corresponds to a
topic consisting of largely Greek letters, presumably from
more theoretically oriented papers--fans of theory may be
somewhat dismayed to see that there is an apparent steady
decline in the relative frequency of Greek letters in abstracts
since the mid-1990s!
The time-trend results above should be interpreted with
some caution. As mentioned earlier, the data for 2001 and
2002 are relatively sparse compared to earlier years. In addition
, the numbers are based on a rather skewed sample (online
documents obtained by the CiteSeer system for which
years are known). Furthermore, the fractions per year only
indicate the relative number of words assigned to a topic
by the model and make no direct assessment of the quality
or importance of a particular sub-area of computer science.
Nonetheless, despite these caveats, the results are quite informative
and indicate substantial shifts in research topics
within the field of computer science.
In terms of related work, Popescul et al. (2000) investigated
time trends in CiteSeer documents using a document
clustering approach. 31K documents were clustered into 15
clusters based on co-citation information while the text information
in the documents was not used. Our author-topic
model uses the opposite approach. In effect we use the text
information directly to discover topics and do not explicitly
model the "author network" (although implicitly the
co-author connections are used by the model). A direct
quantitative comparison is difficult, but we can say that our
model with 300 topics appears to produce much more noticeable
and precise time-trends than the 15-cluster model.
4.2 Topics and Authors for New Documents
In many applications, we would like to quickly assess the
topic and author assignments for new documents not contained
in our subset of the CiteSeer collection. Because our
Monte Carlo algorithm requires significant processing time
for 160K documents, it would be computationally inefficient
to rerun the algorithm for every new document added to the
collection (even though from a Bayesian inference viewpoint
this is the optimal approach). Our strategy instead is to
apply an efficient Monte Carlo algorithm that runs only on
the word tokens in the new document, leading quickly to
likely assignments of words to authors and topics. We start
by assigning words randomly to co-authors and topics. We
then sample new assignments of words to topics and authors
by applying Equation 1 only to the word tokens in the new
document, each time temporarily updating the count matrices
C^{WT} and C^{AT}. The resulting assignments of words to
authors and topics can be saved after a few iterations (10
iterations in our simulations).
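A sketch of this procedure is given below, reusing the sample_token helper from the parameter-estimation sketch earlier. It is our own illustration of the idea; the exact handling of the temporary count updates is an assumption rather than the authors' code.

```python
import numpy as np

def infer_new_document(words, authors_d, C_WT, C_AT, alpha, beta, n_iter=10):
    """Assign topics and authors to the tokens of a single unseen document.

    words: word indices of the new document; authors_d: its author indices.
    C_WT / C_AT are the global count matrices learned from the training set;
    the document's tokens are added only temporarily and removed at the end.
    """
    T = C_WT.shape[1]
    z = np.random.randint(T, size=len(words))                     # random initial topics
    x = np.asarray([np.random.choice(authors_d) for _ in words])  # random initial authors
    for m, j, k in zip(words, z, x):
        C_WT[m, j] += 1
        C_AT[k, j] += 1
    for _ in range(n_iter):
        for i, m in enumerate(words):
            C_WT[m, z[i]] -= 1                                    # exclude the current token
            C_AT[x[i], z[i]] -= 1
            n_WT = C_WT.sum(axis=0)
            n_AT = C_AT.sum(axis=1)
            x[i], z[i] = sample_token(m, authors_d, C_WT, C_AT,
                                      n_WT, n_AT, alpha, beta)
            C_WT[m, z[i]] += 1
            C_AT[x[i], z[i]] += 1
    for m, j, k in zip(words, z, x):                              # leave global counts unchanged
        C_WT[m, j] -= 1
        C_AT[k, j] -= 1
    return list(zip(x.tolist(), z.tolist()))
```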
Figure 5 shows an example of this type of inference. Abstracts
from two authors, B Scholkopf and A Darwiche, were
combined together into one "pseudo-abstract" and the document
was treated as if they had both written it.
[Figure 4: Topic trends for research topics in computer science. Each of the
eight panels plots, for the years 1990-2002, the fraction of words assigned to
a topic per year. The plotted topics, identified by their five most probable
words, are: 114 (regression-variance-estimator-estimators-bias), 153
(classification-training-classifier-classifiers-generalization), 205
(data-mining-attributes-discovery-association); 280
(language-semantic-natural-linguistic-grammar), 289
(retrieval-text-documents-information-document); 60
(programming-language-concurrent-languages-implementation), 139
(system-operating-file-systems-kernel); 7 (web-user-world-wide-users), 80
(mobile-wireless-devices-mobility-ad), 275
(multicast-multimedia-media-delivery-applications); 10
(query-queries-index-data-join), 261
(transaction-transactions-concurrency-copy-copies), 194
(integration-view-views-data-incremental); 120
(security-secure-access-key-authentication), 240
(key-attack-encryption-hash-keys); 23
(neural-networks-network-training-learning), 35
(wavelet-operator-operators-basis-coefficients), 242
(genetic-evolutionary-evolution-population-ga); 47 (la-les-une-nous-est), 157
(gamma-delta-ff-omega-oe).]
[Figure 5: Automated labeling of a pseudo-abstract from two authors by the
model. Header: AUTH1 = Scholkopf_B (69%, 31%); AUTH2 = Darwiche_A (72%, 28%).
Each word of the combined Scholkopf/Darwiche abstract is marked with a
superscript 1 or 2 indicating the author to whom the model assigned it.]
These two authors
work in relatively different but not entirely unrelated
sub-areas of computer science: Scholkopf in machine learning
and Darwiche in probabilistic reasoning. The document
is then parsed by the model, i.e., words are assigned to these
authors. We would hope that the author-topic model, conditioned
now on these two authors, can separate the combined
abstract into its component parts.
Figure 5 shows the results after the model has classified
each word according to the most likely author. Note that
the model only sees a bag of words and is not aware of the
word order that we see in the figure. For readers viewing
this in color, the more red a word is the more likely it is to
have been generated (according to the model) by Scholkopf
(and blue for Darwiche). For readers viewing the figure in
black and white, the superscript 1 indicates words classified
by the model for Scholkopf, and superscript 2 for Darwiche.
The results show that all of the significant content words
(such as kernel, support, vector, diagnoses, directed, graph)
are classified correctly. As we might expect, most of the "errors"
are words (such as "based" or "criterion") that are not
specific to either author's area of research. Were we to use
word order in the classification, and classify (for example)
whole sentences, the accuracy would increase further. As it
is, the model correctly classifies 69% of Scholkopf's words
and 72% of Darwiche's.
4.3 Detecting the Most Surprising and Least
Surprising Papers for an Author
In Tables 1 through 3 we used the model to score papers
attributed to three well-known researchers in computer science
(Christos Faloutsos, Michael Jordan, and Tom Mitchell).
For each document for each of these authors we calculate
a perplexity score. Perplexity is widely used in language
modeling to assess the predictive power of a model. It is a
measure of how surprising the words are from the model's
perspective, loosely equivalent to the effective branching factor
. Formally, the perplexity score of a new unobserved document
d that contains a set of words W_d and conditioned
on a topic model for a specific author a is:

\mathrm{Perplexity}(W_d \mid a) = \exp\left( - \frac{\log p(W_d \mid a)}{|W_d|} \right)

where p(W_d \mid a) is the probability assigned by the author-topic model to the
words W_d conditioned on the single author a, and |W_d| is the number of words
in the document.
Even if the document was written by multiple authors we
evaluate the perplexity score relative to a single author in
order to judge perplexity relative to that individual.
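A minimal sketch of this score is shown below. It treats p(W_d | a) as a product over tokens of Σ_j θ_aj φ_wj, using the point estimates from Equations (2) and (3); this factorized approximation and the matrix layout are our assumptions, since the paper does not spell out the exact computation.

```python
import numpy as np

def perplexity(word_ids, author_id, phi, theta):
    """Perplexity of a document's words under a single author's topic model.

    phi:   V x T matrix, phi[m, j]   = P(word m | topic j)    (Equation 2)
    theta: K x T matrix, theta[k, j] = P(topic j | author k)  (Equation 3)
    word_ids: lexicon indices of the tokens in document d.
    """
    token_probs = phi[word_ids, :] @ theta[author_id]   # P(w_i | a) for each token
    log_p = np.log(token_probs).sum()                   # log p(W_d | a)
    return float(np.exp(-log_p / len(word_ids)))
```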
Our goal here is not to evaluate the out-of-sample predictive
power of the model, but to explore the range of perplexity
scores that the model assigns to papers from specific
authors. Lower scores imply that the words w are less surprising
to the model (lower bounded by zero). In particular
we are interested in the abstracts that the model considers
most surprising (highest perplexity) and least surprising
(lowest perplexity)--in each table we list the 2 abstracts
with the highest perplexity scores, the median perplexity,
and the 2 abstracts with the lowest perplexity scores.
Table 1 for Christos Faloutsos shows that the two papers
with the highest perplexities have significantly higher perplexity
scores than the median and the two lowest perplexity
papers. The high perplexity papers are related to "query by
example" and the QBIC image database system, while the
low perplexity papers are on high-dimensional indexing. As
far as the topic model for Faloutsos is concerned, the indexing
papers are much more typical of his work than the query
by example papers.
Tables 2 and 3 provide interesting examples in that the
most perplexing papers (from the model's viewpoint) for
each author are papers that the author did not write at
all. As mentioned earlier, by combining all T Mitchells and
M Jordans together, the data set may contain authors who
are different from Tom Mitchell at CMU and Michael Jordan
at Berkeley. Thus, the highest perplexity paper for
T Mitchell is in fact authored by a Toby Mitchell and is on
the topic of estimating radiation doses (quite different from
the machine learning work of Tom Mitchell). Similarly, for
Michael Jordan, the most perplexing paper is on software
Table 1: Papers ranked by perplexity for C. Faloutsos, from 31 documents.
Paper Title -- Perplexity Score
MindReader: Querying databases through multiple examples -- 1503.7
Efficient and effective querying by image content -- 1498.2
MEDIAN SCORE -- 603.5
Beyond uniformity and independence: analysis of R-trees using the concept of fractal dimension -- 288.9
The TV-tree: an index structure for high-dimensional data -- 217.2

Table 2: Papers ranked by perplexity for M. Jordan, from 33 documents.
Paper Title -- Perplexity Score
Software configuration management in an object oriented database -- 1386.0
Are arm trajectories planned in kinematic or dynamic coordinates? An adaptation study -- 1319.2
MEDIAN SCORE -- 372.4
On convergence properties of the EM algorithm for Gaussian mixtures -- 180.0
Supervised learning from incomplete data via an EM approach -- 179.0

Table 3: Papers ranked by perplexity for T. Mitchell, from 15 documents.
Paper Title -- Perplexity Score
A method for estimating occupational radiation dose to individuals, using weekly dosimetry data -- 2002.9
Text classification from labeled and unlabeled documents using EM -- 845.4
MEDIAN SCORE -- 411.5
Learning one more thing -- 266.5
Explanation based learning for mobile robot perception -- 264.2
configuration management and was written by Mick Jordan
of Sun Microsystems. In fact, of the 7 most perplexing papers
for M Jordan, 6 are on software management and the
JAVA programming language, all written by Mick Jordan.
However, the 2nd most perplexing paper was in fact coauthored
by Michael Jordan, but in the area of modeling of
motor planning, which is a far less common topic compared
to the machine learning papers that Jordan typically writes.
AN AUTHOR-TOPIC BROWSER
We have built a JAVA-based query interface tool that supports
interactive querying of the model (a prototype online version of the tool can be
accessed at http://www.datalab.uci.edu/author-topic). The tool allows a
user to query about authors, topics, documents, or words.
For example, given a query on a particular author the tool
retrieves and displays the most likely topics and their probabilities
for that author, the 5 most probable words for each
topic, and the document titles in the database for that author
. Figure 6(a) (top panel) shows the result of querying
on Pazzani M and the resulting topic distribution (highly-ranked
topics include machine learning, classification, rule-based
systems, data mining, and information retrieval).
Mouse-clicking on one of the topics (e.g., the data mining
topic as shown in the figure) produces the screen display to
the left (Figure 6(b)). The most likely words for this topic
and the most likely authors given a word from this topic are
then displayed. We have found this to be a useful technique
for interactively exploring topics and authors, e.g., which
authors are active in a particular research area.
Similarly, one can click on a particular paper (e.g., the
paper A Learning Agent for Wireless News Access as shown
in the lower screenshot (Figure 6(c)) and the display in the
panel to the right is then produced. This display shows the
words in the documents and their counts, the probability
distribution over topics for the paper given the word counts
(ranked by highest probability first), and a probability distribution
over authors, based on the proportion of words
assigned by the model to each topic and author respectively.
The system is implemented using a combination of a relational
database and real-time Bayesian estimation (a relatively
rare combination of these technologies for a real-time
query-answering system as far as we are aware). We use
a database to store and index both (a) the sparse author-topic
and topic-word count matrices that are learned by our
algorithm from the training data, and (b) various tables describing
the data such as document-word, document-author,
and document-title tables. For a large document set such
as CiteSeer (and with 300 topics) these tables can run into
the hundreds of megabytes of memory--thus, we do not
load them into main memory automatically but instead issue
SQL commands to retrieve the relevant records in real-time.
For most of the queries we have implemented to date the
queries can be answered by simple table lookup followed by
appropriate normalization (if needed) of the stored counts
to generate conditional probabilities. For example, displaying
the topic distribution for a specific author is simply a
matter of retrieving the appropriate record. However, when
a document is the basis of a query (e.g., as in the lower
screenshot, Figure 6(c)) we must compute in real-time the
conditional distribution of the fraction of words assigned to
each topic and author, a calculation that cannot be computed
in closed form. This requires retrieving all the relevant
word-topic counts for the words in the document via
SQL, then executing the estimation algorithm outlined in
Section 4.2 in real-time using Gibbs sampling, and displaying
the results to the user. The user can adjust the
burn-in time, the number of samples and the lag time in the
sampling algorithm--typically we have found that as few as
10 Gibbs samples give quite reasonable results (and take
on the order of 1 or 2 seconds, depending on the machine
being used and other factors).
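The sketch below illustrates the retrieval half of this path: pulling only the word-topic count records needed for one document query out of the database, after which they seed the short Gibbs run of Section 4.2. The table and column names are illustrative assumptions, not the actual schema used by the system.

```python
import sqlite3

def fetch_word_topic_counts(db_path, word_ids):
    """Fetch the sparse word-topic counts needed for one document query.

    Assumes a table word_topic_counts(word_id INTEGER, topic_id INTEGER,
    cnt INTEGER); returns {(word_id, topic_id): cnt} for the query's words.
    """
    unique_ids = sorted(set(word_ids))
    placeholders = ",".join("?" * len(unique_ids))
    query = ("SELECT word_id, topic_id, cnt FROM word_topic_counts "
             "WHERE word_id IN (%s)" % placeholders)
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(query, unique_ids).fetchall()
    return {(m, j): c for m, j, c in rows}
```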
Figure 6: Examples of screenshots from the interactive query browser for the author-topic model with (a)
querying on author Pazzani M, (b) querying on a topic (data mining) relevant to that author, and (c) querying
on a particular document written by the author.
CONCLUSIONS
We have introduced a probabilistic algorithm that can
automatically extract information about authors,
topics, and documents from large text corpora. The method
uses a generative probabilistic model that links authors to
observed words in documents via latent topics. We demonstrated
that Bayesian estimation can be used to learn such
author-topic models from very large text corpora, using CiteSeer
abstracts as a working example. The resulting CiteSeer
author-topic model was shown to extract substantial novel
"hidden" information from the set of abstracts, including
topic time-trends, author-topic relations, unusual papers for
specific authors and so forth. Other potential applications
not discussed here include recommending potential reviewers
for a paper based on both the words in the paper and the
names of the authors. Even though the underlying probabilistic
model is quite simple, and ignores several aspects of
real-world document generation (such as topic correlation,
author interaction, and so forth), it nonetheless provides a
useful first step in understanding author-topic structure in
large text corpora.
Acknowledgements
We would like to thank Steve Lawrence, C. Lee Giles, and
Isaac Council for providing the CiteSeer data used in this
paper. We also thank Momo Alhazzazi, Amnon Meyers,
and Joshua O'Madadhain for assistance in software development
and data preprocessing. The research in this paper
was supported in part by the National Science Foundation
under Grant IRI-9703120 via the Knowledge Discovery and
Dissemination (KD-D) program.
References
Blei, D. M., Ng, A. Y., and Jordan, M. I., (2003) Latent
Dirichlet allocation, Journal of Machine Learning Research
3, pp. 993-1022.
Buntine, W.L. (1994) Operations for learning with graphical
models, Journal of Artificial Intelligence Research
2, pp. 159-225.
Cutting, D., Karger, D. R., Pederson, J., and Tukey, J.
W. (1992) Scatter/Gather: a cluster-based approach
to browsing large document collections, in Proceedings
of the 15th Annual International ACM SIGIR Conference
on Research and Development in Information
Retrieval, pp. 318-329.
Deerwester, S. C., Dumais, S. T., Landauer, T. K., Furnas,
G. W., and Harshman, R. A. (1990) Indexing by latent
semantic analysis, Journal of the American Society of
Information Science, 41(6), pp. 391-407.
Diederich, J., Kindermann, J., Leopold, E., and Paass, G.
(2003) Authorship attribution with support vector machines
, Applied Intelligence 19 (1).
Erten, C., Harding, P. J., Kobourov, S. G., Wampler, K.,
and Yee, G. (2003) Exploring the computing literature
using temporal graph visualization, Technical Report,
Department of Computer Science, University of Arizona
.
Gray, A., Sallis, P., MacDonell, S. (1997) Software forensics
: Extending authorship analysis techniques to computer
programs, Proceedings of the 3rd Biannual Conference
of the International Association of Forensic
Linguists (IAFL), Durham NC.
Griffiths, T. L., and Steyvers , M. (2004) Finding scientific
topics, Proceedings of the National Academy of
Sciences, 101 (suppl. 1), 5228-5235.
Hofmann, T. (1999) Probabilistic latent semantic indexing
, in Proceedings of the 22nd International Conference
on Research and Development in Information Retrieval
(SIGIR'99).
Kautz, H., Selman, B., and Shah, M. (1997) Referral Web:
Combining social networks and collaborative filtering,
Communications of the ACM, 3, pp. 63-65.
Lagus, K, Honkela, T., Kaski, S., and Kohonen, T. (1999)
WEBSOM for textual data mining, Artificial Intelligence
Review, 13 (5-6), pp. 345-364.
Lawrence, S., Giles, C. L., and Bollacker, K. (1999) Digital
libraries and autonomous citation indexing, IEEE
Computer, 32(6), pp. 67-71.
McCallum, A., Nigam, K., and Ungar, L. (2000) Efficient
clustering of high-dimensional data sets with application
to reference matching, in Proceedings of the Sixth
ACM SIGKDD Conference on Knowledge Discovery
and Data Mining, pp. 169-178.
Mosteller, F., and Wallace, D. (1964) Applied Bayesian and
Classical Inference: The Case of the Federalist Papers,
Springer-Verlag.
Mutschke, P. (2003) Mining networks and central entities
in digital libraries: a graph theoretic approach applied
to co-author networks, Intelligent Data Analysis
2003, Lecture Notes in Computer Science 2810,
Springer Verlag, pp. 155-166.
Newman, M. E. J. (2001) Scientific collaboration networks:
I. Network construction and fundamental results, Physical
Review E, 64, 016131.
Popescul, A., Flake, G. W., Lawrence, S., Ungar, L. H., and
Giles, C. L. (2000) Clustering and identifying temporal
trends in document databases, IEEE Advances in
Digital Libraries, ADL 2000, pp. 173-182.
Rosen-Zvi, M., Griffiths, T., Steyvers, M., Smyth, P. (2004)
The author-topic model for authors and documents,
Proceedings of the 20th UAI Conference, July 2004.
Thisted, B., and Efron, R. (1987) Did Shakespeare write a
newly discovered poem?, Biometrika, pp. 445-455.
White, S. and Smyth, P. (2003) Algorithms for estimating
relative importance in networks, in Proceedings of the
Ninth ACM SIGKDD Conference on Knowledge Discovery
and Data Mining, pp. 266-275.
Yang, Y. (1999) An evaluation of statistical approaches to
text categorization, Information Retrieval, 1, pp. 69-90.
| Gibbs sampling;text modeling;unsupervised learning
152 | Proportional Search Interface Usability Measures | Speed, accuracy, and subjective satisfaction are the most common measures for evaluating the usability of search user interfaces. However, these measures do not facilitate comparisons optimally and they leave some important aspects of search user interfaces uncovered. We propose new, proportional measures to supplement the current ones. Search speed is a normalized measure for the speed of a search user interface expressed in answers per minute. Qualified search speed reveals the trade-off between speed and accuracy while immediate search accuracy addresses the need to measure success in typical web search behavior where only the first few results are interesting. The proposed measures are evaluated by applying them to raw data from two studies and comparing them to earlier measures. The evaluations indicate that they have desirable features. | INTRODUCTION
In order to study the usability of search user interfaces we
need proper measures. In the literature, speed, accuracy and
subjective satisfaction measures are common and reveal
interesting details. They have, however, a few
shortcomings that call for additional measures.
First, comparing results even within one experiment--let
alone between different experiments--is hard because the
measures are not typically normalized in the research
reports but multiple raw numbers (like answers found and
time used) are reported. Of course, unbiased comparison
between studies will always be difficult as the test setup has
a big effect on the results, but the problem is compounded
by the presentation of multiple task dependent measures. A
good measure would be as simple as possible, yet it must
not discard relevant information.
Second, the current measures do not reveal the sources of
speed differences. In particular, the relation between speed
and accuracy may be hard to understand since the current
measures for those dimensions are completely separate. For
example, it is essential to know if the increase in speed is
due to careless behavior or better success.
Third, in the web environment, a typical goal for a search is
to find just a few good enough answers to a question. This
is demonstrated by studies that show that about half of the
users only view one or two result pages per query [11].
Current search user interface usability measures do not
capture the success of such a behavior very well.
In order to address these problems, we present three new
proportional, normalized usability measures. The new
measures are designed for the result evaluation phase of the
search process [10] where real users are involved. Search
speed is a normalized speed measure expressed in answers
per minute. It makes within study comparisons simple and
between studies bit more feasible. Qualified search speed is
a combination of speed and accuracy measures that reveals
the tradeoff between speed and accuracy. It shows the
source of speed differences in terms of accuracy and is also
measured in answers per minute. Immediate search
accuracy is a measure that captures the success of result
evaluation when only the first few hits are interesting.
These new measures are evaluated by applying them to
data from real experiments and comparing them to
conventional measures.
RELATED WORK
In usability evaluations, the measurements are typically
based on the three major components of usability:
effectiveness, efficiency, and satisfaction [3, 4].
International ISO 9241-11 standard [4] defines
effectiveness as the "accuracy and completeness with which
the users achieve specified goals" and efficiency as
"resources expended in relation to the accuracy and
completeness with which users achieve goals". According
to the standard, the efficiency measure divides the
effectiveness (achieved results) by the resources used (e.g.
time, human effort, or cost). In this work, we will leave
satisfaction measures out of the discussion and concentrate
on objective quantitative measures.
Usability measurements are strongly domain dependent. In
the search user interface domain effectiveness is typically
measured in terms of accuracy (which is recognized as an
example measure in the ISO standard as well). Time (speed
of use) is typically used as the critical resource when
calculating the efficiency.
In the following we will discuss measuring practices in
typical studies evaluating search user interfaces. Note that
although almost every study in the information retrieval
community deals with searching, they tend to focus on
system performance [8] and thus only a few studies are
mentioned here.
Speed Measures
The basic approach for measuring the speed is simply to
measure the time required for performing a task, but the
actual implementation differs from study to study. In early
evaluations of the Scatter/Gather system by Pirolli et al.
[6], times were recorded simply on a task basis. In the
results they reported how many minutes it took, on average,
to complete a task. In the study by Dumais et al. [2],
roughly the same method was used, except that the times
were divided into categories according to the difficulty of
the task. Sebrechts et al. [9] used a different categorization
method where task execution times were divided into
categories according to the subject's computer experience.
Time measurements can also be recorded in a somewhat
reversed manner as Pratt and Fagan [7] did. They reported
how many results users found in four minutes. This is close
to measuring speed (achievement / time), but this
normalization to four minutes is arbitrary and does not
facilitate comparisons optimally. In a study by Dennis et al.
[1], the time to bookmark a result page was measured and
only one page was bookmarked per task. This setup makes
the comparison fairly easy since the reported time tells how
much time it takes to find a result with the given user
interface. However, this desirable feature was caused by the
setup where only one result was chosen, and other types of
tasks were not considered.
Accuracy Measures
Accuracy measures are based on the notion of relevance
which is typically determined by independent judges in
relation to a task. In information retrieval studies, accuracy
is typically a combination of two measures: recall and
precision. Recall describes the amount of relevant results
found in a search in a relation to all the relevant results in
the collection. As a perfect query in terms of recall could
return all the entries in the collection, it is counterbalanced
with the precision measure. Precision describes how clean
the result set is by describing the density of relevant results
in it. Precision, like recall, is expressed by a percentage
number which states the proportion of relevant targets in
the result set.
Recall and precision measures are designed for measuring
the success of a query. In contrast, when the success of the
result evaluation process is studied, the users need to
complete the process by selecting the interesting results.
Measures are then based on analyzing the true success of
the selections. Recall and precision measures are used here
too, but the calculation is different. In these cases recall
describes the amount of relevant results selected in relation
to the amount of them in the result set. Precision, on the
other hand, describes the density of relevant results among
the selected results.
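A short sketch of these selection-based measures, computed per task from the set of results the user selected and the set judged relevant within the shown result set; the function and parameter names are ours.

```python
def interactive_recall_precision(selected_ids, relevant_ids):
    """Recall and precision of a user's selections from one result set.

    selected_ids: results the user selected (e.g. bookmarked or checked)
    relevant_ids: results judged relevant to the task within the shown set
    """
    selected, relevant = set(selected_ids), set(relevant_ids)
    hits = len(selected & relevant)
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(selected) if selected else 0.0
    return recall, precision
```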
Veerasamy and Heikes [13] used such measures (called
interactive recall and interactive precision) in their study of
a graphical display of retrieval results. They asked
participants to judge the relevance of the results in order to
get the users' idea of the document relevance. Pirolli et al.
[6] used only the precision measure in their test of the
Scatter/Gather system. The selection of the results was
implemented by a save functionality. Dennis et al. [1] used
an approach where they reported the average relevance of
the results found with a given user interface. Relevant
results were indicated by bookmarking them. Further
variations of the measures where user interaction is taken
into account in accuracy evaluation were proposed and
used by Veerasamy and Belkin [12].
Information Foraging Theory
Stuart Card, Peter Pirolli and colleagues have conducted
extensive research on information foraging theory [5] at
Xerox PARC and the results are relevant here as well. In its
conventional form information foraging theory states that
the rate of gain of valuable information (R) can be
calculated using the formula:
R = \frac{G}{T_B + T_W}    (1)

In the formula, G is the amount of gained information, T_B is
the total time spent between information patches and T_W is
the total time spent within an information patch [5]. An
information patch is understood to mean a collection of
information such as a document collection, a search result
collection or even a single document that can be seen to be
a collection of information that requires some actions for
digesting the information. In the information foraging
process, the forager navigates first between patches and
then finds actual meaningful information within a patch.
The process is then started over by seeking a new patch.
If we discard the separation of two different types of
activities (between and within patches) for simplicity,
equation 1 states the information gain rate per time
unit. This matches common practice in the field and
is the basis for our proposed measurements as well.
The gap that is left in information foraging theory in
relation to making concrete measurements is the definition
of information gain. The gap is well justified as the
definition would unnecessarily reduce the scope of the
theory. On the other hand, when we deal with concrete
problems, we can be more specific and thus obtain
preciseness. This is our approach here: we apply the basic
relationships stated in the information foraging theory and
provide meaningful ways of measuring the gain. All this is
done in the context of evaluating search user interfaces in
the search result evaluation phase. We will get back to this
topic in the discussions of the new measures to see their
relationship to the information foraging theory in more
detail.
EXPERIMENT
We will evaluate the proposed measures using data from an
experiment of ours. This experiment was conducted to
evaluate a new search user interface idea by comparing it to
the de facto standard solution.
Our proposed user interface used automatically calculated
categories for facilitating the result access (Figure 1, left).
As the categories we used the most common words and
phrases found within the result titles and text summaries
(snippets). Stop word list and a simple stemmer were used
for improving the quality of the categories (e.g. discarding
very common words such as `and' or `is'). As the category
word (or phrase) selection was based solely on the word
frequencies, the categories were neither exclusive nor
exhaustive. There was a special built-in category for
accessing all the results as one long list. The hypothesis
behind the category user interface was such that it would
allow users to identify and locate interesting results easier
and faster than the conventional solution.
The calculated categories were presented to the user as a
list beside the actual result list. When a category was
selected from the list, the result listing was filtered to
display only those result items that contained the selected
word or phrase. There were a total of 150 results that the
user could access and from which the categories were
computed.
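As a rough sketch of how such categories can be computed from result titles and snippets, the code below counts word frequencies after stop-word removal and a crude stemming step; the stop list and stemmer are simplified stand-ins for whatever the actual system used, and phrase extraction is omitted.

```python
import re
from collections import Counter

STOP_WORDS = {"and", "is", "the", "of", "a", "to", "in", "for", "on"}  # stand-in stop list

def simple_stem(word):
    """Crude stemmer stand-in: strip a trailing 's' from longer words."""
    return word[:-1] if word.endswith("s") and len(word) > 3 else word

def compute_categories(results, n_categories=15):
    """Pick the most frequent (stemmed, non-stop) words over titles and snippets.

    results: list of (title, snippet) pairs, e.g. the 150 results shown to users.
    The special built-in 'all results' category is added by the UI, not here.
    """
    counts = Counter()
    for title, snippet in results:
        words = re.findall(r"[a-z]+", (title + " " + snippet).lower())
        counts.update(simple_stem(w) for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n_categories)]
```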
Participants
There were 20 volunteer participants (8 male, 12 female) in
the experiment. Their average age was 35 years varying
from 19 to 57 years and they were recruited from the local
university. Almost all of the participants can be regarded as
experienced computer users, but none of them was an
information technology professional.
Apparatus
There were two user interfaces to access the search results:
1.
The category interface (category UI, Figure 1, left)
presented the users with a list of 15 automatically
generated categories on the left side of the user
interface. When the user selected a category, the
corresponding results were shown on the right side of
the user interface much like in popular e-mail clients.
2.
The reference interface (reference UI, Figure 1, right)
was a Google web search engine imitation showing
results in separate pages, ten results per page. The order
of the results was defined by the search engine
(Google). In the bottom of the window, there were
controls to browse the pages in order (Previous and
Figure 1. Compared user interfaces in our experiment. Category user interface on the left,
reference user interface on the right.
Next buttons) or in random order (a radio button for
each page). There were 15 pages so that the participants
could access a total of 150 results.
Design and Procedure
The experiment had search user interface as the only
independent variable with two values: category UI and
reference UI. The values of the independent variable were
varied within the subjects and thus the analysis was done
using repeated measures tools. As dependent variables we
measured: 1) time to accomplish a task in seconds, 2)
number of results selected for a task, 3) relevance of
selected result in a three step scale (relevant, related, not
relevant), and 4) subjective attitudes towards the systems.
The experiments were carried out in a usability laboratory.
One experiment lasted approximately 45 minutes and
contained 18 (9+9) information seeking tasks in two
blocks: one carried out with the category interface and the
other using the reference interface. The order of the blocks
and the tasks were counterbalanced between the
participants. For each task, there was a ready-made query
and users did not (re)formulate the queries themselves. This
kind of restriction in the setup was necessary to properly
focus on measuring the success in the result evaluation
phase of the search.
The actual task of the participant was to "collect as many
relevant results for the information seeking task as possible
as fast as you can". The participants collected results by
using check boxes that were available beside each result
item (see Figure 1).
In the test situation there were two windows in the
computer desktop. The task window displayed information
seeking tasks for the participants who were instructed to
first read the task description, then push the `Start' button
in the task window and promptly proceed to accomplish the
task in the search window. Upon task completion
(participant's own decision or time-out), the participants
were instructed to push the `Done' button in the task
window. The time between `Start' and `Done' button
presses was measured as the total time for the task. This
timing scheme was explained to the participants. Time for
each task was limited to one minute.
Accuracy measures are based on ratings done by the
experimenter (one person). The rating judgments were
made based solely on the task description and the very
same result title and summary texts that the participants
saw in the experiment. Actual result pages were not used
because it would have added an extra variable into the
design (result summary vs. page relation), which we did not
wish. All the tasks had at least two defining concepts like in
"Find pictures of planet Mars". For relevant results, all of
the concepts were required to be present in some form
(different wording was of course allowed). Related results
were those where only the most dominant concept was
present (e.g. planet Mars). The rest of the results were
considered not relevant.
RESULTS
For comparing the proposed measures we present here the
results of our experiment using the conventional measures:
time, number of results, and precision. The time measure
did not reveal very interesting results, because the test setup
limited the total time for one task to one minute. Thus the
mean times for conditions were close to each other: 56.6
seconds (sd = 5.5) for the category UI and 58.3 seconds
(sd = 3.5) for the reference UI. The difference is not
statistically significant as repeated measures analysis of
variance (ANOVA) gives F(1,19) = 3.65, ns.
In contrast, the number of results revealed a difference. When
using the category UI the participants were able to find on
average 5.1 (sd = 2.1) results per task whereas using the
reference UI yielded on average 3.9 (sd = 1.2) selections.
The difference is significant since ANOVA gives F(1,19) =
9.24, p < .01.
The precision measure also gave a statistically significant
difference. When using the category UI on average 65%
(sd = 13) of the participants' selections were relevant in a
relation to the task. The corresponding number for the
reference UI was 49% (sd = 15). ANOVA gave
F(1,19) = 14.49, p < .01.
The results are compatible with other studies done with
similar categorizing search user interfaces. For example,
Pratt and Fagan [7] have also reported similar results in
favor of categorizing user interface. When categories work,
they enhance the result evaluation process by reducing the
number of items that need to be evaluated. Users find
interesting looking categories and evaluate only the results
within those categories. Concentration of relevant
documents in the interesting categories is higher than in the
whole result set.
SEARCH SPEED
In order to make the comparison of speed measures easier,
we suggest a proportional measure. When the search time
and number of results are combined into one measure, just
like in measuring physical speed by kilometers or miles per
hour, we get a search user interface
search speed measure
expressed in answers per minute (APM). It is calculated by
dividing the number of answers found by the time it took to
find them:
\text{search speed} = \frac{\text{answers found}}{\text{minutes searched}}    (2)
In relation to the ISO-9241-11 standard this is an efficiency
measure whereas the plain number of answers is a
(simple) effectiveness measure. In terms of information
foraging theory, we replace the G term in equation 1 with
number of results found and the time is normalized to
minutes. This concretizes the rate (R) in equation 1 to be
answers per minute. The structure of equations 1 and 2 is
essentially the same.
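As a minimal illustration, Equation (2) reduces to the one-line function below; the example numbers are the category UI means reported later in this section (5.1 answers in about 56.6 seconds).

```python
def search_speed(answers_found, seconds_searched):
    """Search speed in answers per minute (Equation 2)."""
    return answers_found / (seconds_searched / 60.0)

# Category UI means from our experiment: 5.1 answers in about 56.6 seconds
print(round(search_speed(5.1, 56.6), 1))   # -> 5.4 answers per minute
```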
Whenever two (or more) measures are reduced into one,
there is a risk of losing relevant information. This is the
case here as well. The proposed measure does not make the
distinction between a situation where one answer is found
in 10 seconds and a situation where four answers are found
in 40 seconds. In both cases the speed is 6 answers per
minute and the details of the situation are lost in the
measurement. However, we feel that speed measure is
nevertheless correct also in this case. The situation can be
compared to driving 50 km/h for 10 or 40 minutes. The
traveled distance is different, but the speed is the same.
This means that proposed speed measure does not apply in
every situation and attention must be paid in measurement
selection.
The problem of reducing two measures into one has also
been thoroughly discussed by Shumin Zhai [14] in the
context of input devices. He points out that reduction of
two Fitts' law variables (a and b) in calculating throughput
of an input device leads to a measure that is dependent of
the task. The same problem does not apply here as our
situation is not related to Fitts' law. However, our measure
is dependent on the task, but it is not dependent of the used
time or the number of results collected like previous
measures.
Evaluation
In order to evaluate the suggested measure it was applied to
the results of Scatter/Gather evaluation by Pirolli et al. [6].
In their experiment the task was to find relevant documents
for a given topic. The table below summarizes the results
(SS = similarity search, SG = scatter/gather):
Measurement                      SS       SG
Original
  Time used in minutes          10.10    30.64
  Number of answers             16.44    12.26
Search speed
  Answers per minute             1.62     0.40
The first two rows show the actual numbers reported in the
paper while the third row shows the same results in answers
per minute. It is arguably easier to understand the
relationship between the two user interfaces from the
normalized search speed measure. It communicates that the
SS condition was roughly four times faster than the SG
condition. The relation is hard to see from the original
results. In addition, measurements can be easily related to
one's own experiences with similar user interfaces because
of the normalization.
In the second table below, the search speed measure is
applied to the data from our own experiment. Here the
difference between raw numbers and normalized measure
is not as large as in the previous example because the time
used for the tasks is roughly the same in both cases due to
the test setup. Nevertheless, the suggested measure makes
the comparison easier. Note also that the fairly large
difference with the speeds in the experiment by Pirolli et al.
is presumably due to experiment set-up (tasks, conditions,
equipment, etc.).
Measurement                   Category UI   Reference UI
Raw numbers
  Time used in minutes             0.94          0.97
  Number of answers                5.1           3.9
Search speed
  Answers per minute               5.4           4.0
When an analysis of variance is calculated on the answers
per minute measure, we see a bit stronger result compared
to the conventional measures where just the number of
results revealed significant difference. Here ANOVA gives
F(1,19) = 11.3, p < .01. Slight increase in the F statistic is
due to the combination of two measures that both have a
difference in the same direction. In summary, search speed
measures the same phenomena as the previously used
measures (it is calculated from the same numbers) and it
can make distinctions between the measured objects.
QUALIFIED SEARCH SPEED
Previously used recall and precision measures do not
directly tell where possible speed differences come from or
what the relation between speed and accuracy is. The
suggested
qualified search speed measure refines the
search speed measure with categories of relevance to
address this shortcoming. To keep the measure
understandable and robust, we use only two or three
categories of relevance. Like the previous measure, the
qualified search speed is also measured in answers per
minute, with a distinction that the speed is calculated
separately for each relevance category according to the
equation 3. There RC_i stands for relevance category i
(typical categories are e.g. relevant and irrelevant).

\text{qualified search speed}_{RC_i} = \frac{\text{answers}_{RC_i}\ \text{found}}{\text{minutes searched}}    (3)
Note that the sum over all relevance categories equals
the normal search speed.
When qualified search speed is described in information
foraging terminology, we can see that the gain is now
defined more precisely than with search speed. While
search speed takes into account only the number of results,
qualified search speed adds the quality of the results into
the equation. In essence, this gives us a more accurate
estimate of the gain of information, and thus a more
accurate rate of information gain. Note that this shows also
in the rate magnitude: rate is now stated in (e.g.) number of
relevant results per minute.
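A small sketch of Equation (3): the per-category speeds are computed from one relevance label per selected result, and by construction they sum to the plain search speed of Equation (2). The labels used here (relevant, related, not relevant) follow the rating scale of our experiment; the function name is ours.

```python
from collections import Counter

def qualified_search_speed(selection_relevances, seconds_searched):
    """Answers per minute split by relevance category (Equation 3).

    selection_relevances: one label per selected result, e.g.
    ["relevant", "related", "not relevant", "relevant", ...].
    """
    minutes = seconds_searched / 60.0
    return {category: count / minutes
            for category, count in Counter(selection_relevances).items()}
```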
Evaluation
When the qualified search speed measure is applied to the
data of our experiment and compared to the simple measure
of precision, a few observations can be made. First, the
proposed measure preserves the statistically significant
difference that was observed with the conventional
precision measure. ANOVA for the speed of acquiring
relevant results gives F(1,19) = 32.4, p < .01.
Second, both measures (Figure 2) convey roughly the same
information about the precision of the user interfaces
including: 1) with the category UI more than half of the
selected results were relevant whereas with the reference
UI about half of the results were relevant, and 2) using the
category UI participants were more successful in terms of
precision. However, with the suggested qualified search
speed measure, the magnitude of the difference in precision is
not obvious, and thus the new measure cannot replace the
old one.
Third, in addition to what can be seen in the precision
chart, the qualified search speed chart (Figure 2) reveals
some interesting data. It shows that the improvement in
speed is due to the fact that participants were able to
select more relevant results while the proportion of
non-relevant results decreased slightly. The same information
could surely be acquired by combining conventional speed
and precision measures, but when the information is visible
in one figure it is arguably easier to find such a
relationship. Note also that although the new measure is
mainly concerned with the accuracy of use, it simultaneously
informs the reader about the speed of use as well.
Figure 3 makes a comparison between the new measure
and the original precision measure using the data collected
in the Scatter/Gather experiment [6]. Here it is worthwhile
to note that even though precision measures are close to
those in the previous example, the qualified search speed
measure reveals large differences between the conditions.
Qualified search speed seems to reveal the tradeoff between
accuracy and speed convincingly in this case. We can also
notice that both conditions here are much slower than those
in Figure 2 as the qualified search speed is normalized just
like the simpler search speed.
It is notable that qualified search speed does not measure
the same phenomenon as precision, and thus the two are not
interchangeable. We can imagine a situation where a high qualified
speed is associated with low precision and vice versa. In
reality this could happen when users try to be very precise
in one condition and very fast in another. On the other
hand, we saw that qualified search speed can make
clear distinctions between user interfaces, which is a
necessary quality for a useful measure.
IMMEDIATE ACCURACY
The last suggested measure captures the success of typical
web search behavior. In such a task, the user wants to find a
piece of information that would be good enough for an
information need and overall speed and accuracy are not as
important as quick success. The measure is called
immediate accuracy and it is expressed as a success rate.
The success rate states the proportion of cases where at
least one relevant result is found by the n-th selection. For
applying the measure, the order of each result selection
must be stored and their relevance must be judged
against the task. The selections for each task and participant
are then gone through in the order they were made and the
frequency of first relevant result finding is calculated for
each selection (first, second, and so on). When this figure is
divided by the total number of observations (number of
participants * number of tasks) we get the percentage of
first relevant results found at each selection. Equation 4
shows the calculation more formally, where n stands for the
n-th selection.
immediate accuracy_n = number of first relevant results at selection n / total number of observations        (4)
When the figures calculated with equation 4 are plotted into
a cumulative line chart (Figure 4) we can see when at least
one relevant result is found on average. For example, in
Figure 4, after the second selection, at least one relevant
result has been found in 79 % of the cases when using the category
user interface. Notice also that the lines do not reach the
100 % value. This means that in some of the cases the users
were not able to find any relevant results.
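The sketch below (with hypothetical selection logs) follows this procedure: it records the selection ordinal of the first relevant result per observation and accumulates the success-rate curve plotted in Figure 4.

```python
# Immediate accuracy (Equation 4), accumulated into a success-rate curve.
def immediate_accuracy_curve(observations, max_selection):
    # observations: one list per task/participant of relevance judgements (True/False),
    # in the order the results were selected.
    total = len(observations)
    first_hits = [0] * (max_selection + 1)
    for selections in observations:
        for n, relevant in enumerate(selections[:max_selection], start=1):
            if relevant:
                first_hits[n] += 1   # first relevant result found at selection n
                break
    curve, acc = [], 0.0
    for n in range(1, max_selection + 1):
        acc += first_hits[n] / total
        curve.append(acc)            # may stay below 1.0 if no relevant result is ever found
    return curve

# Hypothetical observations for three task instances.
print(immediate_accuracy_curve([[False, True], [True], [False, False]], max_selection=3))
# -> [0.333..., 0.666..., 0.666...]
```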
When looking back to information foraging theory, this
measure takes us to a different approach compared to the
previous ones. This measure abandons time as the limiting
Figure 2. Qualified search speed measure compared to the precision
measure for the data gathered in our own study.
Figure 3. Qualified search speed measure compared to the precision
measure in the Scatter/Gather study [6].
resource against which the gain is compared and replaces it
by the selection ordinal (recall that the ISO standard leaves the
choice of resource up to the domain). As this new resource
is discrete in nature, expressing the measure as a
single figure (a rate) becomes difficult, and thus, for example, a
cumulative chart is preferred for easily interpretable
figures. From another perspective of information foraging
theory, we can say that immediate accuracy is a measure
for estimating the beginning of the within patch gain slope.
Note that it is only an estimate of the beginning of the
slope, as all subsequent relevant selections are discarded in
this measure. In this view, we define an information patch
to be a search result set.
Evaluation
The evaluation is based only on our own data because the
measure requires information that is typically not reported
in the publications. Figure 4 shows that users orient themselves
faster while using the category UI, as the first selection
already produces a relevant result in 56 % of the cases. In
contrast, the reference UI produces a relevant result in
40 % of the first selections. By the second selection, the
difference is a bit greater, since in 79 % of the cases the users
have found at least one relevant result with the category UI,
while the corresponding number for the reference UI is
62 %.
In the analysis of cumulative data, the most interesting
points are those where the difference between compared
measurements changes. Change points are most important
because cumulative figures will preserve the difference if
no further changes happen. In our case the difference is
made at the first selection and remains virtually the same
afterwards. This difference is statistically significant, as
ANOVA gives F(1,19) = 12.5, p < .01, and it is preserved
throughout the selections (F(1,19) ≥ 10.4, p < .01 for all
subsequent selections).
The findings of Spink et al. [11] indicate that users select only
one or two results per query. Immediate accuracy allows us
to see the success of the studied user interface in such a
case. We can focus on a given selection and quickly see the
success rate at that point. Note that this kind of information
is not available using the conventional accuracy measures
and straightforward speed measures.
Immediate Success Speed
Another fairly simple and obvious way for measuring
immediate success would be to record the time to the first
relevant result. We did try this measure as well, but found a
problem.
In our experiment, the average time to find the first relevant
result was practically the same in both cases (20 and 21
seconds for category and reference UI respectively) and
there was no statistically significant difference. This could,
of course, be the true situation, but the number of relevant
results suggested the opposite.
The problem comes from the fact that the first relevant
result is not always found. With the category UI users were
not able to find a single relevant result for a task in 10% of
the cases, whereas the corresponding figure for the reference UI
was 21%. We felt that this is a notable difference and that it
should be visible in the measurement as well. However, we were
not able to come up with a reasonable way of
normalizing the time measurement in this respect, and thus
we do not promote this measure as such.
In addition, the results of Spink et al. [11] suggest that the
time to first relevant result is not very important for the
search process. Since searchers tend to open only one or
two results, the time does not seem to be the limiting factor,
but the number of result selections is. This also supports the
choice of immediate accuracy over the time to the first
relevant result.
DISCUSSION
Our goal was to provide search user interface designers,
researchers, and evaluators with additional measures that
would complement the current ones. The first problem with
them is that result comparison is hard, even within one
experiment. Proportional measures make within-study
comparisons easy, and in addition they let readers relate
their previous experience better to the presented results. We
proposed the normalized search speed measure, which is
expressed in answers per minute. As the measure combines
two figures (number of answers and time searched) into
one proportional number, it makes comparisons within
an experiment easy and comparisons between experiments
somewhat more feasible.
The second shortcoming of the current measures is the fact
that it is difficult to see the tradeoff between speed and
accuracy. To address this problem, we proposed the
qualified search speed measure that divides the search
Figure 4. Immediate accuracy of the category UI and the
reference UI. The measure shows the proportion of cases where a
relevant result has been found by the n-th selection (cumulative
success rate, 0-100 %, for selections 1-8).
speed measure into relevance categories. The measure
allows readers to see what the source of the speed is in terms of
accuracy. In the evaluation we showed that conventional
measures may tell only half of the story. For instance,
in the case of the Scatter/Gather experiment the precision
measure showed only moderate difference between the
systems whereas qualified speed revealed a vast difference
in the gain of relevant results. Combining speed and
accuracy measures is particularly effective in such a case as
it eliminates the need to mentally combine the two
measures (speed and accuracy).
The third weakness of the current measures is their inability
to capture users' success in typical web search behavior
where the first good-enough result is looked for. We
proposed the immediate accuracy measure to address this
shortcoming. Immediate accuracy shows the proportion of the cases
where the users are able to find at least one relevant result
by the n-th result selection. It allows readers to see how well
and how fast the users can orient themselves to the task
with the given user interface. As the measurements are
made based on finding the first relevant result, the reader
can compare how well different solutions support users'
goal of finding the first relevant answer (and presumably
few others as well) to the search task.
The proposed measures are not intended to replace the old
measures, but rather to complement them. They lessen the
mental burden posed to the reader as important information
of different type (e.g. speed, accuracy) is combined into
one proportional measure. In summary, the proposed
measures capture important characteristics of search user
interface usability and communicate them effectively.
The issue of making comparisons between experiments is
not completely solved by these new measures. We feel that
the problem is not in the properties of the new measures but
in the nature of the phenomena to be measured. In the
context of search user interfaces the test settings have a
huge effect on the results that cannot be solved simply with
new measures. One solution for the problem could be test
setup standardization. In the TREC interactive track such
an effort has been made, but it seems that the wide variety
of research questions connected to searching cannot be
addressed with a single standard test setup.
ACKNOWLEDGMENTS
This work was supported by the Graduate School in User
Centered Information Technology (UCIT). I would like to
thank Scott MacKenzie, Kari-Jouko Räihä, Poika Isokoski,
and Natalie Jhaveri for invaluable comments and
encouragement.
REFERENCES
1. Dennis, S., Bruza, P., McArthur, R. Web Searching: A
   Process-Oriented Experimental Study of Three Interactive
   Search Paradigms. Journal of the American Society for
   Information Science and Technology, Vol. 53, No. 2, 2002,
   120-133.
2. Dumais, S., Cutrell, E., Chen, H. Optimizing Search by
   Showing Results in Context. Proceedings of ACM CHI'01
   (Seattle, USA), ACM Press, 2001, 277-284.
3. Frøkjær, E., Hertzum, M., Hornbæk, K. Measuring Usability:
   Are Effectiveness, Efficiency, and Satisfaction Really
   Correlated? Proceedings of ACM CHI'2000 (The Hague,
   Netherlands), ACM Press, 2000, 345-352.
4. ISO 9241-11: Ergonomic requirements for office work with
   visual display terminals (VDTs) - Part 11: Guidance on
   usability, International Organization for Standardization,
   March 1998.
5. Pirolli, P. and Card, S. Information Foraging. Psychological
   Review, 1999, Vol. 106, No. 4, 643-675.
6. Pirolli, P., Schank, P., Hearst, M., Diehl, C. Scatter/Gather
   Browsing Communicates the Topic Structure of a Very Large
   Text Collection. Proceedings of ACM CHI'96 (Vancouver,
   Canada), ACM Press, 1996, 213-220.
7. Pratt, W., Fagan, L. The Usefulness of Dynamically
   Categorizing Search Results. Journal of the American Medical
   Informatics Association, Vol. 7, No. 6, Nov/Dec 2000, 605-617.
8. Saracevic, T. Evaluation of Evaluation in Information
   Retrieval. Proceedings of ACM SIGIR'95 (Seattle, USA),
   ACM Press, 1995, 138-146.
9. Sebrechts, M., Vasilakis, J., Miller, M., Cugini, J., Laskowski,
   S. Visualization of Search Results: A Comparative Evaluation
   of Text, 2D, and 3D Interfaces. Proceedings of ACM SIGIR'99
   (Berkeley, USA), ACM Press, 1999.
10. Shneiderman, B., Byrd, D., Croft, B. Clarifying Search: A
    User-Interface Framework for Text Searches. D-Lib Magazine,
    January 1997.
11. Spink, A., Wolfram, D., Jansen, M., and Saracevic, T.
    Searching the Web: The Public and Their Queries. Journal of
    the American Society for Information Science and Technology,
    2001, Vol. 52, No. 6, 226-234.
12. Veerasamy, A., Belkin, N. Evaluation of a Tool for
    Visualization of Information Retrieval Results. Proceedings of
    ACM SIGIR'96 (Zurich, Switzerland), ACM Press, 1996, 85-92.
13. Veerasamy, A., Heikes, R. Effectiveness of a Graphical
    Display of Retrieval Results. Proceedings of ACM SIGIR'97
    (Philadelphia, USA), ACM Press, 1997, 236-244.
14. Zhai, S. On the validity of Throughput as a Characteristic of
    Computer Input. IBM Research Report, RJ 10253, IBM
    Research Division, August 2002.
372 | usability evaluation;Search user interface;speed;usability measure;accuracy |
153 | Protected Interactive 3D Graphics Via Remote Rendering | Valuable 3D graphical models, such as high-resolution digital scans of cultural heritage objects, may require protection to prevent piracy or misuse, while still allowing for interactive display and manipulation by a widespread audience. We have investigated techniques for protecting 3D graphics content, and we have developed a remote rendering system suitable for sharing archives of 3D models while protecting the 3D geometry from unauthorized extraction . The system consists of a 3D viewer client that includes low-resolution versions of the 3D models, and a rendering server that renders and returns images of high-resolution models according to client requests. The server implements a number of defenses to guard against 3D reconstruction attacks, such as monitoring and limiting request streams, and slightly perturbing and distorting the rendered images. We consider several possible types of reconstruction attacks on such a rendering server, and we examine how these attacks can be defended against without excessively compromising the interactive experience for non-malicious users. | Protecting digital information from theft and misuse, a subset of the
digital rights management problem, has been the subject of much
research and many attempted practical solutions. Efforts to protect
software, databases, digital images, digital music files, and other
content are ubiquitous, and data security is a primary concern in
the design of modern computing systems and processes. However,
there have been few technological solutions to specifically protect
interactive 3D graphics content.
The demand for protecting 3D graphical models is significant. Contemporary
3D digitization technologies allow for the reliable and
efficient creation of accurate 3D models of many physical objects,
and a number of sizable archives of such objects have been created.
The Stanford Digital Michelangelo Project [Levoy et al. 2000], for
example, has created a high-resolution digital archive of 10 large
statues of Michelangelo, including the David. These statues represent
the artistic patrimony of Italy's cultural institutions, and the
contract with the Italian authorities permits the distribution of the
3D models only to established scholars for non-commercial use.
Though all parties involved would like the models to be widely
available for constructive purposes, were the digital 3D model of
the David to be distributed in an unprotected fashion, it would soon
be pirated, and simulated marble replicas would be manufactured
outside the provisions of the parties authorizing the creation of the
model.
Digital 3D archives of archaeological artifacts are another example
of 3D models often requiring piracy protection. Curators of such
artifact collections are increasingly turning to 3D digitization as a
way to preserve and widen scholarly usage of their holdings, by allowing
virtual display and object examination over the Internet, for
example. However, the owners and maintainers of the artifacts often
desire to maintain strict control over the use of the 3D data and
to guard against theft. An example of such a collection is [Stanford
Digital Forma Urbis Project 2004], in which over one thousand
fragments of an ancient Roman map were digitized and are being
made available through a web-based database, providing that the
3D models can be adequately protected.
Other application areas such as entertainment and online commerce
may also require protection for 3D graphics content. 3D character
models developed for use in motion pictures are often repurposed
for widespread use in video games and promotional materials. Such
models represent valuable intellectual property, and solutions for
preventing their piracy from these interactive applications would be
very useful. In some cases, such as 3D body scans of high profile
actors, content developers may be reluctant to distribute the 3D
models without sufficient control over reuse. In the area of online
commerce, a number of Internet content developers have reported
an unwillingness of clients to pursue 3D graphics projects specifically
due to the lack of ability to prevent theft of the 3D content
[Ressler 2001].
Prior technical research in the area of intellectual property protections
for 3D data has primarily concentrated on 3D digital watermarking
techniques. Over 30 papers in the last 7 years describe
steganographic approaches to embedding hidden information into
3D graphical models, with varying degrees of robustness to attacks
that seek to disable watermarks through alterations to the 3D shape
or data representation. Many of the most successful 3D watermarking
schemes are based on spread-spectrum frequency domain
transformations, which embed watermarks at multiple scales by introducing
controlled perturbations into the coordinates of the 3D
model vertices [Praun et al. 1999; Ohbuchi et al. 2002]. Complementary
technologies search collections of 3D models and examine
them for the presence of digital watermarks, in an effort to detect
piracy.
We believe that for the digital representations of highly valuable
3D objects such as cultural heritage artifacts, it is not sufficient to
detect piracy after the fact; we must instead prevent it. The computer
industry has experimented with a number of techniques for
preventing unauthorized use and copying of computer software and
digital data. These techniques have included physical dongles, software
access keys, node-locked licensing schemes, copy prevention
software, program and data obfuscation, and encryption with embedded
keys. Most such schemes are either broken or bypassed by
determined attackers, and cause undue inconvenience and expense
for non-malicious users. High-profile data and software is particularly
susceptible to being quickly targeted by attackers.
Fortunately, 3D graphics data differs from most other forms of digital
media in that the presentation format, 2D images, is fundamentally
different from the underlying representation (3D geometry).
Usually, 3D graphics data is displayed as a projection onto a 2D
display device, resulting in tremendous information loss for single
views. This property supports an optimistic view that 3D graphics
systems can be designed that maintain usability and utility, while
not being as vulnerable to piracy as other types of digital content.
In this paper, we address the problem of preventing the piracy of 3D
models, while still allowing for their interactive display and manipulation
. Specifically, we attempt to provide a solution for maintainers
of large collections of high-resolution static 3D models, such as
the digitized cultural heritage artifacts described above. The methods
we develop aim to protect both the geometric shape of the 3D
models, as well as their particular geometric representation, such
as the 3D mesh vertex coordinates, surface normals, and connectivity
information. We accept that the coarse shape of visible objects
can be easily reproduced regardless of our protection efforts, so we
concentrate on defending the high-resolution geometric details of
3D models, which may have been most expensive to model or measure
(perhaps requiring special access and advanced 3D digitizing
technology), and which are most valuable in exhibiting fidelity to
the original object.
In the following paper sections, we first examine the graphics
pipeline to identify its possible points of attack, and then propose
several possible techniques for protecting 3D graphics data from
such attacks. Our experimentation with these techniques led us to
conclude that remote rendering provides the best solution for protecting
3D graphical models, and we describe the design and implementation
of a prototype system in Section 4. Section 5 describes
some types of reconstruction attacks against such a remote rendering
system and the initial results of our efforts to guard against
them.
Possible Attacks in the Graphics Pipeline
Figure 1 shows a simple abstraction of the graphics pipeline for
purposes of identifying possible attacks to recover 3D geometry.
We note several places in the pipeline where attacks may occur:
3D model file reverse-engineering. Fig. 1(a). 3D graphics models
are typically distributed to users in data streams such as files in
common file formats. One approach to protecting the data is to
obfuscate or encrypt the data file. If the user has full access to the
data file, such encryptions can be reverse-engineered and broken,
and the 3D geometry data is then completely unprotected.
Tampering with the viewing application. Fig. 1(b). A 3D viewer
application is typically used to display the 3D model and allow for
its manipulation. Techniques such as program tracing, memory dumping
, and code replacement are practiced by attackers to obtain access
to data in use by application programs.
Graphics driver tampering. Fig. 1(c). Because the 3D geometry
usually passes through the graphics driver software on its way to
the GPU, the driver is vulnerable to tampering. Attackers can replace
graphics drivers with malicious or instrumented versions to
capture streams of 3D vertex data, for example. Such replacement
drivers are widely distributed for purposes of tracing and debugging
graphics programs.
Reconstruction from the framebuffer. Fig. 1(d). Because the
framebuffer holds the result of the rendered scene, its contents can
be used by sophisticated attackers to reconstruct the model geometry
, using computer vision 3D reconstruction techniques. The
Figure 1: Abstracted graphics pipeline showing possible attack locations
(a-e). These attacks are described in the text.
framebuffer contents may even include depth values for each pixel,
and attackers may have precise control over the rendering parameters
used to create the scene (viewing and projection transformations
, lighting, etc.). This potentially creates a perfect opportunity
for computer vision reconstruction, as the synthetic model data and
controlled parameters do not suffer from the noise, calibration, and
imprecision problems that make robust real world vision with real
sensors very difficult.
Reconstruction from the final image display. Fig. 1(e). Regardless
of whatever protections a graphics system can guarantee
throughout the pipeline, the rendered images finally displayed to
the user are accessible to attackers. Just as audio signals may be
recorded by external devices when sound is played through speakers
, the video signals or images displayed on a computer monitor
may be recorded with a variety of video devices. The images so
gathered may be used as input to computer vision reconstruction
attacks such as those possible when the attacker has access to the
framebuffer itself, though the images may be of degraded quality,
unless a perfect digital video signal (such as DVI) is available.
Techniques for Protecting 3D Graphics
In light of the possible attacks in the graphics pipeline as described
in the previous section, we have considered a number of approaches
for sharing and rendering protected 3D graphics.
Software-only rendering. A 3D graphics viewing system that does
not make use of hardware acceleration may be easier to protect from
the application programmer's point of view. Displaying graphics
with a GPU can require transferring the graphics data in precisely
known and open formats, through a graphics driver and hardware
path that is often out of the programmer's control. A custom 3D
viewing application with software rendering allows the 3D content
distributor to encrypt or obfuscate the data in a specific manner, all
the way through the graphics pipeline until display.
Hybrid hardware/software rendering. Hybrid hardware and software
rendering schemes can be used to take at least some advantage
of hardware accelerated rendering, while benefiting from software
rendering's protections as described above. In one such scheme, a
small but critically important portion of a protected model's geometry
(such as the nose of a face) is rendered in software, while the
rest of the model is rendered normally with the accelerated GPU
hardware. This technique serves as a deterrent to attackers tampering
with the graphics drivers or hardware path, but the two-phase
drawing with readback of the color and depth buffers can incur a
performance hit, and may require special treatment to avoid artifacts
on the border of the composition of the two images.
In another hybrid rendering scheme, the 3D geometry is transformed
and per-vertex lighting computations are performed in software
. The depth values computed for each vertex are distorted in
a manner that still preserves the correct relative depth ordering,
while concealing the actual model geometry as much as possible.
The GPU is then used to complete rendering, performing rasterization,
texturing, etc. Such a technique potentially keeps the 3D
vertex stream hidden from attackers, but the distortions of the depth
buffer values may impair certain graphics operations (fog computation
, some shadow techniques), and the geometry may need to be
coarsely depth sorted so that Z-interpolation can still be performed
in a linear space.
Deformations of the geometry. Small deformations in large 2D
images displayed on the Internet are sometimes used as a defense
against image theft; zoomed higher resolution sub-images with
varying deformations cannot be captured and easily reassembled
into a whole. A similar idea can be used with 3D data: subtle 3D
deformations are applied to geometry before the vertices are passed
to the graphics driver. The deformations are chosen so as to vary
smoothly as the view of the model changes, and to prohibit recovery
of the original coordinates by averaging the deformations over
time. Even if an attacker is able to access the stream of 3D data after
it is deformed, they will encounter great difficulty reconstructing
a high-resolution version of the whole model due to the distortions
that have been introduced.
Hardware decryption in the GPU. One sound approach to providing
for protected 3D graphics is to encrypt the 3D model data with
public-key encryption at creation time, and then implement custom
GPUs that accept encrypted data, and perform on-chip decryption
and rendering. Additional system-level protections would need to
be implemented to prevent readback of framebuffer and other video
memory, and to place potential restrictions on the command stream
sent to the GPU, in order to prevent recovery of the 3D data.
Image-based rendering. Since our goal is to protect the 3D geometry
of graphic models, one technique is to distribute the models
using image-based representations, which do not explicitly include
the complete geometry data. Examples of such representations
include light fields and Lumigraphs [Levoy and Hanrahan
1996; Gortler et al. 1996], both of which are highly amenable to
interactive display.
Remote rendering. A final approach to secure 3D graphics is to
retain the 3D model data on a secure server, under the control of
the content owner, and pass only 2D rendered images of the models
back to client requests. Very low-resolution versions of the models,
for which piracy is not a concern, can be distributed with special
client programs to allow for interactive performance during manipulation
of the 3D model. This method relies on good network
bandwidth between the client and server, and may require significant
server resources to do the rendering for all client requests, but
it is vulnerable primarily only to reconstruction attacks.
Discussion. We have experimented with several of the 3D model
protection approaches described above. For example, our first protected
3D model viewer was an encrypted version of the "QSplat"
[Rusinkiewicz and Levoy 2000] point-based rendering system,
which omits geometric connectivity information. The 3D
model files were encrypted using a strong symmetric block cipher
scheme, and the decryption key was hidden in a heavily obfuscated
3D model viewer program, using modern program obfuscation
techniques [Collberg and Thomborson 2000]. Vertex data was
decrypted on demand during rendering, so that only a very small
portion of the decrypted model was ever in memory, and only software
rendering modes were used.
Unfortunately, systems such as this ultimately rely on "security
through obfuscation," which is theoretically unsound from a computer
security point of view. Given enough time and resources, an
attacker will be able to discover the embedded encryption key or
otherwise reverse-engineer the protections on the 3D data. For this
reason, any of the 3D graphics protection techniques that make the
actual 3D data available to potential attackers in software can be
broken [Schneier 2000]. It is possible that future "trusted computing"
platforms for general purpose computers will be available that
make software tampering difficult or impossible, but such systems
are not widely deployed today. Similarly, the idea of a GPU with
decryption capability has theoretical merit, but it will be some years
before such hardware is widely available for standard PC computing
environments, if ever.
Thus, for providing practical, robust, anti-piracy protections for 3D
data, we gave strongest consideration to purely image-based representations
and to remote rendering. Distributing light fields at
the high resolutions necessary would involve huge, unwieldy file
sizes, would not allow for any geometric operations on the data
(such as surface measurements performed by archaeologists), and
would still give attackers unlimited access to the light field for purposes
of performing 3D reconstruction attacks using computer vision
algorithms. For these reasons, we finally concluded that the
last technique, remote rendering, offers the best solution for protecting
interactive 3D graphics content.
Remote rendering has been used before in networked environments
for 3D visualization, although we are not aware of a system specifically
designed to use remote rendering for purposes of security
and 3D content protection. Remote rendering systems have been
previously implemented to take advantage of off-site specialized
rendering capabilities not available in client systems, such as intensive
volume rendering [Engel et al. 2000], and researchers have
developed special algorithmic approaches to support efficient distribution
of rendering loads and data transmission between rendering
servers and clients [Levoy 1995; Yoon and Neumann 2000].
Remote rendering of 2D graphical content is common for Internet
services such as online map sites; only small portions of the whole
database are viewed by users at one time, and protection of the entire
2D data corpus from theft via image harvesting may be a factor
in the design of these systems.
Remote Rendering System
To test our ideas for providing controlled, protected interactive access
to collections of 3D graphics models, we have implemented
a remote rendering system with a client-server architecture, as described
below.
4.1
Client Description
Users of our protected graphics system employ a specially-designed
3D viewing program to interactively view protected 3D content
.
This client program is implemented as an OpenGL and
wxWindows-based 3D viewer, with menus and GUI dialogs to control
various viewing and networking parameters (Figure 2). The
client program includes very low-resolution, decimated versions of
the 3D models, which can be interactively rotated, zoomed, and re-lit
by the user in real-time. When the user stops manipulating the
low-resolution model, detected via a "mouse up" event, the client
program queries the remote rendering server via the network for a
Figure 2: Screenshot of the client program.
high-resolution rendered image corresponding to the selected rendering
parameters. These parameters include the 3D model name,
viewpoint position and orientation, and lighting conditions. When
the server passes the rendered image back to the client program, it
replaces the low-resolution rendering seen by the user (Figure 3).
On computer networks with reasonably low latencies, the user thus
has the impression of manipulating a high-resolution version of
the model. In typical usage for cultural heritage artifacts, we use
models with approximately 10,000 polygons for the low resolution
version, whereas the server-side models often contain tens of millions
of polygons. Such low-resolution model complexities are of little
value to potential thieves, yet still provide enough clues for the
user to navigate. The client viewer could be further extended to
cache the most recent images returned from the server and projectively
texture map them onto the low-resolution model as long as
they remain valid during subsequent rotation and zooming actions.
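A minimal sketch of the client-side request issued on "mouse up" is shown below; the endpoint, field names, and plain-text encoding are hypothetical (the deployed client additionally encrypts and obfuscates its request messages, as described later), and the requests package is assumed to be available.

```python
# Hypothetical client-side request for a high-resolution rendering.
import requests

def request_high_res_image(server_url, model_name, eye, look_at, up, light_dir):
    params = {
        "model": model_name,
        "eye": ",".join(map(str, eye)),            # viewpoint position
        "look_at": ",".join(map(str, look_at)),    # viewpoint orientation
        "up": ",".join(map(str, up)),
        "light": ",".join(map(str, light_dir)),    # lighting direction
    }
    resp = requests.get(server_url, params=params, timeout=5)
    resp.raise_for_status()
    return resp.content  # JPEG bytes that replace the low-resolution rendering

# Example usage (hypothetical URL and values):
# jpeg = request_high_res_image("https://render.example.org/render", "david",
#                               (0, 0, 5), (0, 0, 0), (0, 1, 0), (0.3, 0.5, 1.0))
```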
4.2
Server Description
The remote rendering server receives rendering requests from
users' client programs, renders corresponding images, and passes
them back to the clients. The rendering server is implemented as
a module running under the Apache 2.0 HTTP Server; as such,
the module communicates with client programs using the standard
HTTP protocol, and takes advantage of the wide variety of access
protection and monitoring tools built into Apache. The rendering
server module is based upon the FastCGI Apache module, and allows
for multiple rendering processes to be spread across any number
of server hardware nodes.
As render requests are received from clients, the rendering server
checks their validity and dispatches the valid requests to a GPU for
OpenGL hardware-accelerated rendering. The rendered images are
read back from the framebuffer, compressed using JPEG compression
, and returned to the client. If multiple requests from the same
client are pending (such as if the user rapidly changes views while
on a slow network), earlier requests are discarded, and only the
most recent is rendered. The server uses level-of-detail techniques
to speed the rendering of highly complex models, and lower
level-of-detail renderings can be used during times of high server load
to maintain high throughput rates. In practice, an individual server
node with a Pentium 4 CPU and an NVIDIA GeForce4 video card
can handle a maximum of 8 typical client requests per second; the
Figure 3: Client-side low resolution (left) and server-side high resolution
(right) model renderings.
bottlenecks are in the rendering and readback (about 100 milliseconds
), and in the JPEG compression (approximately 25 milliseconds
). Incoming request sizes are about 700 bytes each, and the
images returned from our deployed servers average 30 kB per request
.
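The per-client queueing policy described above can be sketched as follows; render_with_gpu() and jpeg_encode() stand in for the hardware rendering/readback and libjpeg steps, which are not shown, and this is only a schematic of the behaviour, not the FastCGI module itself.

```python
# Sketch of the dispatch policy: keep only the newest pending request per client.
from collections import OrderedDict

pending = OrderedDict()  # client_id -> most recent render request

def enqueue(client_id, request):
    pending[client_id] = request          # overwrites any earlier, now-stale request
    pending.move_to_end(client_id)

def serve_next(render_with_gpu, jpeg_encode, is_valid):
    if not pending:
        return None
    client_id, request = pending.popitem(last=False)   # oldest waiting client first
    if not is_valid(request):                          # reject disallowed view parameters
        return client_id, None
    image = render_with_gpu(request)                   # GPU rendering + framebuffer readback
    return client_id, jpeg_encode(image, quality=50)   # compressed image returned to client
```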
4.3
Server Defenses
In Section 2, we enumerated several possible places in the graphics
pipeline that an attacker could steal 3D graphics data. The benefit of
using remote rendering is that it leaves only 3D reconstruction from
2D images in the framebuffer or display device as possible attacks.
General 3D reconstruction from images constitutes a very difficult
computer vision problem, as evidenced by the great amount of research
effort being expended to design and build robust computer
vision systems. However, synthetic 3D graphics renderings can be
particularly susceptible to reconstruction because the attacker may
be able to exactly specify the parameters used to create the images,
there is a low human cost to harvest a large number of images, and
synthetic images are potentially perfect, with no sensor noise or
miscalibration errors. Thus, it is still necessary to defend the remote
rendering system from reconstruction attacks; below, we describe a
number of such defenses that we have implemented in combination
for our server.
Session-based defenses. Client programs that access the remote
rendering system are uniquely identified during the course of a usage
session. This allows the server to monitor and track the specific
sequence of rendering requests made by each client. Automatic
analysis of the server logs allows suspicious request streams to be
classified, such as an unusually high number of requests per unit
time, or a particular pattern of requests that is indicative of an image
harvesting program. High quality computer vision reconstructions
often require a large number of images that densely sample
the space of possible views, so we are able to effectively identify
such access patterns and terminate service to those clients. We can
optionally require recurrent user authentication in order to further
deter some image harvesting attacks, although a coalition of users
mounting a low-rate distributed attack from multiple IP addresses
could still defeat such session-based defenses.
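A rough sketch of such request-stream monitoring follows; the window size and threshold are illustrative values, not those of the deployed server.

```python
# Session-based defense sketch: flag sessions whose request rate looks like harvesting.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120

recent = defaultdict(deque)   # session_id -> timestamps of recent requests

def allow_request(session_id, now=None):
    now = time.time() if now is None else now
    q = recent[session_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    q.append(now)
    return len(q) <= MAX_REQUESTS_PER_WINDOW   # False => suspicious stream, terminate service
```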
Obfuscation. Although we do not rely on obfuscation to protect the
3D model data, we do use obfuscation techniques on the client side
of the system to discourage and slow down certain attacks. The
low-resolution models that are distributed with the client viewer
program are encrypted using an RC4-variant stream cipher, and the
keys are embedded in the viewer and heavily obfuscated. The rendering
request messages sent from the client to the server are also
encrypted with heavily obfuscated keys. These encryptions simply
serve as another line of defense; even if they were broken, attackers
would still not be able to gain access to the high resolution 3D data
except through reconstruction from 2D images.
Limitations on valid rendering requests. As a further defense,
we provide the capability in our client and remote server to constrain
the viewing conditions. Some models may have particular
"stayout" regions defined that disallow certain viewing and lighting
angles, thus keeping attackers from being able to reconstruct a
complete model. For the particular purpose of defending against the
enumeration attacks described in Section 5.1, we put restrictions on
the class of projection transformations allowed to be requested by
users (requiring a perspective projection with particular fixed field
of view and near and far planes), and we prevent viewpoints within
a small offset of the model surface.
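A sketch of this validation step is given below; the constants and the distance_to_surface() helper are illustrative stand-ins, not the system's actual limits.

```python
# Reject rendering requests that violate the viewing constraints described above.
import math

REQUIRED_FOV_DEG = 45.0
REQUIRED_NEAR, REQUIRED_FAR = 0.1, 100.0
MIN_SURFACE_OFFSET = 0.25   # object-space units, scaled to the model

def is_valid_request(req, distance_to_surface):
    if req.get("projection") != "perspective":
        return False
    if not math.isclose(req.get("fov_deg", 0.0), REQUIRED_FOV_DEG, abs_tol=1e-3):
        return False
    if (req.get("near"), req.get("far")) != (REQUIRED_NEAR, REQUIRED_FAR):
        return False
    if distance_to_surface(req["eye"]) < MIN_SURFACE_OFFSET:
        return False   # blocks views close enough for near plane sweeps
    # Per-model "stayout" regions for viewing/lighting angles could be checked here as well.
    return True
```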
Perturbations and distortions. Passive 3D computer vision reconstructions
of real-world objects from real-world images are usually
of relatively poor quality compared to the original object. This failure
inspires the belief that we can protect our synthetically rendered
models from reconstruction by introducing into the images the same
types of obstacles known to plague vision algorithms. The particular
perturbations and distortions that we use are described below;
we apply these defenses to the images only to the degree that they
do not distract the user viewing the models. Additionally, these defenses
are applied in a pseudorandomly generated manner for each
different rendering request, so that attackers cannot systematically
determine and reverse their effects, even if the specific form of the
defenses applied is known (such as if the source code for the rendering
server is available). Rendering requests with identical parameters
are mapped to the same set of perturbations, in order to
deter attacks which attempt to defeat these defenses by averaging
multiple images obtained under the same viewing conditions.
Perturbed viewing parameters. We pseudorandomly introduce
subtle perturbations into the view transformation matrix
for the images rendered by the server; these perturbations
have the effect of slightly rotating, translating, scaling, and
shearing the model. The range of these distortions is bounded
such that no point in the rendered image is further than either
m object space units or n pixels from its corresponding point
in an unperturbed view. In practice, we generally set m proportional
to the size of the model's geometry being protected,
and use values of n = 15 pixels, as experience has shown that
users can be distracted by larger shifts between consecutively
displayed images.
Perturbed lighting parameters. We pseudorandomly introduce
subtle perturbations into the lighting parameters used
to render the images; these perturbations include modifying
the lighting direction specified in the client request, as well
as addition of randomly changing secondary lighting to illuminate
the model. Users are somewhat sensitive to shifts in
the overall scene intensity and shading, so the primary light
direction perturbations used are generally fairly small (a maximum
of 10 degrees for typical models, which are rendered using the
OpenGL local lighting model).
High-frequency noise added to the images. We introduce
two types of high-frequency noise artifacts into the rendered
images. The first, JPEG artifacts, are a convenient result of
the compression scheme applied to the images returned from
the server. At high compression levels (we use a maximum
libjpeg quality factor of 50), the quantization of DCT coefficients
used in JPEG compression creates "blocking" discontinuities
in the images, and adds noise in areas of sharp contrast.
These artifacts create problems for low-level computer vision
image processing algorithms, while the design of JPEG compression
specifically seeks to minimize the overall perceptual
loss of image quality for human users.
Additionally, we add pseudorandomly generated monochromatic
Gaussian noise to the images, implemented efficiently
by blending noise textures during hardware rendering on the
server. The added noise defends against computer vision attacks
by making background segmentation more difficult, and
by breaking up the highly regular shading patterns of the synthetic
renderings. Interestingly, users are not generally distracted
by the added noise, but have even commented that the
rendered models often appear "more realistic" with the high-frequency
variations caused by the noise. One drawback of
the added noise is that the increased entropy of the images can
result in significantly larger compressed file sizes; we address
this in part by primarily limiting the application of noise to the
non-background regions of the image via stenciled rendering.
Low-frequency image distortions. Just as real computer vision
lens and sensor systems sometimes suffer from image
distortions due to miscalibration, we can effectively simulate
and extend these distortions in the rendering server. Subtle
non-linear radial distortions, pinching, and low-frequency
waves can be efficiently implemented with vertex shaders, or
with two-pass rendering of the image as a texture onto a non-uniform
mesh, accelerated with the "render to texture" capabilities
of modern graphics hardware.
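The request-keyed behaviour described above (identical requests map to identical perturbations, and the perturbation magnitudes stay within fixed bounds) can be sketched as follows; the hash construction, the server-side secret, and the bounds are illustrative, not the production values.

```python
# Deterministic, bounded perturbations derived from the request parameters.
import hashlib
import random

def view_perturbation(request_params, max_rotation_deg=0.5, max_translation=0.002):
    # Seed a PRNG from a hash of the canonicalized request plus a server-side secret,
    # so repeated identical requests cannot be averaged to cancel the perturbation.
    key = "server-secret|" + "|".join(f"{k}={request_params[k]}" for k in sorted(request_params))
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return {
        "rotate_deg": [rng.uniform(-max_rotation_deg, max_rotation_deg) for _ in range(3)],
        "translate":  [rng.uniform(-max_translation, max_translation) for _ in range(3)],
    }

p = {"model": "david", "eye": "0,0,5", "light": "0.3,0.5,1.0"}
assert view_perturbation(p) == view_perturbation(p)   # same request -> same perturbation
```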
Due to the variety of random perturbations and distortions that are
applied to the images returned from the rendering server, there is
a risk of distracting the user, as the rendered 3D model exhibits
changes from frame to frame, even when the user makes very minor
adjustments to the view. However, we have found that the
brief switch to the lower resolution model in between display of the
high resolution perturbed images, inherent to our remote rendering
scheme, very effectively masks these changes. This masking of
changes is attributed to the visual perception phenomenon known
as change blindness [Simons and Levin 1997], in which significant
changes occurring in full view are not noticed due to a brief disruption
in visual continuity, such as a "flicker" introduced between
successive images.
Reconstruction Attacks
In this section we consider several classes of attacks, in which sets
of images may be gathered from our remote rendering server to
make 3D reconstructions of the model, and we analyze their efficacy
against the countermeasures we have implemented.
5.1
Enumeration Attacks
The rendering server responds to rendering requests from users
specifying the viewing conditions for the rendered images. This
ability for precise specification can be exploited by attackers, as
they can potentially explore the entire 3D model space, using the returned
images to discover the location of the 3D model to any arbitrary
precision. In practice, these attacks involve enumerating many
small cells in a voxel grid, and testing each such voxel to determine
intersection with the remote high-resolution model's surface; thus
we term them enumeration attacks. Once this enumeration process
is complete, occupied cells of the voxel grid are exported as a point
cloud and then input to a surface reconstruction algorithm.
In the plane sweep enumeration attack, the view frustum is specified
as a rectangular, one-voxel-thick "plane," and is swept over the
model (Figure 4(a)). Each requested image represents one slice of
the model's surface, and each pixel of each image corresponds to a
single voxel. A simple comparison of each image pixel against the
expected background color is performed to determine whether that
Figure 4: Enumeration Attacks: (a) the plane sweep enumeration
attack sweeps a one-voxel thick orthographic view frustum over
the model, (b) the near plane sweep enumeration attack sweeps the
viewpoint over the model, marking voxels where the model surface
is clipped by the near plane.
pixel is a model surface or background pixel. Sweeps from multiple
view angles (such as the six faces of the voxels) are done to catch
backfacing polygons that may not be visible from a particular angle.
These redundant multiple sweeps also allow the attacker to be liberal
about ignoring questionable background pixels that may occur,
such as if low-amplitude background noise or JPEG compression is
being used as a defense on the server.
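Schematically, the voxel classification at the heart of this attack looks like the sketch below; fetch_slice_image() stands in for harvesting one one-voxel-thick orthographic slice from the server and is not shown.

```python
# Plane sweep enumeration sketch: any non-background pixel in a slice marks an occupied voxel.
import numpy as np

def sweep_axis(fetch_slice_image, grid_size, background, tolerance=12):
    occupied = np.zeros((grid_size, grid_size, grid_size), dtype=bool)
    for k in range(grid_size):                       # one harvested image per slice: O(n) requests
        img = fetch_slice_image(k)                   # grid_size x grid_size x 3 array
        diff = np.abs(img.astype(int) - np.array(background)).max(axis=-1)
        occupied[:, :, k] = diff > tolerance         # liberal threshold vs. noise/JPEG defenses
    return occupied

# Sweeps along the remaining axes/directions are OR-ed together, and the occupied voxels
# are exported as a point cloud for surface reconstruction.
```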
Our experiments demonstrate that the remote model can be efficiently
reconstructed against a defenseless server using this attack
(Figure 5(b)). Perturbing viewing parameters can be an effective
defense against this attack; the maximum reconstruction resolution
will be limited by the maximum relative displacement that an individual
model surface point undergoes. Figure 5(c) shows the results
of a reconstruction attempt against a server pseudorandomly
perturbing the viewing direction by up to 0.3 degrees in the
returned images. Since plane sweep enumeration relies on the correspondence
between image pixels and voxels, image warps can also be effective
as a defense. The large number of remote image requests required
for plane sweep enumeration (O(n) requests for an n × n × n
voxel grid) and the unusual request parameters may look suspicious
and trigger the rendering server log analysis monitors. Plane sweep
enumeration attacks can be completely nullified by limiting user
control of the view frustum parameters, which we implement in our
system and use for valuable models.
Another enumeration attack, near plane sweep enumeration, involves
sweeping the viewpoint (and thus the near plane) over the
model, checking when the model surface is clipped by the near
plane and marking voxels when this happens (Figure 4(b)). The
attacker knows that the near plane has clipped the model when a
pixel previously containing the model surface begins to be classified
as the background. In order to determine which voxel each
image pixel corresponds to, the attacker must know two related parameters
: the distance between the viewpoint position and the near
plane, and the field of view.
These parameters can be easily discovered. The near plane distance
can be determined by first obtaining the exact location of one
feature point on the model surface through triangulation of multiple
rendering requests and then moving the viewpoint slowly toward
that point on the model. When the near plane clips the feature
point, the distance between that point and the view position equals
the near plane distance. The horizontal and vertical field of view
angles can be obtained by moving the viewpoint slowly toward the
model surface, stopping when any surface point becomes clipped by
the near plane. The viewpoint is then moved a small amount perpendicular
to its original direction of motion such that the clipped
point moves slightly relative to the view but stays on the new image
(near plane). Since the near plane distance has already been
Figure 5: 3D reconstruction results from enumeration attacks:
(a) original 3D model, (b) plane sweep attack against a defenseless
server (6 passes, 3,168 total rendered images), (c) plane sweep attack
against the 0.3 degree viewing direction perturbation defense (6 passes,
3,168 total rendered images), (d) near plane sweep attack against a
defenseless server (6 passes, 7,952 total rendered images).
obtained, the field of view angle (horizontal or vertical depending
on direction of motion) can be obtained from the relative motion of
the clipped point across the image.
Because the near plane is usually small compared to the dimensions
of the model, many sweeps must be tiled in order to attain full coverage
. Sweeps must also be made in several directions to ensure
that all model faces are seen. Because this attack relies on seeing
the background to determine when the near plane has clipped a surface
, concave model geometries will present a problem for surface
detection. Although sweeps from multiple directions will help, this
problem is not completely avoidable. Figure 5(d) illustrates this
problem, showing a case in which six sweeps have not fully captured
all the surface geometry.
Viewing parameter perturbations and image warps will nearly destroy
the effectiveness of near plane sweep enumeration attacks, as
they can make it very difficult to determine where the surface lies
and where it does not near silhouette edges (pixels near these edges
will change erratically between surface and background). The most
solid defense against this attack is to prevent views within a certain
small offset of the model surface. This defense, which we use
in our system to protect valuable models, prevents the near plane
from ever clipping the model surface and thereby completely nullifies
this attack.
5.2
Shape-from-silhouette Attacks
Shape-from-silhouette [Slabaugh et al. 2001] is one well studied,
robust technique for extracting a 3D model from a set of images.
The method consists of segmenting the object pixels from the background
in each image, then intersecting in space their resulting extended
truncated silhouettes, and finally computing the surface of
the resulting shape. The main limitation of this technique is that
only a visual hull [Laurentini 1994] of the 3D shape can be recovered
; the line-concave parts of the model are beyond the capabilities
of the reconstruction. Thus, the effectiveness of this attack depends
on the specific geometric characteristics of the object; the high-resolution
3D models that we target often have many concavities
that are difficult or impossible to fully recover using shape-from-silhouette
. However, this attack may also be of use to attackers
Figure 6: The 160 viewpoints used to reconstruct the model with a
shape-from-silhouette attack; results are shown in Figure 7.
to obtain a coarse, low-resolution version of the model, if they are
unable to break through the obfuscation protections we use for the
low-resolution models distributed with the client.
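A minimal voxel-carving sketch of the intersection step is given below; project() and the binary silhouette masks stand in for the calibration and segmentation stages, which are not shown.

```python
# Shape-from-silhouette sketch: a voxel survives only if every view projects it inside the silhouette.
import numpy as np

def carve(voxel_centers, views):
    # voxel_centers: N x 3 array; views: list of (project, silhouette_mask) pairs, where
    # project maps N x 3 points to N x 2 integer pixel coordinates in that view.
    inside = np.ones(len(voxel_centers), dtype=bool)
    for project, mask in views:
        px = project(voxel_centers)
        h, w = mask.shape
        valid = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[valid] = mask[px[valid, 1], px[valid, 0]]   # True where the silhouette covers the pixel
        inside &= hit                                    # intersect the extended silhouette cones
    return voxel_centers[inside]                         # approximates the visual hull
```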
To measure the potential of a shape-from-silhouette attack against
our protected graphics system, we have conducted reconstruction
experiments on a 3D model of the David as served via the rendering
server, using a shape-from-silhouette implementation described
in [Tarini et al. 2002]. With all server defenses disabled, 160 images
were harvested from a variety of viewpoints around the model
(Figure 6); these viewpoints were selected incrementally, with later
viewpoints chosen to refine the reconstruction accuracy as measured
during the process. The resulting 3D reconstruction is shown
in Figure 7(b).
Several of the perturbation and distortion defenses implemented in
our server are effective against the shape-from-silhouette attack.
Results from experiments showing the reconstructed model quality
with server defenses independently enabled are shown in Figures
7(c-g). Small perturbations in the viewing parameters were
particularly effective at decreasing the quality of the reconstructed
model, as would be expected; Niem [1997] performed an error analysis
of silhouette-based modeling techniques and showed the linear
relationship between error in the estimation of the view position
and error in the resulting reconstruction. Perturbations in the images
returned from the server, such as radial distortion and small
random shifts, were also effective. Combining the different perturbation
defenses, as they are implemented in our remote rendering
system, makes for further deterioration of the reconstructed model
quality (Figure 7(h)).
High frequency noise and JPEG defenses in the server images can
increase the difficulty of segmenting the object from the background.
However, shape-from-silhouette software implementations
with specially tuned image processing operations can take the
noise characteristics into account to help classify pixels accurately.
The intersection stage of shape-from-silhouette reconstruction algorithms
makes them innately robust with respect to background
pixels misclassified as foreground.
5.3
Stereo Correspondence-based Attacks
Stereo reconstruction is another well known 3D computer vision
technique. Stereo pairs of similarly neighborhooded pixels are detected
, and the position of the corresponding point on the 3D surface
is found via the intersection of epipolar lines. Of particular
relevance to our remote rendering system, Debevec et al. [1996]
showed that the reconstruction task can be made easier and more
accurate if an approximate low resolution model is available, by
warping the images over it before performing the stereo matching.
Figure 7: Performance of shape-from-silhouette reconstructions
against various server defenses. Error values (E) measure the mean
surface distance (mm) from the 5 m tall original model. Top row:
(a) original model (E = 0), (b) reconstruction from defenseless
server (E = 4.5), reconstructions with (c) 0.5 degree (E = 13.5) and
(d) 2.0 degree (E = 45.5) perturbations of the view direction.
Bottom row: (e) reconstruction with a random image offset of
4 pixels (E = 11.6), with (f) 1.2% (E = 9.3) and (g) 2.5% (E = 16.2)
radial image distortion, and (h) reconstruction against combined
defenses (1.0 degree view perturbation, 2 pixel random offset, and
1.2% radial image distortion; E = 26.6).
Ultimately, however, stereo correspondence techniques usually rely
on matching detailed, high-frequency features in order to yield
high-resolution reconstruction results. The smoothly shaded 3D
computer models generated by laser scanning that we share via our
remote rendering system thus present significant problems to basic
two-frame stereo matching algorithms. When we add in the server
defenses such as image-space high frequency noise, and slight perturbations
in the viewing and lighting parameters, the stereo matching
task becomes even more ill-posed. Other stereo research such as
[Scharstein and Szeliski 2002] also reports great difficulty in stereo
reconstruction of noise-contaminated, low-texture synthetic scenes.
Were we to distribute 3D models with high resolution textures applied
to their surfaces, stereo correspondence methods might be a more effective attack.
5.4
Shape-from-shading Attacks
Shape-from-shading attacks represent another family of computer
vision techniques for reconstructing the shape of a 3D object (see
[Zhang et al. 1999] for a survey). The primary attack on our remote
rendering system that we consider in this class involves first
Figure 8: Performance of shape-from-shading reconstruction attacks. Error values (E) measure the mean surface distance (mm) from the original model. Top row: (a) original model (E = 0), (b) low-resolution base mesh (E = 1.9), (c) reconstruction from defenseless server (E = 1.0). Bottom row: reconstruction results against (d) high-frequency image noise (E = 1.1), (e) complicated lighting model with 3 lights (E = 1.7), and (f) viewing angle perturbation of up to 1.0° (E = 2.0) defenses.
obtaining several images from the same viewpoint under varying,
known lighting conditions. Then, using photometric stereo methods
, a normal is computed for each pixel by solving a system of
rendering equations. The resulting normal map can be registered
and applied to an available approximate 3D geometry, such as the
low-resolution model used by the client, or one obtained from another
reconstruction technique such as shape-from-silhouette.
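As an illustration of the photometric stereo step, the following sketch recovers per-pixel normals and albedo by a linear least-squares fit of a Lambertian model to images taken from one viewpoint under known directional lights. It is a simplified, numpy-only sketch of the principle, not the exact solver used in our experiments.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover a normal map from images of one viewpoint under known lights,
    assuming a Lambertian model I = L @ (albedo * normal)."""
    h, w = images[0].shape
    I = np.stack([im.reshape(-1) for im in images])    # (num_lights, h*w) intensities
    L = np.asarray(light_dirs, dtype=float)            # (num_lights, 3) light directions
    G, *_ = np.linalg.lstsq(L, I, rcond=None)          # G = albedo * normal, shape (3, h*w)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)             # unit normals per pixel
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```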
This coarse normal-mapped model itself may be of value to some
attackers: when rendered it will show convincing 3D high frequency
details that can be shaded under new lighting conditions,
though with artifacts at silhouettes. However, the primary purpose
of our system is to protect the high-resolution 3D geometry, which
if stolen could be used maliciously for shape analysis or to create
replicas. Thus, a greater risk is posed if the normal map is integrated
by the attacker to compute a displacement map, and the results are
used to displace a refined version of the low-resolution model mesh.
Following this procedure with images harvested from a defenseless
remote rendering server and using a low-resolution client model,
we were able to successfully reconstruct a high-resolution 3D
model. The results shown in Figure 8(c) depict a reconstruction
of the David's head produced from 200 1600x1114 pixel images
taken from 10 viewpoints, with 20 lighting positions used at each
viewpoint, assuming a known, single-illuminant OpenGL lighting
model and using a 10,000 polygon low-resolution model (Fig. 8(b))
of the whole statue.
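The integration of the normal map into a displacement/height map can be sketched, for example, with a Fourier-domain least-squares integration of the gradient field (a Frankot-Chellappa style reconstruction). This is only an illustrative sketch under idealized assumptions (orthographic view, periodic boundaries); it is not necessarily the procedure an attacker would follow.

```python
import numpy as np

def integrate_normals(normals):
    """Least-squares integration of the gradient field implied by a normal
    map (H x W x 3, z toward the camera) into a height map."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.where(np.abs(nz) < 1e-6, 1e-6, nz)
    p, q = -nx / nz, -ny / nz                          # dz/dx and dz/dy from the normals
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi
    wy = np.fft.fftfreq(h) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)
    denom = WX ** 2 + WY ** 2
    denom[0, 0] = 1.0                                  # avoid division by zero at DC
    Z = -1j * (WX * np.fft.fft2(p) + WY * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                                      # height is only defined up to an offset
    return np.real(np.fft.ifft2(Z))
```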
Some of the rendering server defenses, such as adding high-frequency
noise to the images, can be compensated for by attackers
by simply adding enough input images to increase the robustness
of the photometric stereo solution step (although harvesting
too many images will eventually trigger the rendering server log
analysis monitors). Figure 8(d) shows the high quality reconstruction
result possible when only random Gaussian noise is used as
a defense. More effective defenses against shape-from-shading attacks
include viewing and lighting perturbations and low-frequency
image distortions, which can make it difficult to precisely register
images onto the low-resolution model, and can disrupt the photometric
stereo solution step without a large number of aligned input
images. Figure 8(e) shows a diminished quality reconstruction
when the rendering server complicates the lighting model by using
3 perturbed light sources with a Phong component unknown to
the attacker, and Figure 8(f) shows the significant loss of geometric
detail in the reconstruction when the server randomly perturbs the
viewing direction by up to 1.0° (note that the reconstruction error exceeds that of the starting base mesh).
The quality of the base mesh is an important determinant in the success
of this particular attack. For example, repeating the experiment
of Figure 8 with a more accurate base mesh of 30,000 polygons
yields results of E = 0.8, E = 0.6, and E = 0.7 for the conditions
of Figures 8(b), 8(c), and 8(e), respectively. This reliance on an
accurate low-resolution base mesh for the 3D model reconstruction
is a potential weak point of the attack; attackers may be deterred
by the effort required to reverse-engineer the protections guarding
the low-resolution model or to reconstruct an acceptable base mesh
from harvested images using another technique.
5.5
Discussion
Because we know of no single mechanism for guaranteeing the security
of 3D content delivered through a rendering server, we have
instead taken a systems-based approach, implementing multiple defenses
and using them in combination. Moreover, we know of no
formalism for rigorously analyzing the security provided by our defenses
; the reconstruction attacks that we have empirically considered
here are merely representative of the possible threats.
Of the reconstruction attacks we have experimented with so far, the
shape-from-shading approach has yielded the best results against
our defended rendering server.
Enumeration attacks are easily
foiled when the user's control over the viewpoint and view frustum
is constrained, pure shape-from-silhouette methods are limited
to reconstructing a visual hull, and two-frame stereo algorithms rely
on determining accurate correspondences which is difficult with the
synthetic, untextured models we are attempting to protect. Attackers
could improve the results of the shape-from-shading algorithm
against our perturbation defenses by explicitly modeling the distortions
and trying to take them into account in the optimization step,
or alternatively by attempting to align the images by interactively
establishing point to point correspondences or using an automatic
technique such as [Lensch et al. 2001].
Such procedures for explicitly modeling the server defenses, or correcting
for them via manual specification of correspondences, are
applicable to any style of reconstruction attempt. To combat these
attacks, we must rely on the combined discouraging effect of multiple
defenses running simultaneously, which increases the number of
degrees of freedom of perturbation to a level that would be difficult
and time-consuming to overcome. Some of our rendering server
defenses, such as the lighting model and non-linear image distortions
, can be increased arbitrarily in their complexity. Likewise, the
magnitude of server defense perturbations can be increased with a
corresponding decrease in the fidelity of the rendered images.
Ultimately, no fixed set of defenses is bulletproof against a sophisticated
, malicious attacker with enough resources at their disposal
, and one is inevitably led to an "arms race" between attacks
and countermeasures such as we have implemented. As the expense
required to overcome our remote rendering server defenses
becomes greater, determined attackers may instead turn to reaching
their piracy goals via non-reconstruction-based methods beyond the
scope of this paper, such as computer network intrusion or exploitation
of non-technical human factors.
Results and Future Work
A prototype of our remote rendering system (ScanView, available
at http://graphics.stanford.edu/software/scanview/ ) has been
deployed to share 3D models from a major cultural heritage archive,
the Digital Michelangelo Project [Levoy et al. 2000], as well as
other collections of archaeological artifacts that require protected
usage. In the several months since becoming publicly available,
more than 4,000 users have installed the client program on their personal
computers and accessed the remote servers to view the protected
3D models. The users have included art students, art scholars
, art enthusiasts, and sculptors examining high-resolution artworks
, as well as archaeologists examining particular artifacts. Few
of these individuals would have qualified under the strict guidelines
required to obtain completely unrestricted access to the models, so
the protected remote rendering system has given large, entirely new groups of users access to 3D graphical models for professional
study and enjoyment.
Reports from users of the system have been uniformly positive
and enthusiastic. Fetching high-resolution renderings over intercontinental
broadband Internet connections incurs less than 2 seconds of latency, while fast continental connections generally experience
latencies dominated by the rendering server's processing time
(around 150 ms). The rendering server architecture can scale up to
support an arbitrary number of requests per second by adding additional
CPU and GPU nodes, and rendering servers can be installed
at distributed locations around the world to reduce intercontinental
latencies if desired.
Our log analysis defenses have detected multiple episodes of system
users attempting to harvest large sets of images from the server
for purposes of later 3D reconstruction attempts, though these incidents
were determined to be non-malicious. In general, the monitoring
capabilities of a remote rendering server are useful for reasons
beyond just security, as the server logs provide complete accounts
of all usage of the 3D models in the archive, which can be
valuable information for archive managers to gauge popularity of
individual models and understand user interaction patterns.
Our plans for future work include further investigation of computer
vision techniques that address 3D reconstruction of synthetic data
under antagonistic conditions, and analysis of their efficacy against
the various rendering server defenses. More sophisticated extensions
to the basic vision approaches described above, such as multi-view
stereo algorithms, and robust hybrid vision algorithms which
combine the strengths of different reconstruction techniques, can
present difficult challenges to protecting the models. Another direction
of research is to consider how to allow users a greater degree
of geometric analysis of the protected 3D models without further
exposing the data to theft; scholarly and professional users have
expressed interest in measuring distances and plotting profiles of
3D objects for analytical purposes beyond the simple 3D viewing
supported in the current system. Finally, we are continuing to investigate
alternative approaches to protecting 3D graphics, designing
specialized systems which make data security a priority while
potentially sacrificing some general purpose computing platform
capabilities. The GPU decryption scheme described herein, for example
, is one such idea that may be appropriate for console devices
and other custom graphics systems.
Acknowledgements
We thank Kurt Akeley, Sean Anderson,
Jonathan Berger, Dan Boneh, Ian Buck, James Davis, Pat Hanrahan
, Hughes Hoppe, David Kirk, Matthew Papakipos, Nick
Triantos, and the anonymous reviewers for their useful feedback,
and Szymon Rusinkiewicz for sharing code. This work has been
supported in part by NSF contract IIS0113427, the Max Planck
Center for Visual Computing and Communication, and the EU IST-2001
-32641 ViHAP3D Project.
References
Collberg, C., and Thomborson, C. 2000. Watermarking, tamper-proofing, and obfuscation: Tools for software protection. Tech. Rep. 170, Dept. of Computer Science, The University of Auckland.
Debevec, P., Taylor, C., and Malik, J. 1996. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In Proc. of ACM SIGGRAPH 96, 11-20.
Engel, K., Hastreiter, P., Tomandl, B., Eberhardt, K., and Ertl, T. 2000. Combining local and remote visualization techniques for interactive volume rendering in medical applications. In Proc. of IEEE Visualization 2000, 449-452.
Gortler, S., Grzeszczuk, R., Szeliski, R., and Cohen, M. F. 1996. The lumigraph. In Proc. of ACM SIGGRAPH 96, 43-54.
Laurentini, A. 1994. The visual hull concept for silhouette-based image understanding. IEEE Trans. on Pattern Analysis and Machine Intelligence 16, 2, 150-162.
Lensch, H. P., Heidrich, W., and Seidel, H.-P. 2001. A silhouette-based algorithm for texture registration and stitching. Graphical Models 63, 245-262.
Levoy, M., and Hanrahan, P. 1996. Light field rendering. In Proc. of ACM SIGGRAPH 96, 31-42.
Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., Ginzton, M., Anderson, S., Davis, J., Ginsberg, J., Shade, J., and Fulk, D. 2000. The digital michelangelo project. In Proc. of ACM SIGGRAPH 2000, 131-144.
Levoy, M. 1995. Polygon-assisted jpeg and mpeg compression of synthetic images. In Proc. of ACM SIGGRAPH 95, 21-28.
Niem, W. 1997. Error analysis for silhouette-based 3d shape estimation from multiple views. In International Workshop on Synthetic-Natural Hybrid Coding and 3D Imaging.
Ohbuchi, R., Mukaiyama, A., and Takahashi, S. 2002. A frequency-domain approach to watermarking 3d shapes. Computer Graphics Forum 21, 3.
Praun, E., Hoppe, H., and Finkelstein, A. 1999. Robust mesh watermarking. In Proc. of ACM SIGGRAPH 99, 49-56.
Ressler, S., 2001. Web3d security discussion. Online article: http://web3d.about.com/library/weekly/aa013101a.htm.
Rusinkiewicz, S., and Levoy, M. 2000. QSplat: A multiresolution point rendering system for large meshes. In Proc. of ACM SIGGRAPH 2000, 343-352.
Scharstein, D., and Szeliski, R. 2002. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision 47, 1-3, 7-42.
Schneier, B. 2000. The fallacy of trusted client software. Information Security (August).
Simons, D., and Levin, D. 1997. Change blindness. Trends in Cognitive Sciences 1, 7, 261-267.
Slabaugh, G., Culbertson, B., Malzbender, T., and Schafer, R. 2001. A survey of methods for volumetric scene reconstruction from photographs. In Proc. of the Joint IEEE TCVG and Eurographics Workshop (VolumeGraphics-01), Springer-Verlag, 81-100.
Stanford Digital Forma Urbis Project, 2004. http://formaurbis.stanford.edu.
Tarini, M., Callieri, M., Montani, C., Rocchini, C., Olsson, K., and Persson, T. 2002. Marching intersections: An efficient approach to shape-from-silhouette. In Proceedings of the Conference on Vision, Modeling, and Visualization (VMV 2002), 255-262.
Yoon, I., and Neumann, U. 2000. Web-based remote rendering with IBRAC. Computer Graphics Forum 19, 3.
Zhang, R., Tsai, P.-S., Cryer, J. E., and Shah, M. 1999. Shape from shading: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 21, 8, 690-706.
| digital rights management;remote rendering;security;3D models
154 | Providing the Basis for Human-Robot-Interaction: A Multi-Modal Attention System for a Mobile Robot | In order to enable the widespread use of robots in home and office environments, systems with natural interaction capabilities have to be developed. A prerequisite for natural interaction is the robot's ability to automatically recognize when and how long a person's attention is directed towards it for communication. As in open environments several persons can be present simultaneously, the detection of the communication partner is of particular importance. In this paper we present an attention system for a mobile robot which enables the robot to shift its attention to the person of interest and to maintain attention during interaction. Our approach is based on a method for multi-modal person tracking which uses a pan-tilt camera for face recognition, two microphones for sound source localization, and a laser range finder for leg detection. Shifting of attention is realized by turning the camera into the direction of the person which is currently speaking. From the orientation of the head it is decided whether the speaker addresses the robot. The performance of the proposed approach is demonstrated with an evaluation. In addition, qualitative results from the performance of the robot at the exhibition part of the ICVS'03 are provided. | INTRODUCTION
A prerequisite for the successful application of mobile service
robots in home and office environments is the development of systems
with natural human-robot-interfaces.
Figure 1: Even in crowded situations (here at the ICVS'03) the mobile robot BIRON is able to robustly track persons and shift its attention to the speaker.
Much research focuses on the communication process itself, e.g. speaker-independent speech recognition or robust dialog systems. In typical tests of such
human-machine interfaces, the presence and position of the communication
partner is known beforehand as the user either wears a
close-talking microphone or stands at a designated position. On a
mobile robot that operates in an environment where several people
are moving around, it is not always obvious for the robot which
of the surrounding persons wants to interact with it. Therefore,
it is necessary to develop techniques that allow a mobile robot to
automatically recognize when and how long a user's attention is
directed towards it for communication.
For this purpose some fundamental abilities of the robot are required
. First of all, it must be able to detect persons in its vicinity
and to track their movements over time in order to differentiate
between persons. In previous work, we have demonstrated how
tracking of persons can be accomplished using a laser range finder
and a pan-tilt color camera [6].
As speech is the most important means of communication for
humans, we extended this framework to incorporate sound source
information for multi-modal person tracking and attention control.
This enables a mobile robot to detect and localize sound sources in
the robot's surroundings and, therefore, to observe humans and to
shift its attention to a person that is likely to communicate with the
robot. The proposed attention system is part of a larger research
effort aimed at building BIRON the Bielefeld Robot Companion.
BIRON has already performed attention control successfully
during several demonstrations. Figure 1 depicts a typical situation
during the exhibition of our mobile robot at the International Conference
on Computer Vision Systems (ICVS) 2003 in Graz.
The paper is organized as follows: At first we discuss approaches
that are related to the detection of communication partners in section
2. Then, in section 3 the robot hardware is presented. Next,
multi-modal person tracking is outlined in section 4, followed by
the explanation of the corresponding perceptual systems in section
5. This is the basis of our approach for the detection of communication
partners explained in section 6. In section 7 an extensive
evaluation of the system is presented. The paper concludes with a
short summary in section 8.
RELATED WORK
As long as artificial systems interact with humans in static setups
the detection of communication partners can be achieved rather easily
. For the interaction with an information kiosk the potential user
has to enter a definite space in front of it (cf. e.g. [14]). In intelligent
rooms usually the configuration of the sensors allows to monitor all
persons involved in a meeting simultaneously (cf. e.g. [18]).
In contrast to these scenarios a mobile robot does not act in a
closed or even controlled environment. A prototypical application
of such a system is its use as a tour guide in scientific laboratories
or museums (cf. e.g. [3]). All humans approaching or passing the
robot have to be considered to be potential communication partners
. In order to circumvent the problem of detecting humans in
an unstructured and potentially changing environment, in the approach
presented in [3] a button on the robot itself has to be pushed
to start the interaction.
Two examples for robots with advanced human-robot interfaces
are SIG [13] and ROBITA [12] which currently demonstrate their
capabilities in research labs. Both use a combination of visual face
recognition and sound source localization for the detection of a person
of interest. SIG's focus of attention is directed towards the
person currently speaking that is either approaching the robot or
standing close to it. In addition to the detection of talking people,
ROBITA is also able to determine the addressee of spoken utterances
. Thus, it can distinguish speech directed towards itself from
utterances spoken to another person. Both robots, SIG and ROBITA
, can give feedback which person is currently considered to be
the communication partner. SIG always turns its complete body towards
the person of interest. ROBITA can use several combinations
of body orientation, head orientation, and eye gaze.
The multi-modal attention system for a mobile robot presented
in this paper is based on face recognition, sound source localization
and leg detection. In the following related work on these topics will
be reviewed.
For human-robot interfaces tracking of the user's face is indispensable
. It provides information about the user's identity, state,
and intent. A first step for any face processing system is to detect
the locations of faces in the robot's camera image. However,
face detection is a challenging task due to variations in scale and
position within the image. In addition, it must be robust to different
lighting conditions and facial expressions. A wide variety of
techniques has been proposed, for example neural networks [15],
deformable templates [23], skin color detection [21], or principal
component analysis (PCA), the so-called Eigenface method [19].
For an overview the interested reader is referred to [22, 9].
In current research on sound or speaker localization mostly microphone
arrays with at least 3 microphones are used. Only a few
approaches employ just one pair of microphones. Fast and robust
techniques for sound (and therefore speaker) localization are
e.g. the Generalized Cross-Correlation Method [11] or the Cross-Powerspectrum
Phase Analysis [8], which both can be applied for
microphone-arrays as well as for only one pair of microphones.
More complex algorithms for speaker localization like spectral separation
and measurement fusion [2] or Linear-Correction Least-Squares
[10] are also very robust and can additionally estimate
the distance and the height of a speaker or separate different audio
sources. Such complex algorithms require more than one pair of
microphones to work adequately and also require substantial processing
power.
In mobile robotics 2D laser range finders are often used, primarily
for robot localization and obstacle avoidance. A laser range
finder can also be applied to detect persons. In the approach presented
in [16] for every object detected in a laser scan features like
diameter, shape, and distance are extracted. Then, fuzzy logic is
used to determine which of the objects are pairs of legs. In [17]
local minima in the range profile are considered to be pairs of legs.
Since other objects (e.g. trash bins) produce patterns similar to persons
, moving objects are distinguished from static objects, too.
ROBOT HARDWARE
Figure 2: The mobile
robot BIRON.
The hardware platform for BIRON is
a Pioneer PeopleBot from ActivMedia
(Fig. 2) with an on-board PC (Pentium
III, 850 MHz) for controlling the motors
and the on-board sensors and for sound
processing. An additional PC (Pentium
III, 500 MHz) inside the robot is used
for image processing.
The two PC's running Linux are
linked with a 100 Mbit Ethernet and the
controller PC is equipped with wireless
Ethernet to enable remote control of the
mobile robot. For the interaction with a
user a 12" touch screen display is provided
on the robot.
A pan-tilt color camera (Sony EVI-D31
) is mounted on top of the robot at a
height of 141 cm for acquiring images of
the upper body part of humans interacting
with the robot. Two AKG far-field
microphones which are usually used for
hands free telephony are located at the
front of the upper platform at a height
of 106 cm, right below the touch screen
display. The distance between the microphones
is 28.1 cm. A SICK laser
range finder is mounted at the front at
a height of approximately 30 cm.
For robot navigation we use the ISR (Intelligent Service Robot)
control software developed at the Center for Autonomous Systems,
KTH, Stockholm [1].
MULTI-MODAL PERSON TRACKING
In order to enable a robot to direct its attention to a specific person
it must be able to distinguish between different persons. Therefore
, it is necessary for the robot to track all persons present as
robustly as possible.
Person tracking with a mobile robot is a highly dynamic task. As
both, the persons tracked and the robot itself might be moving the
sensory perception of the persons is constantly changing. Another
difficulty arises from the fact that a complex object like a person
usually cannot be captured completely by a single sensor system
alone. Therefore, we use the sensors presented in section 3 to obtain
different percepts of a person:
The camera is used to recognize faces. From a face detection
step the distance, direction, and height of the observed
person are extracted, while an identification step provides the
identity of the person if it is known to the system beforehand
(see section 5.1).
Stereo microphones are applied to locate sound sources using
a method based on Cross-Powerspectrum Phase Analysis [8].
From the result of the analysis the direction relative to the
robot can be estimated (see section 5.2).
The laser range finder is used to detect legs. In range readings
pairs of legs of a human result in a characteristic pattern
that can be easily detected [6]. From detected legs the distance
and direction of the person relative to the robot can be
extracted (see section 5.3).
The processes which are responsible for processing the data of
these sensors provide information about the same overall object:
the person. Consequently, this data has to be fused. We combine
the information from the different sensors in a multi-modal framework
which is described in the following section.
4.1
Multi-Modal Anchoring
In order to solve the problem of person tracking we apply multi-modal
anchoring [6]. This approach extends the idea of standard
anchoring as proposed in [4]. The goal of anchoring is defined as
establishing connections between processes that work on the level
of abstract representations of objects in the world (symbolic level)
and processes that are responsible for the physical observation of
these objects (sensory level). These connections, called anchors,
must be dynamic, since the same symbol must be connected to new
percepts every time a new observation of the corresponding object
is acquired.
Therefore, in standard anchoring at every time step t, an anchor contains three elements: a symbol, which is used to denote an object, a percept of the same object, generated by the corresponding perceptual system, and a signature, which is meant to provide an estimate for the values of the observable properties of the object. If the anchor is grounded at time t, it contains the percept perceived at t as well as the updated signature. If the object is not observable at t and therefore the anchor is ungrounded, then no percept is stored in the anchor but the signature still contains the best available estimate.
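A minimal sketch of such an anchor as a data structure (illustrative Python, not the formalism of [4]) makes the three elements and the grounded/ungrounded distinction concrete; the field names are assumptions of this illustration.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Anchor:
    """A (symbol, percept, signature) triple: the symbol denotes the object,
    the percept is the latest matching observation, and the signature holds
    the current estimate of the object's observable properties."""
    symbol: str                                       # e.g. "person-3"
    signature: dict = field(default_factory=dict)     # e.g. position, height, name
    percept: Optional[Any] = None                     # None while the anchor is ungrounded
    last_update: float = 0.0

    def ground(self, percept: Any, properties: dict, t: float) -> None:
        """Store the new percept and update the signature estimate at time t."""
        self.percept = percept
        self.signature.update(properties)
        self.last_update = t

    def estimate(self) -> dict:
        """Even when ungrounded, the signature still provides the best
        available estimate (a motion model could extrapolate it)."""
        return self.signature
```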
Because standard anchoring only considers the special case of
connecting one symbol to the percepts acquired from one sensor,
the extension to multi-modal anchoring was necessary in order to
handle data from several sensors. Multi-modal anchoring allows
to link the symbolic description of a complex object to different
types of percepts, originating from different perceptual systems. It
enables distributed anchoring of individual percepts from multiple
modalities and copes with different spatio-temporal properties of
the individual percepts. Every part of the complex object which
is captured by one sensor is anchored by a single component anchoring
process. The composition of all component anchors is
realized by a composite anchoring process which establishes the
connection between the symbolic description of the complex object
and the percepts from the individual sensors. In the domain
of person tracking the person itself is the composite object while
its components are face, speech, and legs, respectively.
Figure 3: Multi-modal anchoring of persons.
In addition
to standard anchoring, the composite anchoring module requires a
composition model, a motion model, and a fusion model:
The composition model defines the spatial relationships of
the components with respect to the composite object. It is
used in the component anchoring processes to anchor only
those percepts that satisfy the composition model.
The motion model describes the type of motion of the complex
object, and therefore allows to predict its position. Using
the spatial relationships of the composition model, the
position of percepts can be predicted, too. This information
is used by the component anchoring processes in two
ways: 1. If multiple percepts are generated from one perceptual
system the component anchoring process selects the percept
which is closest to the predicted position. 2. If the corresponding
perceptual system receives its data from a steerable
sensor with a limited field of view (e.g. pan-tilt camera), it
turns the sensor into the direction of the predicted position.
The fusion model defines how the perceptual data from the
component anchors has to be combined. It is important to
note, that the processing times of the different perceptual systems
may differ significantly. Therefore, the perceptual data
may not arrive at the composite anchoring process in chronological
order. Consequently, the composite anchor provides a
chronologically sorted list of the fused perceptual data. New
data from the component anchors is inserted in the list, and
all subsequent entries are updated.
The anchoring of a single person is illustrated in Figure 3. It is
based on anchoring the three components legs, face, and speech.
For more details please refer to [6].
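The chronologically sorted list required by the fusion model can be sketched as follows. The fuse callback stands in for the fusion model and is an assumption of this illustration rather than the interface of our implementation; the point is that a late-arriving percept is inserted at its time stamp and all later entries are re-fused.

```python
import bisect

class CompositeAnchor:
    """Fuses time-stamped component percepts (legs, face, speech) that may
    arrive out of chronological order."""

    def __init__(self, fuse):
        self.fuse = fuse        # fuse(previous_state, percept) -> new state estimate
        self.timeline = []      # chronologically sorted (time, percept, fused state)

    def insert(self, t, percept):
        # find the chronological position of the new percept
        i = bisect.bisect([entry[0] for entry in self.timeline], t)
        state = self.timeline[i - 1][2] if i > 0 else {}
        self.timeline.insert(i, (t, percept, self.fuse(state, percept)))
        # update all subsequent entries so they reflect the inserted data
        for j in range(i + 1, len(self.timeline)):
            tj, pj, _ = self.timeline[j]
            self.timeline[j] = (tj, pj, self.fuse(self.timeline[j - 1][2], pj))

    def latest(self):
        return self.timeline[-1][2] if self.timeline else {}
```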
4.2
Tracking Multiple Persons
If more than one person has to be tracked simultaneously, several
anchoring processes have to be run in parallel. In this case, multi-modal
anchoring as described in the previous section may lead to
the following conflicts between the individual composite anchoring
processes:
The anchoring processes try to control the pan-tilt unit of the
camera in a contradictory way.
A percept is selected by more than one anchoring process.
In order to resolve these problems a supervising module is required,
which grants the access to the pan-tilt camera and controls the selection
of percepts.
The first problem is handled in the following way: The supervising
module restricts the access to the pan-tilt unit of the camera
to only one composite anchoring process at a time. How access is
granted to the processes depends on the intended application. For
the task of detecting communication partners which is presented
in this paper, only the anchoring process corresponding to the currently
selected person of interest controls the pan-tilt unit of the
camera (see section 6).
In order to avoid the second problem, the selection of percepts is
implemented as follows. Instead of selecting a specific percept deterministically, every component anchoring process assigns scores
to all percepts rating the proximity to the predicted position. After
all component anchoring processes have assigned scores, the supervising
module computes the optimal non-contradictory assignment
of percepts to component anchors. Percepts that are not assigned
to any of the existing anchoring processes are used to establish new
anchors. Additionally, an anchor that was not updated for a certain
period of time will be removed by the supervising module.
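One way to realize this supervising step is a global optimization over the score matrix, for instance with the Hungarian algorithm as sketched below. The sketch assumes scipy; the score function and the minimum-score threshold are illustrative placeholders rather than our actual proximity models.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_percepts(anchors, percepts, score, min_score=0.1):
    """Compute a globally optimal, non-contradictory assignment of percepts
    to anchors from the scores each anchoring process assigned."""
    if not anchors or not percepts:
        return {}, list(range(len(percepts)))
    S = np.array([[score(a, p) for p in percepts] for a in anchors])
    rows, cols = linear_sum_assignment(-S)             # maximize the total score
    assignment = {r: c for r, c in zip(rows, cols) if S[r, c] >= min_score}
    unassigned = [j for j in range(len(percepts)) if j not in assignment.values()]
    return assignment, unassigned                      # unassigned percepts may found new anchors
```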
PERCEPTUAL SYSTEMS
In order to supply the anchoring framework presented in 4.1 with
sensory information about observed persons, three different perceptual
systems are used. These are outlined in the following subsec-tions
.
5.1
Face Recognition
In our previous work [6], face detection was realized using a
method which combines adaptive skin-color segmentation with
face detection based on Eigenfaces [7]. The segmentation process
reduces the search space, so that only those sub-images which are
located at skin colored regions have to be verified with the Eigenface
method. In order to cope with varying lighting conditions the
model for skin-color is continuously updated with pixels extracted
from detected faces. This circular process requires initialization,
which is realized by performing face detection using Eigenfaces on
the whole image, since initially no suitable model for skin-color is
available. This method has two major drawbacks: It is very sensitive
to false positive detections of faces, since then the skin-model
may adapt to a wrong color. In addition, initialization is computationally very expensive.
In our current system presented in this paper, the detection of
faces (in frontal view) is based on the framework proposed by Viola
and Jones [20]. This method allows to process images very rapidly
with high detection rates for the task of face detection. Therefore,
neither a time consuming initialization nor the restriction of the
search using a model of skin color is necessary.
The detection is based on two types of features (Fig. 4), which are the same as proposed in [24]. A feature is a scalar value which is computed by the weighted sum of all intensities of pixels in rectangular regions. The computation can be realized very efficiently using integral images (see [20]). The features have six degrees of freedom for two-block features and seven degrees of freedom for three-block features. With restrictions to the size of the rectangles and their distances we obtain about 300,000 different features for sub-windows of a fixed size in pixels. Classifiers are constructed by selecting a small number of important features using AdaBoost [5]. A cascade of classifiers of increasing complexity (increasing number of features) forms the over-all face detector (Fig. 5). For face detection an image is scanned, and every sub-image is classified with the first classifier of the cascade.
Figure 4: The two types of features used for face detection.
Each feature takes a value which is the weighted sum of all pixels
in the rectangles.
Figure 5: A cascade of
classifiers of increasing complexity
enables fast face detection.
If classified as non-face, the process continues with the next sub-image. Otherwise the current sub-image is passed to the next classifier of the cascade, and so on.
The first classifier of the cascade is based on only two features,
but rejects approximately 75 % of all sub-images. Therefore, the
detection process is very fast. The cascade used in our system consists
of 16 classifiers based on 1327 features altogether.
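The two essential ingredients, integral images for constant-time rectangle sums and early rejection in the cascade, can be sketched as follows. This is illustrative Python; the stage representation is a strong simplification of the boosted classifiers actually used.

```python
import numpy as np

def integral_image(img):
    """Cumulative sums so that any rectangle sum costs four array lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixel intensities inside the rectangle at (x, y) of size w x h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def cascade_classify(ii, x, y, stages):
    """Evaluate a cascade on one sub-window: reject as soon as any stage
    fails, so most sub-windows are discarded after only a few features."""
    for features, threshold in stages:
        score = sum(weight * rect_sum(ii, x + fx, y + fy, fw, fh)
                    for (fx, fy, fw, fh, weight) in features)
        if score < threshold:
            return False                              # rejected as non-face
    return True                                       # passed all stages: face candidate
```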
In order to update the multi-modal anchoring process the position
of the face is extracted: With the orientation of the pan-tilt
camera, the angle of the face relative to the robot is calculated. The
size of the detected face is used to estimate the distance of the person
: Assuming that sizes of heads of adult humans only vary to a
minor degree, the distance is proportional to the reciprocal of the
size. The height of the face above the ground is also extracted by
using the distance and the camera position.
Since the approach presented so far does not provide face identification
, a post-processing step is required. Therefore, we use a
slightly enhanced version of the Eigenface method [19]. Each individual
is represented in face space by a mixture of several Gaussians
with diagonal covariances. Practical experiments have shown
that the use of four to six Gaussians leads to a satisfying accuracy
in discriminating between a small set of known persons.
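A compact sketch of this identification step, using off-the-shelf PCA and diagonal-covariance Gaussian mixtures from scikit-learn as stand-ins for our own implementation, could look as follows; face crops are assumed to be flattened to equal-length vectors.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def train_identifier(faces_per_person, n_eigenfaces=20, n_gaussians=5):
    """Project face vectors into an Eigenface space and model each known
    person with a small diagonal-covariance Gaussian mixture."""
    all_faces = np.vstack([f for faces in faces_per_person.values() for f in faces])
    pca = PCA(n_components=n_eigenfaces).fit(all_faces)     # the 'face space'
    models = {}
    for name, faces in faces_per_person.items():
        coeffs = pca.transform(np.vstack(faces))
        models[name] = GaussianMixture(n_components=n_gaussians,
                                       covariance_type='diag').fit(coeffs)
    return pca, models

def identify(face, pca, models):
    """Return the known person whose mixture assigns the face the highest likelihood."""
    c = pca.transform(face.reshape(1, -1))
    return max(models, key=lambda name: models[name].score(c))
```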
5.2
Sound Source Localization
In order to detect speaking persons, we realize the localization of sound sources using a pair of microphones. Given a sound source s in 3D space, the distances d1 and d2 between s and the two microphones m1 and m2 generally differ by the amount Δd = d1 - d2 (see Fig. 6). This difference results in a time delay of the received signal between the left and the right channel (microphone). Based on Cross-Powerspectrum Phase Analysis [8] we first calculate a spectral correlation measure (1) from the short-term power spectra of the left and the right channel, respectively (calculated within a 43 ms window from the signal sampled at 48 kHz).
Figure 6: The distances d1 and d2 between the sound source s and the two microphones m1 and m2 differ by the amount Δd. All sound events with identical Δd are located on one half of a two-sheeted hyperboloid (gray).
If only a single sound source is present, the time delay will be given by the argument that maximizes the spectral correlation measure (2). Taking into account also local maxima delivered by equation (1), we are able to detect several sound sources simultaneously.
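For illustration, the delay estimation can be sketched with a phase-transform style correlation as below. The sketch assumes numpy, the 48 kHz sampling rate mentioned above, and a search restricted to delays that are physically possible for the 28.1 cm microphone base (about 0.82 ms); it illustrates the principle behind equations (1) and (2) rather than reproducing our exact implementation.

```python
import numpy as np

def cpsp_delay(left, right, fs=48000, max_delay_s=8.5e-4):
    """Estimate the inter-channel time delay from a whitened cross spectrum;
    the strongest peak gives the delay, secondary peaks hint at further sources."""
    n = 2 * len(left)
    L, R = np.fft.rfft(left, n), np.fft.rfft(right, n)
    cross = L * np.conj(R)
    corr = np.fft.irfft(cross / np.maximum(np.abs(cross), 1e-12), n)
    corr = np.roll(corr, n // 2)                       # move zero lag to the center
    lags = (np.arange(n) - n // 2) / fs
    window = np.abs(lags) <= max_delay_s               # physically possible delays only
    best = np.argmax(np.where(window, corr, -np.inf))
    return lags[best]          # positive lag: left channel delayed, source nearer the right mic
```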
Even in the planar case, where all sound sources are on the same level as the microphones, the position of s can be estimated only if its distance is known or additional assumptions are made. In a simplified geometry the microphone distance b is considered sufficiently small compared to the distance of the source. Therefore, the angles of incidence of the signals observed at the left or right microphone, respectively, will be approximately equal and can be calculated directly from the measured time delay. In the 3D-case the observed time delay not only depends on the direction and distance but also on the relative elevation of the source with respect to the microphones. Therefore, given only the time delay the problem is under-determined. All sound events which result in the same path difference Δd are located on one half of a two-sheeted hyperboloid, given by equation (3), where the position of the sound source is given in Cartesian coordinates. The axis of symmetry of the hyperboloid coincides with the axis on which the microphones are located (y-axis).
Figure 6 shows the intersections of hyperboloids for different Δd with the plane spanned by the two microphones and the sound source. Consequently, the localization of sound sources in 3D using two microphones requires additional information.
As in our scenario sound sources of interest correspond to persons
talking, the additional spatial information necessary can be
obtained from the other perceptual systems of the multi-modal anchoring
framework. Leg detection and face recognition provide
information about the direction, distance, and height of a person
with respect to the local coordinate system of the robot. Even if no
face was detected at all, the height of a person can be estimated as
the standard size of an adult.
In order to decide whether a sound percept can be assigned to
a specific person, the sound source has to be located in 3D. For
this purpose it is assumed that the sound percept originates from
Figure 7: A sample laser scan. The arrow marks a pair of legs.
the person and is therefore located at the same height and same distance
. Then, the corresponding direction of the sound source can be
calculated from equation (3) transformed to cylindric coordinates.
Depending on the difference between this direction and the direction
in which the person is located, the sound percept is assigned to
the person's sound anchor. Similar to other component anchors, the
direction of the speech is also fused with the position of the person.
Note that the necessity of positional attributes of a person for the
localization of speakers implies that speech can not be anchored
until the legs or the face of a person have been anchored.
In conclusion, the use of only one pair of microphones is sufficient
for feasible speaker localization in the multi-modal anchoring
framework.
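For illustration, once distance and height are known from the other anchors, the direction can also be recovered by a simple numerical search over candidate azimuths, matching the measured path difference instead of evaluating equation (3) in closed form. The function below is a hypothetical sketch; it assumes the microphone base b = 28.1 cm and a speed of sound of about 343 m/s.

```python
import numpy as np

def speaker_azimuth(tau, dist, height, b=0.281, c=343.0):
    """Search for the azimuth (degrees, 0 = straight ahead) whose microphone
    path difference best matches the measured delay tau, given the person's
    distance and height relative to the microphone axis."""
    m1 = np.array([0.0, +b / 2, 0.0])                  # microphones on the y-axis
    m2 = np.array([0.0, -b / 2, 0.0])
    target = c * tau                                   # required path difference d1 - d2
    best_phi, best_err = 0.0, np.inf
    for phi in np.radians(np.linspace(-90.0, 90.0, 3601)):   # candidate directions
        s = np.array([dist * np.cos(phi), dist * np.sin(phi), height])
        diff = np.linalg.norm(s - m1) - np.linalg.norm(s - m2)
        if abs(diff - target) < best_err:
            best_phi, best_err = phi, abs(diff - target)
    return np.degrees(best_phi)
```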
5.3
Leg Detection
The laser range finder provides distance measures at leg-height within its field of view. The angular resolution results in 361 reading points for a single scan (see Fig. 7 for an example
). Usually, human legs result in a characteristic pattern which
can be easily detected. This is done as follows: At first, neighboring
reading points with similar distance values are grouped into
segments. Then, these segments are classified as legs or non-legs
based on a set of features (see [6]). Finally, legs with a distance that
is below a threshold are grouped into pairs of legs.
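A simplified version of this detection pipeline is sketched below; the range-jump, segment-width, and leg-pairing thresholds are illustrative placeholders and do not reproduce the feature set of [6].

```python
import numpy as np

def detect_leg_pairs(ranges, angles, jump=0.1, min_w=0.05, max_w=0.25, max_gap=0.5):
    """Group neighbouring readings with similar range into segments, keep
    leg-sized segments, and pair nearby legs into person hypotheses."""
    pts = np.column_stack([ranges * np.cos(angles), ranges * np.sin(angles)])
    segments, start = [], 0
    for i in range(1, len(ranges) + 1):
        if i == len(ranges) or abs(ranges[i] - ranges[i - 1]) > jump:
            segments.append(pts[start:i])              # split at large range jumps
            start = i
    legs = [seg.mean(axis=0) for seg in segments
            if min_w <= np.linalg.norm(seg[-1] - seg[0]) <= max_w]
    return [(legs[i], legs[j])                          # each close pair of legs = one person
            for i in range(len(legs)) for j in range(i + 1, len(legs))
            if np.linalg.norm(legs[i] - legs[j]) <= max_gap]
```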
FOCUSING THE ATTENTION
For the detection of a person of interest from our mobile robot
we apply multi-modal person tracking, as described in section 4.
Every person in the vicinity of the robot is anchored and, therefore,
tracked by an individual anchoring process, as soon as the legs or
the face can be recognized by the system.
If the robot detects that a person is talking, this individual becomes
the person of interest and the robot directs its attention towards
it. This is achieved by turning the camera into the direction
of the person. The anchoring process corresponding to the person
of interest maintains access to the pan-tilt camera and keeps the
person in the center of the camera's field of view. If necessary, the
entire robot basis is turned in the direction of the person of interest.
If this person moves too far away from the robot, the robot will start to follow the person. This behavior ensures that the sensors of the robot do not lose track of this person. Moreover, the person can guide the robot to a specific place.
As long as the speech of the person of interest is anchored, other people talking are ignored. This allows the person of interest to take a breath or make short breaks while speaking without losing the robot's attention. When the person of interest stops talking for more than two seconds, the person of interest loses its speech anchor.
Now, another person can become the person of interest. If no other
person is speaking in the vicinity of the robot, the person which
is in the focus of attention of the robot remains person of interest.
Figure 8: Sample behavior with two persons P1 and P2 standing near the robot R: In (1) P1 is considered as communication partner, thus the robot directs its attention towards P1. Then P1 stops speaking but remains person of interest (2). In (3) P2 begins to speak. Therefore the robot's attention shifts to P2 by turning its camera (4). Since P2 is facing the robot, P2 is considered as new communication partner.
Only a person that is speaking can take over the role of the person
of interest. Notice that a person which is moving fast in front of the robot is considered a passer-by, and hence is definitely not a person of interest even if this person is speaking.
In addition to the attention system described so far, which enables
the robot to detect the person of interest and to maintain its
attention during interaction, the robot decides whether the person of
interest is addressing the robot and, therefore, is considered as communication
partner. This decision is based on the orientation of the
person's head, as it is assumed that humans face their addressees for
most of the time while they are talking to them. Whether a tracked
person faces the robot or not is derived from the face recognition
system. If the face of the person of interest is detected for more than
20 % of the time the person is speaking, this person is considered
to be the communication partner.
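The selection and addressee logic described above can be summarized in a small decision routine. The sketch below is a simplification with assumed bookkeeping fields (last_speech, speaking_time, face_time, speed) and an illustrative speed threshold for passers-by; it is not the actual control code running on BIRON.

```python
import time

SPEECH_TIMEOUT = 2.0   # seconds without sound percepts before the speech anchor is dropped
FACE_RATIO = 0.20      # fraction of speaking time with a detected (frontal) face

def update_attention(persons, current_poi, now=None):
    """Select the person of interest and decide whether they address the robot."""
    now = time.time() if now is None else now
    speaking = [p for p in persons
                if now - p.last_speech < SPEECH_TIMEOUT and p.speed < 1.0]  # ignore passers-by
    if current_poi in speaking:
        poi = current_poi                      # the current speaker keeps the robot's attention
    elif speaking:
        poi = speaking[0]                      # another speaking person takes over
    else:
        poi = current_poi                      # nobody speaks: keep the last person of interest
    addressed = (poi is not None and poi.speaking_time > 0
                 and poi.face_time / poi.speaking_time > FACE_RATIO)
    return poi, addressed
```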
A sample behavior of the robot is depicted in Figure 8.
SYSTEM PERFORMANCE
In order to analyze the performance of the proposed approach,
we present results from three different types of evaluation. At
first, we study the accuracy of sound source localization independently
. The second part deals with a quantitative evaluation of our
approach for a multi-modal attention system. Finally, qualitative
results from a performance of the robot at the exhibition part of the
ICVS'03 are presented.
7.1
Evaluation of Sound Source Localization
The objective of this evaluation was to analyze the accuracy of
locating speakers with a pair of microphones using the method described
in section 5.2 independently from the multi-modal anchoring
framework. In order to be able to estimate the arrival angle relative
to the microphones, the setup for the experiment was arranged
such that the sound source (mouth of the speaker) was always at the
same height as the microphones. Therefore, the simplified geometric
model mentioned in section 5.2 can be used.
The experiments were carried out with five subjects. Every subject was positioned at six different angles (0°, 10°, 20°, 40°, 60°, and 80°), and at two different distances (100 cm and 200 cm), respectively, resulting in 12 positions altogether. At every position
a subject had to read out one specific sentence which took about
8 seconds. During every utterance the position of the speaker was
calculated every 50 ms.
Based on the angles estimated by our localization algorithm we
calculated the mean angle and the variance for every speaker. It is
important to note that in our setup it is almost impossible to position
the test speaker accurately on the target angle. For this reason,
Distance between speaker and robot:
Angle     100 cm: mean (°) / variance     200 cm: mean (°) / variance
 0°             -0.9 / 0.56                     -0.3 / 0.81
10°              9.1 / 0.34                      9.2 / 0.37
20°             18.9 / 0.21                     19.3 / 0.27
40°             38.2 / 0.50                     38.8 / 0.22
60°             57.7 / 0.40                     57.5 / 0.64
80°             74.0 / 2.62                     73.3 / 2.18
Table 1: Averaged estimated speaker positions and averaged variances for the acoustic speaker localization.
we used the mean estimated angle for every speaker instead of the
target angle to calculate the variance. Following the calculation of
mean angle and variance for every speaker we averaged for every
position the mean angle and the variance across all speakers.
Table 1 shows the results of our experiments. First, the results
suggest that the robot was not correctly aligned, because especially
for small angles (0° to 20°) the averaged angle differs constantly from the target angle by about 1°. Under this justifiable assumption the speaker localization works very well for angles between 0° and 40°. The estimated angle is nearly equivalent to the actual angle and the variance is also very low. Furthermore, the acoustic position estimation works equally well for 100 cm and for 200 cm. For angles greater than 40° the difference between estimated angle
and target angle as well as the variance increases. This means that
the accuracy of the acoustic position estimation decreases with an
increasing target angle. The main reason for this behavior is the
directional characteristic of the microphones.
However, the evaluation has shown that the time delay estimation
works reasonably well. Thus the sound source localization
provides important information for the detection and localization
of the current person of interest.
7.2
Evaluation of the Attention System
The objective of this evaluation was to analyze the performance
of the attention system presented in this paper. On the one hand,
the capability of the robot to successfully shift its attention on the
speaker, and to recognize when it was addressed was investigated.
On the other hand, details about the perceptual sub-systems were
of interest.
The experiment was carried out in an office room (Fig. 9). Four
persons were standing around the robot at designated positions. In reference to the local coordinate system of the robot, in which 0° is defined as the direction ahead of the robot, the four persons P1 to P4 were located at different distances and angles around the robot (see Fig. 9). The subjects were asked to speak for about
10 seconds, one after another. They had to either address the robot
or one of the other persons by turning their heads into the corresponding
direction. There were no restrictions on how to stand.
The order in which the persons were speaking was predetermined
(see Table 2). The experiment was carried out three times with nine
different subjects altogether.
The following results were achieved:
The attention system was always able to determine the correct
person of interest within the time the person was speaking
. However, in some situations either the reference to the
Figure 9: Setup for the evaluation of the attention system.
Table 2: Order in which the persons P1 to P4 were speaking (steps 1 to 12), either to the robot (steps 1-4 and 9-12) or to another person (steps 5-8).
last person of interest was sustained too long or an incorrect
person of interest was selected intermediately. A diagram of
the robot's focus of attention is shown in Figure 10. The erroneous
behavior occurred in 4 of the 36 time slices: In these
cases, the robot shifted its attention to a person which was
currently not speaking (see column 5 in all experiments and
column 4 in the last experiment in Fig. 10). Note that in all
failure cases the person located in front of the robot was selected as person of interest. In addition, there were two shifts which were correct but had a very long delay (eighth time slice of the first and the third experiment). Again, the person in front of the robot was involved. All errors occurred because a sound source was located in the direction of this person, although this person was not speaking. This
can be explained with the noise of the robot itself, which is
interpreted as a sound source in the corresponding direction.
This error could be suppressed using voice activity detection,
which distinguishes speech from noise. This will be part of
our future work.
As the diagram in Fig. 10 shows, every shift of attention had
a delay of approximately 2 seconds. This results from the
anchoring framework: The anchor for the sound source is
removed after a period of 2 seconds with no new assigned
percepts. Now, if another person is talking it becomes the
person of interest.
The decision whether the current person of interest was addressing
the robot or not was made as described in section 6.
It was correct for all persons in all runs. This means that the
robot always determined itself as addressee in steps 1-4 and 9-12, and never in steps 5-8.
These results prove that the presented approach for a multi-modal
attention system on a mobile robot is capable of identifying communication partners successfully.
Figure 10: Diagram for the three runs of the experiment. Every
person is assigned a track (light-gray) which is shaded while the
person was speaking. The solid line shows which person was in
focus of the robot's attention.
In addition the following measurements concerning the anchoring
framework were extracted during the experiments: The attention
system and the face recognition were running on one PC (Pentium
III, 500 MHz), while the sound source localization and the
robot control software were running on the other PC (Pentium III,
850 MHz). Face recognition was performed on the camera images at a rate of 9.6 Hz. Localization of sound sources was
running at a rate of 5.5 Hz. The laser range finder provided new
data at a rate of 4.7 Hz while the processing time for the detection
of legs was negligible.
The anchoring processes of the persons which were currently
speaking to the robot were updated with percepts at a rate of
15.4 Hz. Face percepts were assigned to the corresponding anchor
at 71.4 % of the time. Note, that after a new person of interest
is selected it takes up to approximately 1 second until the camera
is turned and the person is in the field of view. During this time,
no face percept for the person of interest can be generated. Sound
percepts were assigned at 69.5 % of the time, and leg percepts at
99.9 % of the time.
The multi-modal anchoring framework was able to quantify the
body heights of all subjects with an accuracy of at least ±5 cm,
which was sufficient to precisely locate sound sources in 3D (see
section 5.2).
7.3
Performance at an Exhibition
In the beginning of April 2003 our robot was presented at the
exhibition part of the International Conference on Computer Vision
Systems (ICVS) in Graz. There we were able to demonstrate
the robot's capabilities in multi-modal person tracking, and also in
following people. BIRON was continuously running without any
problems.
On the two exhibition days, the robot was running 9:20 hours
and 6:30 hours, respectively, tracking about 2240 persons on the
first day, and about 1400 persons on the second day. The large
amount of persons tracked results from the following condition:
Every person which came in the vicinity of the robot was counted
once. However, if a person left the observed area and came back
later, it was counted again as a new person.
Since the coffee breaks of the conference took place in the exhibition
room, there were extremely busy phases. Even then, the
robot was able to track up to 10 persons simultaneously. Despite
the high noise level, the sound source localization worked reliably,
even though it was necessary to talk slightly louder to attract the
robot's attention.
SUMMARY
In this paper we presented a multi-modal attention system for
a mobile robot. The system is able to observe several persons in
the vicinity of the robot and to decide based on a combination of
acoustic and visual cues whether one of these is willing to engage
in a communication with the robot. This attentional behavior is realized
by combining an approach for multi-modal person tracking
with the localization of sound sources and the detection of head
orientation derived from a face recognition system. Note that due
to the integration of cues from multiple modalities it is possible to
verify the position of a speech source in 3D space using a single
pair of microphones only. Persons that are observed by the robot
and are also talking are considered persons of interest. If a person
of interest is also facing the robot it will become the current communication
partner. Otherwise the robot assumes that the speech
was addressed to another person present.
The performance of our approach and its robustness even in real
world situations were demonstrated by quantitative evaluations in
our lab and a qualitative evaluation during the exhibition of the
mobile robot system at the ICVS'03.
ACKNOWLEDGMENTS
This work has been supported by the German Research Foundation
within the Collaborative Research Center 'Situated Artificial
Communicators' and the Graduate Programs 'Task Oriented Com-munication'
and 'Strategies and Optimization of Behavior'.
| Multi-modal person tracking;Attention;Human-robot-interaction |
155 | Psychologically Targeted Persuasive Advertising and Product Information in E-Commerce | In this paper, we describe a framework for a personalization system to systematically induce desired emotion and attention related states and promote information processing in viewers of online advertising and e-commerce product information. Psychological Customization entails personalization of the way of presenting information (user interface, visual layouts, modalities, structures) per user to create desired transient psychological effects and states, such as emotion, attention, involvement, presence, persuasion and learning. Conceptual foundations and empiric evidence for the approach are presented. | INTRODUCTION
Advertising and the presentation of product information are done both
to inform people about new products and services and to persuade
them to buy them. Persuasion can be thought of as
influencing people's attitudes and behavior. Advertising in a
mass medium, such as television or magazines, can be segmented
to desired audiences. However, advertising on the internet and, for
instance, on mobile phones can be personalized or mass-customized.
Similarly, on the internet the product
information of various items for sale can be personalized for
desired users. These two areas are introduced here together because
they represent interesting opportunities for personalization.
Consequently, personalization may turn out to be an important
driver for future commercial applications and services in a one-to-one
world in which automatic and intelligent systems tailor the
interactions of users, contexts and systems in real time. This
paper describes the foundations of information personalization
systems that may facilitate desired psychological states in
individual users of internet-based advertising and product
information presentation in e-commerce, thereby creating
psychologically targeted messages for the users of such systems. It is
preliminarily hypothesized that such personalization may be one
way to achieve more efficient persuasion.
When perceiving information via media and communications
technologies, users have a feeling of presence. In presence, the
mediated information becomes the focused object of perception,
while the immediate, external context, including the technological
device, fades into the background [8, 36, 37]. Empirical studies
show that information experienced in presence has real
psychological effects on perceivers, such as emotional responses
based on the events described, or cognitive processing and
learning from the events [see 51]. It is likely that perceivers of
advertisements and product information experience presence, which
may lead to various psychological effects. For instance, an
attitude may be held with greater confidence the stronger the
presence experience.
Personalization and customization entail the automatic or semi-automatic
adaptation of information per user in an intelligent way
with information technology [see 33, 68]. One may also vary the
form of information (modality, for instance) per user profile,
which may systematically produce, amplify, or shade different
psychological effects [56, 57, 58, 59, 60, 61, 62, 63].
Media- and communication technologies as special cases of
information technology may be considered as consisting of three
layers [6]. At the bottom is a physical layer that includes the
physical technological device and the connection channel that is
used to transmit communication signals. In the middle is a code
layer that consists of the protocols and software that make the
physical layer run. At the top is a content layer that consists of
multimodal information. The content layer includes both the
substance and the form of multimedia content [7, 56]. Substance
refers to the core message of the information. Form implies
aesthetic and expressive ways of organizing the substance, such
as using different modalities and structures of information [56].
With the possibility of real-time customization and adaptation of
information for different perceivers, it is hypothesized that the
form of information may be varied within some limits while the
substance is kept the same. For instance, the same substance can be
expressed in different modalities, or with different ways of
interaction between the user and the technology. This may produce a
certain psychological effect in some perceivers, or shade or
amplify a certain effect. In Figure 1 the interaction of media and
communications technology and the user in context, with certain
types of tasks, is seen as producing transient psychological effects,
thereby creating various "archetypal technologies" that
systematically facilitate desired user experiences [see 55, 56].
Media and communication technology is divided into the
physical, code and content layers. The user is characterized by
various psychological profiles, such as individual
differences related to cognitive style, personality, cognitive
ability and previous knowledge (mental models related to the task), as well as
other differences, such as pre-existing mood. [49, 56, 58, 59]
Media- and communication technologies may be called Mind-Based
if they simultaneously take into account the interaction of
three different key components: i) the individual differences
and/or user-segment differences in perceptual processing and
sense making, ii) the elements and factors inherent in information
and technology that may produce psychological effects (physical,
code and content layers), and iii) the consequent transient
psychological effects emerging based on perception and
processing of information at the level of each individual. [see 63]
This definition can be extended to include both context and at
least short-term behavioral consequences. Regarding context, a
Mind-Based system may alter its functionalities depending on the
type of task of the user, the physical location, the social situation or other
ad-hoc situational factors that may have a psychological impact.
Behavioral consequences of using a Mind-Based system may be
thought of, especially in the case of persuasion, as facilitating
desired instant behaviors such as impulse buying. Of course, if a
Mind-Based system builds a positive image and schema of a
product over longer periods of time, reflected in product and brand
awareness, this may then influence user behaviors later on.
As the task of capturing and predicting users' psychological states
in real time is highly complex, one possible realization is to have
the user linked to a sufficient number of measurement channels of various i)
psychophysiological signals (EEG, EMG, GSR, cardiovascular
activity, other), ii) eye-based measures (eye blinks, pupil dilation,
eye movements) and iii) behavioral measures (response speed,
response quality, voice pitch analysis etc.). An index based on
these signals would then indicate to the system whether a desired
psychological effect has been realized.
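As a rough illustration only, the following sketch (in Python) shows how such an index might be combined from baseline-normalized signal features; the feature names, weights and threshold are hypothetical assumptions and not part of any implemented system.

# Minimal sketch: combining hypothetical psychophysiological features into a
# single index that indicates whether a desired effect (e.g. positive arousal)
# appears to have been realized. All feature names, weights and thresholds
# are illustrative assumptions only.

FEATURE_WEIGHTS = {
    "gsr_level": 0.4,         # skin conductance (arousal-related)
    "emg_zygomaticus": 0.3,   # cheek muscle activity (positive valence-related)
    "pupil_dilation": 0.2,    # eye-based arousal cue
    "response_speed": 0.1,    # behavioral measure
}

def normalize(value, baseline, scale):
    """Express a raw measurement as a deviation from the user's baseline."""
    if scale == 0:
        return 0.0
    return (value - baseline) / scale

def effect_index(raw_features, baselines, scales):
    """Weighted sum of baseline-normalized features; higher means a stronger effect."""
    index = 0.0
    for name, weight in FEATURE_WEIGHTS.items():
        norm = normalize(raw_features.get(name, 0.0),
                         baselines.get(name, 0.0),
                         scales.get(name, 1.0))
        index += weight * norm
    return index

def effect_realized(raw_features, baselines, scales, threshold=0.5):
    """True if the combined index exceeds a (hypothetical) target threshold."""
    return effect_index(raw_features, baselines, scales) >= threshold

if __name__ == "__main__":
    raw = {"gsr_level": 6.2, "emg_zygomaticus": 1.8,
           "pupil_dilation": 4.1, "response_speed": 0.9}
    base = {"gsr_level": 5.0, "emg_zygomaticus": 1.0,
            "pupil_dilation": 3.5, "response_speed": 1.0}
    scale = {"gsr_level": 2.0, "emg_zygomaticus": 1.0,
             "pupil_dilation": 1.0, "response_speed": 0.5}
    print(effect_realized(raw, base, scale))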
Fig. 1. Mind-Based Technologies as a framework for
producing psychological effects. Adapted from [56].
Another approach would be to conduct a large number of user
studies on certain tasks and contexts with certain user groups,
psychological profiles and content-form variations, and to measure
various psychological effects as objectively as possible. Here,
both subjective methods (questionnaires and interviews) and
objective measures (psychophysiological or eye-based methods)
may be used [for a review on the use of psychophysiological
methods in media research, see 46]. This would constitute a database
of design rules for automatic adaptation of information per user
profile to create similar effects in highly similar situations with
real applications. Naturally, a hybrid approach would combine both
of these methods for capturing and facilitating the user's likely
psychological state.
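In its simplest form, such a design-rule database could be a lookup from a profile attribute and a desired effect to concrete form parameters. The following sketch (in Python) conveys the idea; the profile attributes, rule keys and form parameters are invented for illustration and are not results from the studies described above.

# Minimal sketch of a design-rule database: rules map (profile attribute,
# desired effect) to form parameters derived from prior user studies.
# All keys and values below are hypothetical examples.

DESIGN_RULES = {
    ("sensation_seeker", "positive_emotion"): {"modality": "animation",
                                               "background": "warm",
                                               "pace": "fast"},
    ("sensation_seeker", "attention"):        {"modality": "video",
                                               "background": "high_contrast",
                                               "pace": "fast"},
    ("reflective", "positive_emotion"):       {"modality": "text_with_images",
                                               "background": "calm",
                                               "pace": "slow"},
}

DEFAULT_FORM = {"modality": "text", "background": "neutral", "pace": "medium"}

def select_form(profile, desired_effect):
    """Return form parameters for a user profile and a desired effect."""
    key = (profile.get("cognitive_style", "unknown"), desired_effect)
    return DESIGN_RULES.get(key, DEFAULT_FORM)

if __name__ == "__main__":
    user = {"cognitive_style": "reflective", "mood": "neutral"}
    print(select_form(user, "positive_emotion"))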
Capturing context and short-term user behavior is a challenge.
A computational approach to context utilizes a mass of sensors that
detect various signals in an environment. AI-based software then
computes significant events from the signal flow, either
directly or with the help of simplifying rules and algorithms.
Capturing user behavior in context is easier if the user is using an
internet browser to buy an item, for instance. In this case the action,
or behavior, can be captured by the system as the user clicks the
mouse to buy an item. If the user is wandering around in a
supermarket with a mobile phone that presented a persuasive
message to buy the item on aisle 7, it may be difficult to verify the
purchase other than by cross-referencing the checkout bill with the
adverts displayed inside the store. However, it is beyond the scope of this
paper to fully elaborate on the contextual and behavioral
dimensions of Mind-Based Technologies.
2.2 Description of a Psychological
Customization System
Psychological Customization is one possible way of
implementing of Mind-Based Technologies in system design. It
can be applied to various areas of HCI, such as Augmentation
Systems (augmented and context sensitive financial news),
Notification Systems (alerts that mobilize a suitable amount of
attention per task or context of use), Affective Computing
(emotionally adapted games), Collaborative Filtering (group-focused
information presentation), Persuasive Technology
(advertising for persuasion, e-commerce persuasion), Computer
Mediated Social Interaction Systems (collaborative work, social
content creation templates), Messaging Systems (emotionally
adapted mobile multimedia messaging and email) and
Contextually Sensitive Services (psychologically efficient
adaptation of presentation of information sensitive to physical,
social or situational context, such as available menus to control a
physical space, available information related to a particular
situation, such as social interaction or city navigation with a
mobile device).
It can be hypothesized that the selection and manipulation of the
substance of information takes place through the technologies of
the various application areas of Psychological Customization.
Underlying the application areas is a basic technology layer for
customizing design. This implies that, within some limits, one may
automatically vary the form of information per a certain category
of substance of information. The design space for Psychological
Customization is formed in the interaction of a particular
application area and the possibilities of the technical
implementation of automated design variation. Initially,
Psychological Customization includes modeling of individuals,
groups, and communities to create psychological profiles and
other profiles based on which customization may be conducted. In
addition, a database of design rules is needed to define the desired
cognitive and emotional effects for different types of profiles.
Once these components are in place, content management
technologies can be extended to cover variations of form and
substance of information based on psychological profiles and
design rules to create the desired psychological effects. [see 63]
At the technically more concrete level, a Psychological
Customization System is a new form of middleware between
applications, services, content management systems and
databases. It provides an interface for designing desired
psychological effects and user experiences for individual users or
user groups. The most popular framework for building customized
Web-based applications is Java 2 Enterprise Edition. A J2EE-based
implementation of the Psychological Customization System for
Web-based applications is depicted in Figure 2. The basic J2EE
three-tiered architecture consisting of databases, application
servers, and presentation servers has been extended with three
middleware layers: content management layer, customer
relationship management layer, and psychological customization
layer. The profiles of the users and the communities are available
in the profile repository. [see 69]
The Content Management System is used to define and manage
the content repositories. This typically is based on metadata
descriptions of the content assets. The metadata of the content
repositories is matched against the user and community profiles
by the Customer Relationship Management (CRM) system. The
CRM system includes tools for managing the logic behind
content, application and service customization. Rules can be
simple matching rules or more complex rule sets. A special case
of a rule set is a scenario, i.e. a rule set involving a sequence
of interactions on the Web site. The Customer Relationship
Management layer also includes functionality for user and
community modeling. This layer can also perform automated
customer data analysis, such as user clustering. [see 69]
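The rule matching on this layer can be pictured as a simple filter that compares content metadata against a user profile. The sketch below is written in Python rather than as the J2EE components named above, and the metadata fields and profile fields are illustrative assumptions only.

# Minimal sketch of CRM-style rule matching: content assets carry metadata,
# and simple rules select the assets whose metadata matches the profile.
# Field names ("topic", "interests", "target_segment") are assumptions.

CONTENT_ASSETS = [
    {"id": "ad-101", "topic": "cars",    "target_segment": "young_urban"},
    {"id": "ad-102", "topic": "fashion", "target_segment": "young_urban"},
    {"id": "ad-103", "topic": "cars",    "target_segment": "family"},
]

def matches(asset, profile):
    """One rule: the asset topic must be among the user's interests and the
    asset's target segment must equal the user's segment."""
    return (asset["topic"] in profile.get("interests", [])
            and asset["target_segment"] == profile.get("segment"))

def select_assets(assets, profile):
    """Apply the rule set to all assets and return the matching ones."""
    return [asset for asset in assets if matches(asset, profile)]

if __name__ == "__main__":
    user_profile = {"interests": ["cars", "travel"], "segment": "young_urban"}
    print(select_assets(CONTENT_ASSETS, user_profile))  # only "ad-101" matches

A real rule set could of course be far richer (scenarios spanning several interactions), but the principle of matching content metadata against profile attributes stays the same.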
The Psychological Customization System layer performs the
optimization of the form of the content as selected by the
Customer Relationship Management layer. This functionality can
be considered similar to device adaptation using content
transformation rules (for example XSL-T). In the case of
psychological customization, the transformation rules are
produced based on the design rules for content presentation
variation and the contents of the psychological profile of the user.
After this optimization, the content is passed to the Web
presentation layer.
Figure 2. J2EE implementation of the Psychological
Customization System [69]
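To make the role of the customization layer concrete, the following sketch (in Python) selects a presentation transformation from design rules and the user's psychological profile before handing the content to the presentation layer. In the J2EE setting described above this step would correspond to choosing XSL-T transformation rules; the stylesheet names and profile fields below are hypothetical.

# Minimal sketch of the Psychological Customization layer: given content
# chosen by the CRM layer, pick a presentation transformation based on
# design rules and the user's psychological profile, then pass the result on.
# Stylesheet names and profile fields are illustrative assumptions.

TRANSFORM_RULES = {
    ("low_fluency_preference", "positive_emotion"): "simple_layout.xsl",
    ("high_involvement", "attention"): "rich_media_layout.xsl",
}

DEFAULT_TRANSFORM = "default_layout.xsl"

def choose_transform(profile, desired_effect):
    """Select a (hypothetical) transformation rule for this user and effect."""
    key = (profile.get("processing_style", "unknown"), desired_effect)
    return TRANSFORM_RULES.get(key, DEFAULT_TRANSFORM)

def customize(content, profile, desired_effect):
    """Attach the chosen form transformation to the content before it is
    passed to the Web presentation layer."""
    return {"content": content,
            "transform": choose_transform(profile, desired_effect)}

if __name__ == "__main__":
    page = {"id": "product-42", "body": "Product description ..."}
    user = {"processing_style": "high_involvement"}
    print(customize(page, user, "attention"))

In practice the transformation rules would be drawn from the design-rule database discussed earlier rather than from a hard-coded table.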
Even though a working prototype of Psychological Customization
has not been built yet, several empirical studies support the
feasibility of a user-experience driven system that matches the
form of information to the psychologically relevant properties and
other profile factors of individual users and user groups.
For instance, there are individual differences in cognitive
processes such as attention, working memory capacity, general
intelligence, perceptual-motor skills and language abilities. These
individual differences have a considerable effect on computer-based
performance and may sometimes produce quite large
variance in the intensity or type of psychological effects, such as
depth of learning, positive emotion, persuasion, presence, social
presence and other types of psychological states and effects as
well as consequent behavior [13, 14, 18, 56, 57, 58, 59, 60, 61,
62, 63, 70].
There is considerable evidence in literature and in our own
experimental research that varying the form of information, such
as modality, layouts, background colors, text types, emotionality
of the message, audio characteristics, presence of image motion
and subliminality creates for instance emotional, cognitive and
attentional effects [9, 25, 27, 28, 29, 30, 31, 32, 33, 34, 48]. Some
of these effects are produced in interaction with individual
differences, such as cognitive style, personality, age and gender
[21, 46, 47], or pre-existing mood [49]. The role of hardware
should not be neglected. A device with a large screen, or a
portable device with a smaller screen and user-changeable covers,
may also influence the emerging effects [e.g. 30].
Table 1. Key factors influencing psychological effects.
Adapted from [56].
Layer of technology: Key factors

Physical
- Hardware: large or small vs. human scale; mobile or immobile; close or far from body (intimate-personal-social distance)
- Interaction: degree of user vs. system control and proactivity through the user interface

Code
- Visual-functional aspects: way of presenting controls in an interface visually and functionally

Content
- Substance: the essence of the event described; type of substance (factual/imaginary; genre, other); narrative techniques used by authors
- Form:
  1. Modalities: text, video, audio, graphics, animation, etc.
  2. Visual layout: ways of presenting various shapes, colours, font types, groupings and other relationships or expressive properties of visual representations; ways of integrating modalities into the user interface
  3. Structure: ways of presenting modalities, visual layout and other elements of form and their relationships over time; linear and/or non-linear structure (sequential vs. parallel; narrative techniques, hypertextuality)
This empirical evidence partly validates the feasibility of
Psychological Customization Systems, at least with the mobile
devices and user interface prototypes used in our own research.
Typical experiments we have conducted on the influence of form
of information on psychological effects have included such
manipulations as animation and movement (for orientation
response), fonts of text, layout of text, background colors of text,
user interface navigation element shapes (round vs. sharp), user
interface layout directions, adding background music to reading
text, use of subliminal affective priming in the user interface
(emotionally loaded faces) and use of different modalities, for
instance. Table 1 addresses the key factors that may influence
psychological effects of processing mediated information.
APPLICATION AREAS
The focus of this paper is on persuasion in advertising and
product information presentation in e-commerce. The key
application area to realize this with Psychological Customization
is Persuasive Technology. It refers to human-computer interaction
in which there is an underlying goal to non-coercively change the
attitudes, motivation and/or behavior of the user [15, 16]. For
instance, one may motivate users to quit smoking via motivating
games.
However, it is clear that the amount of resources people allocate to
processing a particular persuasive message has to be taken into
account. Further, there may not be much freedom to
manipulate persuasive messages to produce effects, and the effects
themselves may sometimes be small. Despite this, the empirical
evidence on personalization discussed above suggests that
statistically significant effects, perhaps in the range of a few
percent to even tens of percent, exist in the area of
Psychological Customization, for instance on emotion, presence and
efficiency of information processing. Hence, it can be at least
preliminarily assumed that effects of a similar magnitude may be
achievable for persuasion as well.
Persuasion in human-computer interaction has been researched
from the point of view of seeing computers as tools (increasing the
capabilities of the user), as a medium (providing experiences) and
as a social actor (creating a relationship) [15, 16]. For the
purposes of this article, the technology used in Psychological
Customization for the presentation of e-commerce product
information and online advertising is seen mostly as a medium
and partly as a social actor. How, then, should persuasion be modeled
and explained in more detail? Evidently, no universal theory of
the process of persuasion has been created yet [17].
Candidates for explaining and modeling persuasion include i)
learning theory (operant conditioning), ii) functional paradigm
theory (similarity-attraction, pleasure seeking), iii) cognitive
consistency theory (new information creates tension that needs to
be relieved by adopting schemata), iv) congruity principle theory
(interpretations of new information tend to be congruent with
existing schemata), v) cognitive dissonance theory (certain
actions and information produce tension that needs to be relieved
by adopting mental structures or behavior), counter-attitudinal
advocacy (belief-discrepant messages are persuasive), vi)
inoculation theory (combining supportive and refutational
information to achieve better persuasion) and vii) attribution
theory (people make simple models to predict events of the world
and behaviors of other people) [for a review, see 50]. Some
contemporary models of persuasion are i) social learning theory
(environmental learning is the source of persuasion, such as social
relationships), ii) the elaboration likelihood model (a specific and
limited model on how a piece of information may influence the
attitudes of the receiver) and iii) the communication/persuasion
model (the source, the message content and form, the channel, the
properties of the receiver and the immediacy of the
communication influence persuasion). [2, 39, 42]
The latter approach partly resembles the approach of Mind-Based
Technologies as a way of finding out the values of relevant
parameters in the layers of technology, the user and the transient
results of processing, such as emotion and cognition. Other
frameworks have also been presented. Meyers-Levy and
Malaviya (1998) have presented a framework introducing several
strategies to process persuasive messages. Each strategy
represents a different amount of cognitive resources employed
during processing and may influence the level of persuasion. [38]
The position of the authors is that, while various theories and
models of persuasion have been presented, it is difficult to know
what types of persuasive effects may emerge in the context of
personalized information presentation, especially when both the
substance and the form of the message are varied.
This is partly due to the fact that the perception of the form
of information is most likely not a conscious process involving in-depth
processing and cognitive appraisal, but a rather automatic
and non-cognitive process. Hence, if one influences the
conditions of perceptual processing or some early-level cognitive
processing of multimodal information, no clear models are
available for explaining and predicting persuasion. Also, the exact
influence of the amount of cognitive resources employed during
early and later processing of a persuasive message remains
unknown. It is evident that case studies with particular
applications are needed to verify such effects.
However, the authors present one possible way of seeing
persuasion, mostly via a link to transient emotional states and
moods immediately before, during and after processing
information presented through a Psychological Customization
system. Even with this approach, substantiating the claim of more
efficient persuasion in each application area, such as using Psychological
Customization in advertising or for e-commerce product information,
remains a complex task. Despite this difficulty, we now present a
possible selection of relevant psychological principles related to
perceptual processing and persuasion in advertising and e-commerce
product information.
First, a similarity-attraction process may arise between the
presented information and the personality of the user that may
lead to the information being processed more fully [i.e., trait-congruency
hypothesis; see 54]. That is, users are likely to be
attracted to information with content and formal characteristics
manifesting a personality similar to their own [see e.g., 21].
Second, the decrease of cognitive load in perceptual processing
(i.e., high processing fluency) may induce a feeling of
pleasantness that may label the information processed [for a
review, see 65]. That is, fluent stimuli are associated with
increased liking and positive affective responses as assessed by
facial EMG, for example. Third, the creation of specific
emotional reactions and moods varying in valence and arousal
may label the information processed; here the effects may depend
on the type of emotion. For instance, mood-congruency may
provide more intensive engagement with the information
presented when the mood induced by the information processed
matches a pre-existing mood of the user [see 49]. Fourth, the
emotional reactions may induce increased attention that may lead
to more in-depth processing of information [e.g., 26]. Fifth, as
suggested by excitation transfer theory, arousal induced by a
processed stimulus influences the processing of subsequent stimuli
[see 71, 72].
Sixth, according to selective-exposure theory, individuals are
motivated to make media choices in order to regulate their
affective state [i.e., to maintain excitatory homeostasis; 73]. This
may mean that people also use e-commerce product information
to manage their moods, i.e. to neutralize an unwanted mood, such as
depression, by engaging with exciting and positive product
information. Users may also intensify an existing mood by
selecting product information content that may add to the present
mood. Seventh, affective priming research indicates that the
valence of subliminally exposed primes (e.g., facial expressions)
influences the affective ratings of subsequent targets [40, 43],
including video messages presented on a small screen [48].
Eighth, the perceived personal relevance of the particular
information to the user exerts a robust influence on message
processing and involvement [64]. This means that if the user is
interested in the product described in the information presented,
he will be quite involved when processing the information and
hence his memory of the product will be enhanced. Consequently,
it has been shown that information tailored to the needs and
contexts of users often increases the potential for attitude and
behavior change [5, 11, 41, 66, 67]. Further, there is quite a lot of
research indicating that, when compared to video form, text has a
greater capacity to trigger active or systematic message
processing and is perceived as more involving [see 44]; this
depends on the mood of the user, however [48]. Ninth, some
emotional states and moods lead to secondary effects related to
decision-making, judgment and behavior [4, 10, 20].
It then seems that indeed a relevant area to focus on regarding
persuasion with Psychological Customization is emotion (arousal
and valence) immediately before, during and right after viewing
product information and ads.
One may focus on "primitive" emotional responses or emotions
requiring more extensive cognitive appraisal and processing. Both
of these types of emotions can be linked to various psychological
consequences. Consequently, with emotionally loaded
personalized information products one may focus on i) creating
immediate and primitive emotional responses, ii) creating mood
and iii) indirectly influencing secondary effects of emotion and
mood, such as attention, memory, performance and judgment.
Known psychological mechanisms used to create desired
emotions or moods would be for instance similarity attraction
(trait congruency), decrease of cognitive load (high processing
fluency), mood congruency, excitation transfer, mood
management and affective priming.
These mechanisms are not without problems, as they may also have
opposite effects. For instance, mood congruency may
decrease attention and hence lessen the mobilization of cognitive
resources in processing a persuasive message. Also, even though
emotion is a good candidate for a strong link to persuasion,
the exact nature of this link is unclear.
The key idea of using emotion as a hypothesized gateway to
persuasion would be that more in-depth processing of information
caused by arousal, valence, attention or involvement may lead to
increased memory and perceived trustworthiness of information
and also influence attitudes towards the product in question [e.g.
65]. This in turn may lead to instant behavior, such as buying
online, clicking through an ad or purchasing the item later in a
department store based on long-term memory schemata. It should
be noted that this view is based on empirical evidence of the
psychological effects and their consequences in general; these effects
have not yet been validated with e-commerce systems
that personalize the form of information for persuasion.
Hence, it would be most beneficial to capture the user's emotional
state or mood before the user starts to browse a particular piece
of product information, in order to automatically realize various
effects by adapting the form of information and to track the
changes in the user's online behavior.
3.2 Persuasive Advertising
The effectiveness of persuasion in advertisements in general is a
complex issue. Subliminal priming, use of commonly known
symbols, matching the advertisement to basic biological needs,
such as food, shelter and sex, maximizing the credibility of the
message, telling a compelling story, creating a desirable image of
the perceiver with the product, placing TV-ads immediately after
emotional (arousal) and attentional peaks of TV-programming
and other approaches have been widely used. However, research
into the effectiveness of the form of presentation of advertising is not
widely available in the scientific community. Moreover, little
research has been done to understand the psychological
effectiveness of online advertising. It should also be noted that
advertising is largely a creative and design-driven, high-speed
field of industrial production, in which various types of
authors and artists collaborate, as in film production, to make the
advertisement, rather than a field filled with scientists attempting
to analyze advertisements and their effects in great detail.
In internet-based advertising the advertisement is typically
presented on a web page as a banner. The banner is embedded in
editorial content, such as the front page of a magazine or online
newspaper. The banners are often placed according to the number
and demography of the visitors on a particular section of editorial
content. This means a best guess is taken as to what may be the
most efficient and visible way of placing the banner based on
previous knowledge of the behavior of desired user segments on
the website.
It seems that ads placed in context also work best online: an ad
that is related to the editorial content it is
displayed with is more efficiently persuasive [3]. Another observation is
that online text-based ads may work better than graphics alone,
which is why most ads contain text placed over a graphical
surface. Overall, very simple principles (larger is better, etc.) seem
to guide people's choices: e.g., larger ads are thought to be more
appealing and affective.
Further, in mobile contexts personalized advertising has been
studied from the point of view of targeting users by emotions in
addition to location and other relevant factors [19].
However, the exact transient psychological influence of a
particular piece of editorial content the online advertisement is
displayed with remains unknown. It is possible that the editorial
content repels the user and the advertisement is also labeled by
this emotion. It is also possible that the editorial content induces a
positive emotion and the advertisement gets an advantage based
on this emotional state. The emotional tone of the advertisements
and editorial content may also interact. For example, Kamins,
Marks, and Skinner (1991) showed that commercials that are
more consistent in emotional tone with the TV program perform
better as measured by likeability and purchase intention ratings
than those that are inconsistent in tone. Sometimes advertisements
are changed in real-time per type of user as the system recognizes
a user segment to which a certain banner has been targeted.
However, what is lacking here is i) more detailed information about
the type of user (such as what may be the most efficient way to
influence him psychologically) and ii) knowledge of the
psychological impact on the same user of the editorial content
within which the banner is placed. [22]
With a Psychological Customization system some of these gaps
may be at least indirectly addressed as presented in Table 2.
Table 2. Technological possibilities of persuasive advertising
with Psychological Customization.
Layer of technology: Adaptations of advertising banners

1. Physical (multimedia PC or mobile device)
- The advertisement substance and form may be matched to the technology used by lifestyle segments or other means of segmentation (hip ads for mobile phones etc.)
- Mobile device: user-changeable covers in colors and shapes that facilitate emotion

2. Code (Windows-type user interface; mouse, pen, speech)
- The user interface elements (background color, forms, shapes, directions of navigation buttons etc.) may be varied in real time, per page and per user, for the page on which a certain advertisement is located, to create various emotions and ease of perceptual processing
- The audio channel may be used to create emotional effects (using audio input/output sound, varying pitch, tone, background music, audio effects etc.)

3. Content
A. Substance (fixed multimedia content)
- The editorial content may be matched with the ad
- The content of the ad may be matched to the users based on various factors (interests, use history, demography, personality etc.)
- Subliminal extra content may be added to create emotion
B. Form
Modality (multimedia)
- Modality may be matched to the cognitive style or pre-existing mood of the user to enable easier processing
- Background music, audio effects or ringing tones may be used as a separate modality to facilitate desired emotions and moods
- Animated text can be used to create more efficient processing of text and to facilitate some emotional effects
Visual presentation
- Emotionally evaluated and positioned layout designs and templates for ads (colors, shapes and textures) may be utilized per type of user segment
Structure (temporal, other)
- Emotionally evaluated and positioned narrative templates may be offered for creating emotionally engaging stories
Based on Table 2 a Psychological Customization system may
operate by trying to optimize desired emotional effects that may
be related to persuasion. The content provider, such as a media
company, is able to set desired effects per type of user group and
advertiser need by using a Psychological Customization system.
Also, the placement of ads within desired editorial contexts may
be utilized with a more developed system. When a user whose
profile is already in the content provider's database logs in, the
system starts real-time personalization of the form of information.
Once the user has logged in, the front page of the service may be
altered for him according to advertiser needs. As the user
navigates the system and consumes information, the system
pursues the preset effects to be realized for the user. It is clear that
such a scenario is difficult, but if it is implemented in a simple enough
manner, the persuasive efficiency of online
advertising may increase.
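A simplified version of this advertising scenario is sketched below in Python; the campaign identifiers, effect targets, banner form variants and profile fields are invented for illustration and do not describe an implemented system.

# Minimal sketch of the advertising scenario: per user group, the content
# provider sets a desired effect for a campaign, and the system picks the
# banner form variant (not the substance) associated with that effect.
# All names and values are illustrative assumptions.

DESIRED_EFFECTS = {            # set by the content provider per user group
    ("campaign-7", "young_urban"): "positive_emotion",
    ("campaign-7", "family"): "trust",
}

BANNER_VARIANTS = {            # same ad substance, different form
    "positive_emotion": {"colors": "warm", "animation": True,  "text_style": "playful"},
    "trust":            {"colors": "blue", "animation": False, "text_style": "plain"},
}

def serve_banner(campaign, profile):
    """Return the form variant of the campaign banner for this user."""
    effect = DESIRED_EFFECTS.get((campaign, profile.get("segment")),
                                 "positive_emotion")
    return {"campaign": campaign, "form": BANNER_VARIANTS[effect]}

if __name__ == "__main__":
    user = {"segment": "family", "personality": "introvert"}
    print(serve_banner("campaign-7", user))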
3.3 Persuasive e-Commerce Product
Information
Personalized e-commerce has not been studied widely. It has been
found that while personalization of the content substance
displayed to each user may provide value, the users have a strong
motivation to understand why the system is displaying a
particular piece of information for them. Also, users wish to be in
control of their user profiles. [see 1, 23]
Hence, it seems that users are at least partly suspicious of the
system adapting the substance of information to them. However,
in many cases it may be possible to adapt the form of information
in personalized applications in conjunction with content substance
variation, or even without it. The adaptation of the form of
information to the user may even be a more transparent way of
personalizing user-system interaction, as the user most likely does
not question the form of a particular substance. Hence, there are
emerging possibilities for personalization and customization in
this area.
There are at least two different types of advanced e-commerce
systems commonly used: i) systems using recommendation
engines and other personalization features to present information
in a media-like manner and ii) systems using persuasive interface
agents, creating a relationship between the user and the agent. The
focus here is mostly on presentation of product information, such
as information (product properties, comparisons, pricing,
functionalities and other information) of a new car, digital
camera, computer or garment. Although it has usually been suggested
that, in the context of product presentation, users prefer a
combination of pictures and text [e.g. 35], individual
differences in their preferences are likely to occur.
The technological possibilities for persuasive presentation of
product information are much like those presented for persuasive
advertising seen in Table 2. In other words, different layers of
technology may be adapted to the user of an e-commerce system
to create various psychological effects when presenting product
information.
With personalized e-commerce systems for product information
presentation, one may facilitate positive emotional responses, for
instance, by selecting the modalities of the information to be
displayed according to the users' processing styles and by altering
the visual layouts of the interface according to the users' personalities.
The ease of processing information and the similarity-attraction
between visual layouts and personalities may create positive
emotional states. As for brand awareness, one may indirectly
influence memory by facilitating positive emotion and thereby
increase memory-based performance on tasks such as brand
recognition and recall. By increasing attention, one may increase
the likelihood that the user of an e-commerce system learns
product information more efficiently. Positive emotion and mood
also have the effect of making the user adopt a less risk-prone
approach to making decisions [20]. This may be used to present
product information in a familiar manner, creating a safe
atmosphere around the product to make it more desirable when
the user is making purchasing decisions in a positive mood.
Psychological Customization may also be used for persuasion
with recommendation systems. The system knows the user's
profile, such as personality type, and the desired psychological
effect is set to positive emotion in as many page views of the
recommendations as possible. The user starts using the system
and finds an interesting product that the system recommends to
her. The form of the recommendation information is tailored to
the user's profile and the desired psychological effect in real time
when the page loads, to make the realization of positive emotion
as probable as possible. The system may change the modality of the
recommendation from text to audio, or from audio to animation;
it may change the background colors of the page and
modify the shape and color of the navigation buttons, for instance.
In this case, the system will try to do everything possible to
facilitate positive emotion except change the substance of the
recommendation itself. Naturally, depending on the type of user,
the type of recommendation, the available
databases of recommendation information and the available
means of Psychological Customization of the form of
recommendation information, the effect is more or
less likely to occur. However, even effects that provide a few percent,
or even tens of percent, more targeted positive
emotion may make a difference in persuasion, and hence in attitudes
towards the product and buying behavior. This is especially true if
the recommendation website has masses of users, in which case
even a slight increase in sales effectiveness may add up to
significant amounts of revenue.
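The recommendation scenario could be sketched roughly as follows (in Python): the substance of the recommendation is left untouched and only its form is adapted toward positive emotion. The profile fields and adaptation choices are assumptions for illustration.

# Minimal sketch of form-tailored recommendations: the recommended item
# (substance) is not changed; only modality, colors and button shapes are
# adapted per the user's profile to make positive emotion more likely.
# All profile fields and adaptation choices are hypothetical.

def tailor_form(profile):
    """Pick presentation form parameters aimed at positive emotion."""
    if profile.get("prefers_audio"):
        modality = "audio"
    elif profile.get("personality") == "extrovert":
        modality = "animation"
    else:
        modality = "text_with_images"
    return {"modality": modality,
            "background_color": profile.get("favorite_color", "soft_blue"),
            "button_shape": "rounded"}   # assumed to feel less aggressive

def present_recommendation(item, profile):
    """Combine the unchanged recommendation substance with tailored form."""
    return {"substance": item, "form": tailor_form(profile)}

if __name__ == "__main__":
    item = {"product": "digital camera", "reason": "matches earlier purchases"}
    user = {"personality": "extrovert", "prefers_audio": False}
    print(present_recommendation(item, user))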
Further, one may discuss interface agents for product information
presentation. With interface agents, an illusion of interacting
with another human being is often created in the user by using,
for example, animated agents that seemingly exhibit various
human properties, such as gender, personality, politeness, group
membership and other factors. Here one possible application
would be to add an agent that floats atop a page with product
information to comment on or recommend it, to aid the user in
navigating and finding interesting information, and to act as a
feedback channel for the user, for example by collecting the user's
interest profile or other situationally relevant information.
It is known that both the substance of the interaction (what is
being sold, or what information is presented, and what the agent
says, or how it acts) and the form of interaction (how information
is presented, what is the appearance and personality or other
factors of the agent) influence for instance trust, persuasiveness,
emotion and liking of the transaction [e.g. 51].
What Psychological Customization may add here is a more
systematic and efficient personalization of the way of presenting
information, together with customization of the selected appearance
and other features of the agent, without actually changing the
substance of the interaction, i.e. what the agent says or what
product information is presented, for instance.
CONCLUSION
The authors believe that no other comprehensive framework for
varying the form of information to systematically create emotional
and cognitive effects has been presented, specifically for the
persuasive presentation of online advertising and product
information on e-commerce sites. There are various differences to
other approaches to influencing user experience in general. Usability
studies traditionally address the question of how to make difficult
technology easy to use. Usability is at least partly founded on the
idea of optimal human-machine performance, i.e. how well a user
can manipulate and control a machine. However, there is a
growing conviction that usability alone is not sufficient to ensure
high user satisfaction [see 12, 24].
The approach to system design presented in this paper may be
beneficial to the fields of e-commerce and online advertising
because: i) it provides a possibility to personalize the form of
information that may be more transparent and acceptable to the
users than adapting the substance of information, ii) it offers a
way of more systematically accessing and controlling transient
psychological effects of users of e-commerce and advertisement
displaying systems, iii) it offers possibilities to more efficiently
persuade and consequently influence behavior of individual users
and iv) it is compatible with existing and new systems
(recommendation engines, click-through-systems, other) as an
add-on or a middleware layer in software with many potential
application areas.
The potential drawbacks of the framework include the following:
i) it may be costly to build the design-rule databases and actually
working real-life systems for creating systematic psychological
effects, ii) the rule-databases may have to be adapted also locally
and culturally, iii) the method needed to create a rule database is
not easy to use and its ecological validity may be questionable
(eye-tracking, behavioral and psychophysiological measures,
self-reports and field tests are needed to verify laboratory results etc.) and
iv) if the system works efficiently it may raise privacy issues,
such as the intimacy of a personal psychological user profile
(personality, cognitive style, values, other). Also ethical issues
related to mind-control or even propaganda may arise.
It should be noted that to build a smoothly functioning
Psychological Customization system one should do much more
research and gain more evidence of the systematic relationships of
user profiles, information forms and psychological effects.
However, in our research for the past four years we have found
many feasible rules for personalization for psychological effects.
Regarding future research, content management technologies
should be elaborated to provide for the platform that prototypes
can be built on. Consequently, we aim to build, evaluate and
field-test prototypes of Psychological Customization in various
areas, specifically in mobile, urban ad-hoc contexts and situations
related to mobile advertising and e-commerce, but also other
areas such as mobile gaming communities, mobile content,
mobile messaging, knowledge work systems and city navigation.
REFERENCES
[1] Alpert, S. R., Karat, J., Karat, C.-M., Brodie, C. and Vergo,
J. G. (2003) User Attitudes Regarding a User-Adaptive
eCommerce Web Site. User Modeling and User-Adapted
Interaction 13(4): 373-396, November 2003.
[2] Bandura, A. (1977) Social learning theory. Englewood Cliffs,
NJ: Prentice-Hall.
[3] Baudisch, P. and Leopold, D. (2000) Attention, indifference,
dislike, action: Web advertising involving users. Netnomics
Volume 2 , Issue 1 2000, pp. 75-83.
[4] Brave S. and Nass, C. (2003). Emotion in human-computer
interaction. In Jacko, J.A. and Sears, A. (Ed.), The Human-Computer
Interaction Handbook. Fundamentals, Evolving
Technologies and Emerging Applications. (pp. 81-96).
London : Lawrence Erlbaum Associates.
[5] Beniger, J. R. (1987) Personalization of mass media and the
growth of pseudo-community. Communication Research,
14(3), pp 352-371.
[6] Benkler, Y. (2000) From Consumers to Users: Shifting the
Deeper Structures of Regulation. Federal Communications
Law Journal 52, 561-63.
[7] Billmann, D. (1998) Representations. In Bechtel, W. and
Graham, G. (1998) A companion to cognitive science, 649-659
. Blackwell publishers, Malden, MA.
[8] Biocca, F. and Levy, M. (1995) Communication in the age of
virtual reality. Lawrence Erlbaum, Hillsdale, NJ.
[9] Cuperfain, R. and Clarke, T. K. (1985) A new perspective on
subliminal perception. Journal of Advertising, 14, 36-41.
[10] Clore, G. C. and Gasper, K. (2000). Feeling is believing.
Some affective influences on belief. In Frijda, N.H.,
Manstead, A. S. R. and Bem, S. (Ed.), Emotions and beliefs:
How feelings influence thoughts (pp. 10-44).
Paris/Cambridge: Editions de la Maison des Sciences de
l'Homme and Cambridge University Press.
[11] Dijkstra, J. J., Liebrand, W.B.G and Timminga, E. (1998)
Persuasiveness of expert systems. Behavior and Information
Technology, 17(3), pp 155-163.
[12] Dillon, A. (2001). Beyond usability: process, outcome and
affect in human computer interactions. Online:
http://www.ischool.utexas.edu/~adillon/publications/beyond_usability.html.
[13] Egan, D. E. (1988). Individual differences in human-computer
interaction. In: M. Helander (Ed.), Handbook of
Human-Computer Interaction, pp. 543-568. Elsevier, New
York.
[14] Eysenck, M. (1994) Individual Differences: Normal and
Abnormal. New Jersey: Erlbaum.
[15] Fogg, B.J. (2003) Motivating, influencing and persuading
users. In Jacko, J.A. and Sears, A. (Ed.), The Human-Computer
Interaction Handbook. Fundamentals, Evolving
Technologies and Emerging Applications. (pp. 81-96).
London : Lawrence Erlbaum Associates.
[16] Fogg, B. J. (2002) Persuasive technology. Using computers
to change what we think and do. Morgan Kaufmann
Publishers, New York.
[17] Ford, M. E. (1992) Motivating humans: goals, emotions,
personal agency beliefs. Newbury Park, Ca: Sage.
[18] Hampson, S. E. & Colman, A. M. (Eds., 1995) Individual
differences and personality. London: Longman.
[19] Hristova, N. and O'Hare, G. M. P. (2004) Ad-me: Wireless
Advertising Adapted to the User Location, Device and
Emotions.
Proceedings of the Proceedings of the 37th
Annual Hawaii International Conference on System
Sciences (HICSS'04) - Track 9 - Volume 9
[20] Isen, A. M. (2000). Positive affect and decision making. In
Lewis, M. and Haviland-Jones, J. M. (Ed.), Handbook of
emotions (2nd ed.) (pp. 417-435). New York: Guilford Press.
[21] Kallinen, K., & Ravaja, N. (in press). Emotion-related
effects of speech rate and rising vs. falling background music
melody during audio news: The moderating influence of
personality. Personality and Individual Differences.
[22] Kamins, M.A., Marks, L.J., & Skinner, D. (1991). Television
commercial evaluation in the context of program induced
mood: Congruency versus consistency effects. Journal of
Advertising, 20, 1-14.
[23] Karat, M-C., Blom, J. and Karat, J. (in press) Designing
Personalized User Experiences in eCommerce. Dordrecht:
Kluwer.
[24] Karat, J. (2003) Beyond task completion: Evaluation of
affective components of use. In Jacko, J.A. and Sears, A.
(Ed.), The Human-Computer Interaction Handbook.
Fundamentals, Evolving Technologies and Emerging
Applications. (pp. 81-96). London : Lawrence Erlbaum
Associates.
[25] Kihlstrom, J. F., Barnhardt, T. M. and Tataryn, D. J. (1992)
Implicit perception. In Bornstein, R. F. and Pittman, T. S.
(eds.) Perception without awareness. Cognitive, clinical and
social perspectives, 17-54. Guilford, New York.
[26] Krohne, H.W., Pieper, M., Knoll, N., & Breimer, N. (2002).
The cognitive regulation of emotions: The role of success
versus failure experience and coping dispositions. Cognition
and Emotion, 16, 217-243.
[27] Krosnick, J. A., Betz, A. L., Jussim, J. L. and Lynn, A. R.
(1992) Subliminal conditioning of attitudes. Personality and
Social Psychology Bulletin, 18, 152-162.
[28] Laarni, J. (2003). Effects of color, font type and font style on
user preferences. In C. Stephanidis (Ed.) Adjunct
Proceedings of HCI International 2003. (Pp. 31-32). Crete
University Press, Heraklion.
[29] Laarni, J. (2002). Searching for optimal methods of
presenting dynamic text on different types of screens. In:
O. W. Bertelsen, S. Bødker & K. Kuutti (Eds.), Tradition and
Transcendence. Proceedings of The Second Nordic
Conference on Human-Computer Interaction, October 19-23,
2002, Aarhus, Denmark (pp. 217-220).
[30] Laarni, J. & Kojo, I.(2001). Reading financial news from
PDA and laptop displays. In: M. J. Smith & G. Salvendy
(Eds.) Systems, Social and Internationalization Design
Aspects of Human-Computer Interaction. Vol. 2 of
Proceedings of HCI International 2001. Lawrence Erlbaum,
Hillsdale, NJ. (pp. 109-113).
[31] Laarni, J., Kojo, I. & Kärkkäinen, L. (2002). Reading and
searching information on small display screens. In: D. de
Waard, K. Brookhuis, J. Moraal, & A. Toffetti (Eds.),
Human Factors in Transportation, Communication, Health,
and the Workplace. (pp. 505-516). Shake, Maastricht. (On
the occasion of the Human Factors and Ergonomics Society
Europe Chapter Annual Meeting in Turin, Italy, November
2001).
[32] Lang, A. (1990) Involuntary attention and physiological
arousal evoked by structural features and mild emotion in
TV commercials. Communication Research, 17 (3), 275-299.
[33] Lang, A., Dhillon, P. and Dong, Q. (1995) Arousal, emotion
and memory for television messages. Journal of
Broadcasting and Electronic Media, 38, 1-15.
[34] Lang, A., Newhagen, J. and Reeves. B. (1996) Negative
video as structure: Emotion, attention, capacity and memory.
Journal of Broadcasting and Electronic Media, 40, 460-477.
[35] Lightner, N.J., & Eastman, C. (2002). User preference for
product information in remote purchase environments.
Journal of Electronic Commerce Research [Online], 3,
Available from http://www.csulb.edu/web/journals/jecr/issues/20023/paper6.pdf.
[36] Lombard, M. and Ditton, T. (2000) Measuring presence: A
literature-based approach to the development of a
standardized paper-and-pencil instrument. Project abstract
submitted to Presence 2000: The third international
workshop on presence.
[37] Lombard, M., Reich, R., Grabe, M. E., Bracken, C. and
Ditton, T. (2000) Presence and television: The role of screen
size. Human Communication Research, 26(1), 75-98.
[38] Meyers-Levy, J. & Malaviya, P. (1998). Consumers'
processing of persuasive advertisements. An integrative
framework of persuasion theories. Journal of Marketing 63,
45-60.
[39] McGuire, W. J. (1989) Theoretical foundations of
campaigns. In R.E. Rice and C.K. Atkin (eds.) Public
communication campaigns (2nd ed., pp. 43-65). Newbury
Park, CA: Sage.
[40] Monahan, J.L. (1998). I don't know it but I like you: The
influence of nonconscious affect on person perception.
Human Communication Research, 24, 480-500.
[41] Nowak, G. J., Shamp, S., Hollander, B., Cameron, G. T.,
Schumann, D. W. and Thorson, E. (1999) Interactive media:
A means for more meaningful advertising? Advertising and
consumer psychology. Mahwah, NJ: Lawrence Erlbaum.
[42] Petty, R.E., & Cacioppo, J.T. (1986). Communication and
persuasion: Central and peripheral routes to attitude change.
New York: Springer-Verlag.
[43] Murphy, S.T., Monahan, J.L., & Zajonc, R.B. (1995).
Additivity of nonconscious affect: Combined effects of
priming and exposure. Journal of Personality and Social
Psychology, 69, 589-602.
[44] Pfau, M., Holbert, R.L., Zubric, S.J., Pasha, N.H., & Lin,
W.-K. (2000). Role and influence of communication
modality in the process of resistance to persuasion. Media
Psychology, 2, 1-33.
[45] Ravaja, N. (2004). Effects of a small talking facial image on
autonomic activity: The moderating influence of
dispositional BIS and BAS sensitivities and emotions.
Biological Psychology, 65, 163-183.
[46] Ravaja, N. (in press). Contributions of psychophysiology to
media research: Review and recommendations. Media
Psychology.
[47] Ravaja, N., & Kallinen, K. (in press). Emotional effects of
startling background music during reading news reports: The
moderating influence of dispositional BIS and BAS
sensitivities. Scandinavian Journal of Psychology.
[48] Ravaja, N., Kallinen, K., Saari, T., & Keltikangas-Jrvinen,
L. (in press). Suboptimal exposure to facial expressions
when viewing video messages from a small screen: Effects
on emotion, attention, and memory. Journal of Experimental
Psychology: Applied.
[49] Ravaja, N., Saari, T., Kallinen, K., & Laarni, J. (2004). The
Role of Mood in the Processing of Media Messages from a
Small Screen: Effects on Subjective and Physiological
Responses. Manuscript submitted for publication.
[50] Reardon, K. R. (1991) Persuasion in practice. Newbury Park,
Ca: Sage.
[51] Reeves, B. and Nass, C. (1996) The media equation. How
people treat computers, television and new media like real
people and places. Cambridge University Press, CSLI,
Stanford.
[52] Riding, R. J. and Rayner, S. (1998) Cognitive styles and
learning strategies. Understanding style differences in
learning and behavior. David Fulton Publishers, London.
253
253
[53] Riecken, D. (2000) Personalized views on personalization.
Communications of the ACM, V. 43, 8, 27-28.
[54] Rusting, C.L. (1998). Personality, mood, and cognitive
processing of emotional information: Three conceptual
frameworks. Psychological Bulletin, 124, 165-196.
[55] Saari, T. (1998) Knowledge creation and the production of
individual autonomy. How news influences subjective
reality. Reports from the department of teacher education in
Tampere university. A15/1998.
[56] Saari, T. (2001) Mind-Based Media and Communications
Technologies. How the Form of Information Influences Felt
Meaning. Acta Universitatis Tamperensis 834. Tampere
University Press, Tampere 2001.
[57] Saari, T. (2002) Designing Mind-Based Media and
Communications Technologies. Proceedings of Presence
2002 Conference, Porto, Portugal.
[58] Saari, T. (2003a) Designing for Psychological Effects.
Towards Mind-Based Media and Communications
Technologies. In Harris, D., Duffy, V., Smith, M. and
Stephanidis, C. (eds.) Human-Centred Computing:
Cognitive, Social and Ergonomic Aspects. Volume 3 of the
Proceedings of HCI International 2003, pp. 557-561.
[59] Saari, T. (2003b) Mind-Based Media and Communications
Technologies. A Framework for producing personalized
psychological effects. Proceedings of Human Factors and
Ergonomics 2003 -conference. 13.-17.10.2003 Denver,
Colorado.
[60] Saari, T. (in press, a) Using Mind-Based Technologies to
facilitate Positive Emotion and Mood with Media Content.
Accepted to proceedings of to Positive Emotion, 2
nd
European Conference. Italy, July 2004.
[61] Saari, T. (in press, b) Facilitating Learning from Online
News with Mind-Based Technologies. Accepted to
proceedings of EDMedia 2004, Lugano, Switzerland.
[62] Saari, T. and Turpeinen, M. (in press, a) Towards
Psychological Customization of Information for Individuals
and Social Groups. In Karat, J., Blom, J. and Karat. M.-C.
(eds.) Personalization of User Experiences for eCommerce,
Kluwer.
[63] Saari, T. and Turpeinen, M. (in press, b) Psychological
customization of information. Applications for personalizing
the form of news. Accepted to proceedings of ICA 2004, 27.31
.5. 2004, New Orleans, USA.
[64] Schneider, S.L., & Laurion, S.K. (1993). Do we know what
we've learned from listening to the news? Memory and
Cognition, 21, 198-209.
[65] Schwarz, N. (in press). Meta-cognitive experiences in
consumer judgment and decision making. Journal of
Consumer Psychology.
[66] Stretcher, V. J., Kreutzer, M., Den Boer, D.-J., Kobrin, S.,
Hospers, H. J., and Skinner, C. S. (1994) the effects of
computer-tailored smoking cessation messages in family
practice settings. Journal of Family Practice, 39(3), 262-270.
[67] Stretcher, V. J. (1999) Computer tailored smoking cessation
materials: A review and discussion. Special issue: Computer
tailored education. Patient Education & Counceling, 36(2),
107-117.
[68] Turpeinen, M. (2000) Customizing news content for
individuals and communities. Acta Polytechnica
Scandinavica. Mathematics and computing series no. 103.
Helsinki University of Technology, Espoo.
[69] Turpeinen, M. and Saari, T. (2004) System Architechture for
Psychological Customization of Information. Proceedings of
HICSS-37- conference, 5.-8.1. 2004, Hawaii.
[70] Vecchi, T., Phillips, L. H. & Cornoldi, C. (2001). Individual
differences in visuo-spatial working memory. In: M. Denis,
R. H. Logie, C. Cornoldi, M. de Vega, & J. Engelkamp
(Eds.), Imagery, language, and visuo-spatial thinking.
Psychology Press, Hove.
[71] Zillmann, D. (1971). Excitation transfer in communication-mediated
aggressive behavior. Journal of Experimental
Social Psychology, 7, 419-434.
[72] Zillmann, D. (1978). Attribution and misattribution of
excitatory reactions. In J.H. Harvey, W.J. Ickes, & R.F. Kidd
(Eds.), New directions in attribution research (Vol. 2, pp.
335-368). Hillsdale, NJ: Lawrence Erlbaum Associates.
[73] Zillmann, D., & Bryant, J. (1985). Affect, mood, and
emotion as determinants of selective exposure. In D.
Zillmann & J. Bryant (Eds.), Selective exposure to
communication (pp. 157-190). Hillsdale, NJ: Lawrence
Erlbaum.
| personalization emotion;persuasion;advertising;E-commerce
156 | Publicly Verifiable Ownership Protection for Relational Databases | Today, watermarking techniques have been extended from the multimedia context to relational databases so as to protect the ownership of data even after the data are published or distributed. However, all existing watermarking schemes for relational databases are secret key based, thus requiring a secret key to be presented in proof of ownership. This means that the ownership can only be proven once to the public (e.g., to the court). After that, the secret key is known to the public and the embedded watermark can be easily destroyed by malicious users. Moreover, most of the existing techniques introduce distortions to the underlying data in the watermarking process, either by modifying least significant bits or exchanging categorical values. The distortions inevitably reduce the value of the data. In this paper, we propose a watermarking scheme by which the ownership of data can be publicly proven by anyone, as many times as necessary. The proposed scheme is distortion-free, thus suitable for watermarking any type of data without fear of error constraints. The proposed scheme is robust against typical database attacks including tuple/attribute insertion/deletion, random/selective value modification, data frame-up, and additive attacks. | INTRODUCTION
Ownership protection of digital products after dissemination has
long been a concern due to the high value of these assets and the
low cost of copying them (i.e., piracy problem). With the fast development
of information technology, an increasing number of digital
products are distributed through the internet. The piracy problem
has become one of the most devastating threats to networking systems
and electronic business. In recent years, realizing that "the law
does not now provide sufficient protection to the comprehensive
and commercially and publicly useful databases that are at the heart
of the information economy" [12], people have joined together to
fight against theft and misuse of databases published online (e.g.,
parametric specifications, surveys, and life sciences data) [32, 4].
To address this concern and to fight against data piracy, watermarking
techniques have been introduced, first in the multimedia
context and now in relational database literature, so that the ownership
of the data can be asserted based on the detection of a watermark. The use of a watermark should not affect the usefulness of
data, and it must be difficult for a pirate to invalidate watermark detection
without rendering the data much less useful. Watermarking
thus deters illegal copying by providing a means for establishing
the original ownership of a redistributed copy [1].
In recent years, researchers have developed a variety of watermarking
techniques for protecting the ownership of relational databases
[1, 28, 26, 29, 13, 19, 20, 2] (see Section 5 for more on related
work). One common feature of these techniques is that they are secret
key based, where ownership is proven through the knowledge
of a secret key that is used for both watermark insertion and detection
. Another common feature is that distortions are introduced
to the underlying data in the process of watermarking. Most techniques
modify numerical attributes [1, 28, 29, 13, 19, 20], while
others swap categorical values [26, 2]. The distortions are made
such that the usability of data for certain applications is not affected
and that watermark detection can be performed even in the
presence of attacks such as value modification and tuple selection.
The above two features may severely affect the application of
watermarking techniques for relational databases. First, the secret
key based approach is not suitable for proving ownership to the
public (e.g., in a court). To prove ownership of suspicious data,
the owner has to reveal his secret key to the public for watermark
detection. After being used one time, the key is no longer secret.
With access to the key, a pirate can invalidate watermark detection
by either removing watermarks from protected data or adding a
false watermark to non-watermarked data.
Second, the distortions that are introduced in the process of watermarking
may affect the usefulness of data. Even though certain
kind of error constraints (e.g., means and variances of watermarked
attributes) can be enforced prior to or during the watermarking
process, it is difficult or even impossible to quantify all
possible constraints, which may include domain constraints, uniqueness constraints, referential integrity constraints, functional dependencies, semantic integrity constraints, association, correlation, cardinality constraints, the frequencies of attribute values, and statistical distributions. In addition, any change to categorical data may be
considered to be significant. Another difficulty is that the distortions
introduced by watermarking cannot be reduced arbitrarily. A
tradeoff has to be made between watermark distortions and the robustness
of watermark detection (roughly speaking, the more distortions
introduced in the watermarking process, the more likely
that a watermark can be detected in the presence of database attacks
).
In this paper, we attempt to design a new database watermarking
scheme that can be used for publicly verifiable ownership protection
and that introduces no distortions. Our research was motivated
in part by certain aspects of public key watermarking schemes in
the multimedia context, yet it is fundamentally different and particularly
customized for relational databases (see also Section 5 for related
work). Our scheme has the following unique properties. First,
our scheme is publicly verifiable. Watermark detection and ownership
proof can be effectively performed publicly by anyone as
many times as necessary. Second, our scheme introduces no errors
to the underlying data (i.e., it is distortion-free); it can be used for
watermarking any type of data including integer numeric, real numeric
, character, and Boolean, without fear of any error constraints.
Third, our scheme is efficient for incremental updating of data. It
is designed to facilitate typical database operations such as tuple
insertion, deletion, and value modification. Fourth, our scheme is
robust. It is difficult to invalidate watermark detection and ownership
proof through typical database attacks and other attacks. With
these properties, we believe that our watermarking technique can
be applied practically in the real world for the protection of ownership
of published or distributed databases.
The rest of the paper is organized as follows. Section 2 presents
our watermarking scheme, which includes watermark generation
and detection. Section 3 studies how to prove ownership publicly
using a watermark certificate. It also investigates certificate revocation
and incremental update in our scheme. Section 4 analyzes the
robustness of our scheme and the tradeoff between its robustness
and overhead. Section 5 comments on related work, and section 6
concludes the paper.
THE SCHEME
Our scheme watermarks a database relation R whose schema is
R(P, A_0, ..., A_{ν−1}), where P is a primary key attribute (later we discuss extensions for watermarking a relation that does not have a primary key attribute). There is no constraint on the types of attributes used for watermarking; the attributes can be integer numeric, real numeric, character, Boolean, or any other types. Attributes are represented by bit strings in computer systems. Let η denote the number of tuples in relation R. For each attribute of a tuple, the most significant bit (MSB) of its standard binary representation may be used in the generation of a watermark. It is assumed that any change to an MSB would introduce intolerable error to the underlying data value. For ease of referencing, Table 1 lists the symbols that will be used in this paper.
2.1 Watermark Generation
Let the owner of relation R possess a watermark key K, which
will be used in both watermark generation and detection. The watermark
key should be capable of publicly proving ownership as
many times as necessary. This is in contrast to traditional watermarking
, where a watermark key is kept secret so that the database owner
can prove his ownership by revealing the key for detecting the watermark
. However, under that formation, the ownership can be publicly
proved only once. In addition, the key should be long enough
to thwart brute force guessing attacks to the key.
Algorithm 1 genW(R, K, γ) // Generating watermark W for DB relation R
1: for each tuple r in R do
2:   construct a tuple t in W with the same primary key t.P = r.P
3:   for i = 0; i < γ; i = i + 1 do
4:     j = G_i(K, r.P) mod (the number of attributes in r)
5:     t.W_i = MSB of the j-th attribute in r
6:     delete the j-th attribute from r
7:   end for
8: end for
9: return W
In our scheme, the watermark key is public and may take any
value (numerical, binary, or categorical) selected by the owner.
There is no constraint on the formation of the key. To reduce unnecessary
confusion, the watermark key should be unique to the
owner with respect to the watermarked relation. We suggest the
watermark key be chosen as
K = h(ID | DB name | version | ...)    (1)
where ID is the owner's identity, `|' indicates concatenation, and
h() is a cryptographic hash function (e.g., SHA-512) [22]. In the
case of multiple owners, the public key can be extended to be a
combination of all the owners' IDs or generated from them using a
threshold scheme. For simplicity, we assume that there is a single
owner of DB relation R in the following.
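As a concrete illustration, the following Python sketch derives such a public watermark key with SHA-512, as suggested above; the exact field list and the '|' separator are illustrative assumptions, not a prescribed encoding.

    import hashlib

    def watermark_key(owner_id: str, db_name: str, version: str) -> str:
        # K = h(ID | DB name | version | ...), here using SHA-512 as suggested in the text.
        material = "|".join([owner_id, db_name, version])
        return hashlib.sha512(material.encode("utf-8")).hexdigest()

    # Example: K is public and can be republished as often as needed.
    K = watermark_key("Alice Data Corp", "customers", "v1")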
Our concept of public watermark key is different from that of a
public key in public key infrastructure (PKI) [16]. In the cryptography
literature, a public key is paired with a private key such that a
message encoded with one key can be decoded with its paired key;
the key pair is selected in a specific way such that it is computationally infeasible to infer a private key from the corresponding public
key. In our watermarking scheme, there is no private key, and the
public watermark key can be arbitrarily selected. If the watermark
key is derived from the owner's ID as suggested, it is similar to
the public key in identity based cryptography [25, 3, 5], though the
owner does not need to request a private key from a key distribution
center (KDC).
The watermark key is used to decide the composition of a public watermark W. The watermark W is a database relation whose schema is W(P, W_0, ..., W_{γ−1}), where W_0, ..., W_{γ−1} are binary attributes. Compared to DB relation R, the watermark (relation) W has the same number of tuples and the same primary key attribute P. The number γ of binary attributes in W is a control parameter that determines the number ω of bits in W, where ω = γη and γ ≤ ν. In particular, we call γ the watermark generation parameter.
Algorithm 1 gives the procedure genW(R, K, γ) for generating the watermark W. In the algorithm, a cryptographic pseudo-random sequence generator (see chapter 16 in [24]) G is seeded with the concatenation of the watermark key K and the primary key r.P for each tuple r in R, generating a sequence of numbers {G_i(K, r.P)}. The MSBs of selected values are used for generating the
watermark. The whole process does not introduce any distortions
to the original data. The use of MSBs is for thwarting potential
attacks that modify the data. Since the watermark key K, the watermark
W , and the algorithm genW are publicly known, anyone
can locate those MSBs in R that are used for generating W . However
, an attacker cannot modify those MSBs without introducing
intolerable errors to the data.
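For illustration, a minimal Python sketch of the generation procedure of Algorithm 1 follows. HMAC-SHA256 stands in for the keyed pseudo-random sequence G_i(K, r.P), and each tuple is assumed to be a primary key mapped to a list of attribute bit strings; both are illustrative assumptions rather than the paper's prescribed instantiation.

    import hmac, hashlib

    def G(K: str, pk: str, i: int) -> int:
        # Keyed pseudo-random number G_i(K, r.P); HMAC-SHA256 is an illustrative choice.
        digest = hmac.new(K.encode(), f"{pk}|{i}".encode(), hashlib.sha256).digest()
        return int.from_bytes(digest, "big")

    def msb(value_bits: str) -> str:
        # Most significant bit of a value's binary representation.
        return value_bits[0]

    def gen_w(R, K: str, gamma: int):
        # genW(R, K, gamma): R maps primary key -> list of attribute bit strings.
        W = {}
        for pk, attrs in R.items():
            remaining = list(attrs)              # work on a copy of the tuple
            bits = []
            for i in range(gamma):
                j = G(K, pk, i) % len(remaining)
                bits.append(msb(remaining[j]))   # t.W_i = MSB of the j-th remaining attribute
                del remaining[j]                 # delete the j-th attribute from the copy
            W[pk] = bits
        return W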
In the construction of watermark W , each tuple in relation R
R    database relation to be watermarked
η    number of tuples in relation R
ν    number of attributes in relation R
W    database watermark (relation) generated in watermarking
γ    (watermark generation parameter) number of binary attributes in watermark W
ω    number of bits in W; ω = γη
τ    (watermark detection parameter) least fraction of watermark bits required for watermark detection
K    watermark key
Table 1: Notation in watermarking
Algorithm 2 detW(R′, K, γ, W, τ) // Detecting watermark for DB relation R′
1: match_count = 0
2: total_count = 0
3: for each tuple r in R′ do
4:   get a tuple t in W with the same primary key t.P = r.P
5:   for i = 0; i < γ; i = i + 1 do
6:     total_count = total_count + 1
7:     j = G_i(K, r.P) mod (the number of attributes in r)
8:     if t.W_i = MSB of the j-th attribute in r then
9:       match_count = match_count + 1
10:    end if
11:    delete the j-th attribute from r
12:  end for
13: end for
14: if match_count/total_count > τ then
15:   return true
16: else
17:   return false
18: end if
contributes γ MSBs from different attributes that are pseudo-randomly selected based on the watermark key and the primary key of the tuple. It is impossible for an attacker to remove all of the watermark bits by deleting some but not all of the tuples and/or attributes from the watermarked data. The larger the watermark generation parameter γ, the more robust our scheme is against such deletion attacks.
2.2 Watermark Detection
Our watermark detection is designed to be performed publicly
by anyone as many times as necessary. This is a notable difference
compared with previous approaches, which are secret key based. In
watermark detection, the public watermark key K and watermark
W are needed to check a suspicious database relation R′. It is assumed that the primary key attribute has not been changed or else can be recovered. If the primary key cannot be relied on, one can turn to other attributes, as will be discussed in Section 2.4.
Algorithm 2 gives the procedure detW(R′, K, γ, W, τ) for detecting watermark W from relation R′, where γ is the watermark generation parameter used in watermark generation, and τ is the watermark detection parameter, that is, the least fraction of correctly detected watermark bits. Both parameters are used to control the assurance and robustness of watermark detection, as will be analyzed in Section 4. The watermark detection parameter τ is in the range [0.5, 1). To increase the robustness of watermark detection, we do not require that all detected MSBs in R′ match the corresponding bits in W, but that the percentage of the matches is more than τ (i.e., match_count/total_count > τ in Algorithm 2).
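A matching Python sketch of the detection procedure of Algorithm 2 is given below; it reuses the G and msb helpers from the generation sketch and the same illustrative data layout (primary key mapped to a list of attribute bit strings).

    def det_w(R_suspect, K: str, gamma: int, W, tau: float) -> bool:
        # detW(R', K, gamma, W, tau): returns True iff match_count/total_count > tau.
        match_count = 0
        total_count = 0
        for pk, attrs in R_suspect.items():
            t = W.get(pk)
            if t is None:
                continue  # no watermark tuple with this primary key
            remaining = list(attrs)
            for i in range(gamma):
                total_count += 1
                j = G(K, pk, i) % len(remaining)
                if t[i] == msb(remaining[j]):
                    match_count += 1
                del remaining[j]
        return total_count > 0 and match_count / total_count > tau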
2.3 Randomized MSBs
Most modern computers can represent and process four primitive
types of data besides memory addresses: integer numeric, real
numeric, character, and Boolean. Regardless of its type, a data item
is represented in computer systems as a bit string. The MSB of the
bit string is the leftmost digit, which has the greatest weight. In a
signed numeric format (integer or real), the MSB can be the sign
bit, indicating whether the data item is negative or not (in most commonly used storage formats, the sign bit is 1 for a negative number and 0 for a non-negative number). If the sign
bit is not chosen (or there is no sign bit), the MSB can be the high
order bit (next to the sign bit; in floating point format, it is the leftmost
bit of exponent). For character or Boolean data, any bit can
be an MSB and we simply choose the leftmost one.
We assume that watermark bits generated from selected MSBs
are randomly distributed; that is, each MSB has the same probability
of 1/2 to be 1 or 0. This randomness is important in our
robustness analysis (see Section 4). If this is not the case, then we
randomize the MSBs by XOR'ing them with random mask bits. For
the MSB of the j-th attribute of tuple r, the corresponding mask bit
is the j-th bit of hash value h(K|r.P) if j , where is the bit-length
of hash output. In general, if (k - 1) < j k, the mask
bit is the (j - (k - 1))-th bit of hash value h
k
(K
|r.P). Since the
hash value is computed from the unique primary key, the mask bit
is random; thus, the MSB after masking is random. The random-ized
MSBs are then used in watermark generation and detection in
our scheme.
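A small Python sketch of this masking step is given below, with SHA-512 (bit-length ℓ = 512) as an illustrative hash and 1-based bit positions as in the text.

    import hashlib

    def mask_bit(K: str, pk: str, j: int) -> int:
        # Mask bit for the MSB of the j-th attribute of the tuple with primary key pk.
        # The j-th bit is taken from h^k(K|pk), where (k-1)*L < j <= k*L and
        # L is the bit-length of the hash output; SHA-512 (L = 512) is an illustrative choice.
        L = 512
        k = (j - 1) // L + 1                       # which hash iteration covers position j
        digest = (K + "|" + pk).encode()
        for _ in range(k):
            digest = hashlib.sha512(digest).digest()
        bits = bin(int.from_bytes(digest, "big"))[2:].zfill(L)
        return int(bits[(j - 1) - (k - 1) * L])    # position within the k-th hash value

    def randomized_msb(msb_value: int, K: str, pk: str, j: int) -> int:
        # XOR the raw MSB with its pseudo-random mask bit.
        return msb_value ^ mask_bit(K, pk, j)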
2.4 Discussion on Relations without Primary
Keys
Most watermarking schemes (e.g., [1, 20, 26, 2]) for relational
databases, including ours, depend critically on the primary key attribute
in the watermarking process. In the case that there is no
primary key attribute, or that the primary key attribute is destroyed
in malicious attacks, one can turn to other attributes and construct
a virtual primary key that will be used instead of the primary key in
the watermarking process. The virtual primary key is constructed
by combining the most significant bits of some selected attributes.
The actual attributes that are used to construct the virtual primary
key differ from tuple to tuple, and the selection of the attributes is
based on a key that could be the watermark key in the context of
this paper. The reader is referred to [19] for more details on the
construction of a virtual primary key.
Since the virtual primary key is constructed from the MSBs of
selected attributes, it is difficult to destroy the virtual primary key
through value modification or attribute deletion. However, unlike
a real primary key, the virtual primary key may not be unique for
each tuple; consequently, there could be multiple tuples in both R
and W sharing the same value of the primary key. In watermark
detection, the exact mapping between pairs of these tuples needs
to be recovered (see line 4 in algorithm 2). This can be done as
follows. For each tuple r ∈ R′ with primary key r.P, compute a tuple t the same way as in watermark generation, then choose a tuple t′ ∈ W such that t′ is the closest (e.g., in terms of Hamming distance) to t among the multiple tuples in W that share the same primary key r.P. The number of tuples sharing the same primary key value (i.e., the severity of the duplicate problem) can be minimized, as shown in the above-mentioned work [19].
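A minimal sketch of this closest-tuple matching, assuming the same bit-list layout as in the earlier sketches, is:

    def closest_watermark_tuple(t, candidates):
        # Among watermark tuples sharing the same (virtual) primary key,
        # pick the one closest to t in Hamming distance over the gamma bits.
        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))
        return min(candidates, key=lambda w: hamming(t, w))

    # Example: t = ['1','0','1'], candidates = [['1','1','1'], ['0','1','0']]
    # -> returns ['1','1','1'] (Hamming distance 1 versus 3).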
PUBLIC OWNERSHIP PROOF
We now investigate how to publicly prove ownership as many
times as necessary. If the watermark key K is kept secret with the
owner, the ownership proof can be done secretly; however, it can
be done only once in public since the key has to be revealed to the
public during this process.
The problem of public ownership proof was originally raised in
the multimedia context [15] (see section 5 for details); it has not
been studied in the literature of database watermarking. We note
that the requirements for watermarking relational data are different
from those for watermarking multimedia data. The former must be
robust against typical database alterations or attacks such as tuple
insertion, deletion, and value modification, while the latter should
be robust against multimedia operations such as compression and
transformation. An additional requirement for watermarking relational
data is that a watermarked relation should be updated easily
and incrementally.
Public ownership proof in our scheme is achieved by combining
watermark detection with a certificate.
3.1 Watermark Certificate
DEFINITION 3.1. A watermark certificate C of relation R is a tuple ⟨ID, K, h(W), h(R), T, DB-CA, Sig⟩, where ID is the identity of the owner of R, K is the owner's watermark key, W is the public watermark, T is the validity information, and DB-CA is the trusted authority who signs the certificate by generating a signature Sig.
Similar to the identity certificate [16] in PKI (or attribute certificate
[10] in PMI), which strongly binds a public key (a set of
attributes) to its owner with a validity period, the watermark certificate
strongly binds a watermark key, a watermark, and a DB
relation to its owner's ID with validity information. The validity
information is a triple T = ⟨T_origin, T_start, T_end⟩ indicating the original time T_origin when the DB relation is first certified, the starting time T_start, and the ending time T_end of this certificate in the current binding. When the DB relation is certified for the first time, T_origin should be the same as T_start. Compared with the identity certificate or attribute certificate, the watermark certificate not only has a validity period defined by T_start and T_end, but also contains the original time T_origin. The original time will be useful in thwarting possible attacks that confuse ownership proof.
A comparison of the watermark certificate with the traditional
identity certificate is illustrated in Figure 1. The two kinds of certificates
share a similar structure except that the public key information
in the identity certificate is replaced by the watermark key,
watermark hash, and database hash in the watermark certificate.
In traditional identity certificate, the subject's public key is paired
with a private key known only to the subject. In the case of damage
or loss of the private key (e.g., due to collision attacks), the identity
certificate needs to be revoked before the expiration of the certificate. In the watermark certificate, since there is no private key associated with the public watermark key, it seems that there is no need
Figure 1: Relation between watermark and identity certificate. An identity certificate contains version, serial number, signature algorithm, issuer, validity period, subject, subject public key info, and signature; a watermark certificate contains version, serial number, signature algorithm, DB-CA, validity info T, DB owner ID, watermark key K, watermark hash h(W), DB hash h(R), and signature Sig.
of certificate revocation. Nonetheless, certificate revocation and recertification
may be needed in the case of identity change, ownership
change, DB-CA signature compromise, and database update.
The role of DB-CA is similar to that of the traditional CA in PKI
in terms of authentication of an applicant's identity. The differences
are: (i) it binds the applicant's ID to the watermark key, watermark,
and watermarked data; and (ii) it confirms the original time when
the watermarked data was first certified. The original time is especially
useful in the case of recertification so as to thwart false
claims of ownership by a pirate. This is addressed in the following
subsection.
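To make the certificate contents concrete, here is a minimal Python sketch of the tuple from Definition 3.1 as a data class; the hash choice (SHA-512) and the opaque signature field are illustrative assumptions, not a prescribed encoding.

    import hashlib
    from dataclasses import dataclass

    def h(data: bytes) -> str:
        # Cryptographic hash used for h(W) and h(R); SHA-512 is an illustrative choice.
        return hashlib.sha512(data).hexdigest()

    @dataclass
    class WatermarkCertificate:
        owner_id: str          # ID: identity of the owner of R
        watermark_key: str     # K: public watermark key
        watermark_hash: str    # h(W)
        db_hash: str           # h(R)
        validity: tuple        # T = (T_origin, T_start, T_end)
        db_ca: str             # issuing DB-CA
        signature: bytes       # Sig: DB-CA's signature over the fields above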
3.2 Public Verifiability
While the watermark detection process can be performed by anyone
, voluntarily or in delegation, who has access to the public watermark
and watermark key, the ownership is proven by further
checking the corresponding watermark certificate. This involves
checking (i) if the watermark certificate has been revoked (see the
next subsection for details); (ii) if the watermark key and (the hash
of) the watermark used in watermark detection are the same as
those listed in the watermark certificate; (iii) if the signature is correctly
signed by the DB-CA stipulated in the watermark certificate
(this is done in traditional PKI and may involve checking the DB-CA's
public key certificate, a chain of CA's certificates, and a certificate
revocation list); and (iv) the similarity of suspicious data R′ to the original data R as published by the owner of the watermark certificate. If all are proven, the ownership of the suspicious data
is publicly claimed to belong to the owner of the watermark certificate
for the time period stipulated in the certificate. The original
time that the data was certified is also indicated in the certificate.
The last requirement is optional, depending on whether data
frame-up attack
is of concern. In a data frame-up attack, an attacker
modifies the watermarked data as much as possible while
leaving the watermarked bits (i.e., MSBs of selected values) untouched
. Note that in our scheme, an attacker can pinpoint the watermarked
bits since the watermark key, watermark, and watermark
algorithm are all public. Since the ownership is publicly verifiable,
such "frame-up" data may cause confusion and damage to the legitimate
ownership.
The data frame-up attack has not been discussed before, even
though it is also possible in secret key based schemes. For example
, in Agrawal and Kiernan's watermarking scheme [1], the watermark information is embedded in one of the least significant bits of some selected values. A data frame-up attack is possible if an attacker modifies all significant bits except the last few least significant bits in each value. However, this attack is less serious in secret key
based schemes because the owner of watermarked data may choose
not to claim the ownership for "frame-up" data. In our scheme, this
attack is thwarted by requiring that the suspicious data is similar
enough to the original data (the authenticity of the original data R
can be checked with h(R) in the watermark certificate).
The rationale is that when an attacker forges low quality data R′ with the MSBs given in the public watermark W, such R′ will be significantly different from the original R due to its low quality. The similarity between R and R′ may be measured, for example, by the portion of significant bits that match for each pair of values in R and R′ whose watermarked MSBs match. The similarity may
also be measured in terms of the usefulness of data, such as the
difference of individual values, means, and variances.
3.3 Certificate Management
Once publicly proven based on a valid watermark certificate,
the ownership of watermarked data is established for the owner
of the certificate. The current ownership is valid for a time period
[T_start, T_end] stipulated in the certificate. The original time T_origin when the data was first certified is also indicated in the certificate.
The use of original time is to thwart additive attack. Additive attack
is a common type of attacks to watermarking schemes in which
an attacker simply generates another watermark for watermarked
data so as to confuse ownership proof. The additional watermark
can be generated using a watermark key that is derived from the
attacker's ID. It is also possible for the attacker to obtain a valid
watermark certificate for this additional watermark.
We solve this problem by comparing the original time T_origin in the certificate of the real owner with the original time T′_origin in the certificate of the attacker. We assume that the owner of data will not make the data available to potential attackers unless the data is watermarked and a valid watermark certificate is obtained. Therefore, one always has T_origin < T′_origin, by which the legitimate ownership can be proven in the case of an ownership dispute. After
this, the attacker's valid certificate should be officially revoked.
Besides revocation upon losing an ownership dispute, a certificate
may be revoked before its expiration based on the following
reasons: (1) identity change; (2) ownership change; (3) validity
period change; (4) DB-CA compromise; and (5) database update.
When the owner of a valid certificate changes his identity, he needs
to revoke the certificate and, at the same time, apply for a new
certificate to replace the old one. Upon the owner's request, the
DB-CA will grant a new validity period [T_start, T_end] according to its policy while keeping the original time T_origin unchanged in
the new certificate. The case of ownership change is handled in a
similar manner, except that the DB-CA needs to authenticate the
new owner and ensure the ownership change is granted by the old
owner. In both cases, a new watermark key and a new watermark
may be derived and included in the new certificate.
Sometimes the owner wants to prolong or shorten the validity period
of his certificate. In this case, the watermark certificate needs
to be re-certified with a new validity period. The watermark key or
watermark does not need to change in the recertification process.
In our scheme, the DB-CA is trusted, similar to the CA in traditional
PKI. A traditional PKI certificate would need to be revoked
for a variety of reasons, including key compromise and CA compromise
. Since a watermark key is not paired with a private key
in our scheme, there is no scenario of watermark key compromise.
However, there is a possibility of DB-CA compromise if any of the
following happens: (i) DB-CA's signature is no longer safe (e.g.,
due to advanced collision attacks); (ii) DB-CA loses its signature
key; (iii) DB-CA ceases its operation or business; or (iv) any CA
who certifies the DB-CA's public key is compromised (the public
key is used to verify the DB-CA's signature in our scheme). In
the case of DB-CA compromise, all related watermark certificates
must be revoked and re-examined by a valid DB-CA and recertified
with new validity periods but unchanged original times.
Due to the similarity between the watermark certificate and the
traditional identity certificate, many existing standards and mechanisms
regarding certificate management, such as certification path
constraints and CRL distribution points, can be borrowed from PKI
with appropriate adaptations. For simplicity and convenience, the
functionality of a DB-CA may be performed by a CA in traditional
PKI.
3.4 Efficient Revocation of Watermark Certificate
Micali proposed an efficient public key certificate revocation scheme
[23] called CRS (for certificate revocation status). Compared with
the CRL-based solution, CRS substantially reduces the cost of management
of certificates in traditional PKI. This scheme can easily
be adapted to our scheme for efficient revocation of watermark certificates
.
As pointed out in [23], the costs of running a PKI are staggering
and most of the costs are due to CRL transmission. The major
reason is that each time a user queries the status of a single certificate
, he needs to query a directory, an agent receiving certificate
information from a CA and handling user queries about it, and the
directory sends him the whole CRL list that has been most recently
signed by the CA. Since the CRL list tends to be very long and
transmitted very often, the CRL solution is extremely expensive. In
CRS, however, the directory responds to a user's query by sending
a 100-bit value only, instead of the whole CRL. The 100-bit value
is employed by the user to verify whether the relative certificate is
valid or has been revoked.
In our watermarking scheme, the DB-CA selects a secret 100-bit value Y_0 for a watermark certificate, and recursively applies on it a one-way function F 365 times, assuming that the validity period of the certificate is a normal year. The DB-CA then includes the 100-bit value Y_365 = F^365(Y_0) in the watermark certificate C = ⟨ID, K, h(W), h(R), T, DB-CA, Y_365, Sig⟩.
Assume that the current day is the i-th day in the validity period of the certificate. The DB-CA generates a 100-bit value Y_{365−i} = F^{365−i}(Y_0) and gets it published through the directory. It is the DB owner's responsibility to obtain Y_{365−i} from the directory and publish it together with the watermark certificate C. Anyone can verify the validity of the certificate by checking whether F^i(Y_{365−i}) = Y_365, where i is the number of days since the start of the validity period (i.e., T_start in T). If this is the case, the certificate is valid; otherwise, it has been revoked before the i-th day, in which case the DB-CA did not get Y_{365−i} published. Note that Y_{365−i} cannot be computed from previously released Y_{365−j} (j < i) due to the one-way property of function F.
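A minimal Python sketch of this hash-chain mechanism follows; SHA-256 truncated to 100 bits stands in for the one-way function F, which is an illustrative choice only.

    import hashlib

    def F(y: bytes) -> bytes:
        # One-way function; SHA-256 truncated to 100 bits (13 bytes) as an illustration.
        return hashlib.sha256(y).digest()[:13]

    def build_chain(y0: bytes, days: int = 365):
        # Return [Y_0, Y_1, ..., Y_days] where Y_k = F^k(Y_0).
        chain = [y0]
        for _ in range(days):
            chain.append(F(chain[-1]))
        return chain

    def verify(y_published: bytes, i: int, y_365: bytes) -> bool:
        # Certificate is still valid on day i iff F^i(Y_{365-i}) == Y_365.
        y = y_published
        for _ in range(i):
            y = F(y)
        return y == y_365

    # Usage: chain = build_chain(secret_y0); the value chain[365 - i] is published on day i,
    # and anyone checks verify(chain[365 - i], i, chain[365]).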
In this scheme, the DB owner needs to query the directory and update Y_{365−i} every day. To make the transition from Y_{365−i} to Y_{364−i} smooth, one more hour may be granted for the validity period of Y_{365−i} (i.e., 25 hours). To avoid high query load at certain hours, the validity period of Y_{365−i} should start at a different time each day for a different certificate. A policy stating this may also be included in the watermark certificate.
Note that Micali's original scheme requires a CA to (i) sign another 100-bit value besides Y_{365−i} to explicitly indicate a certificate being revoked; and (ii) sign an updated list indicating all and only the serial numbers of issued and not-yet-expired certificates. The signed value and list are sent to the directory so that any user query can be answered by the directory. In our scheme, it is the DB owner's responsibility (for his own benefit, namely anti-piracy) to query the directory and publish the updated Y_{365−i} online together with the DB, watermark, and certificate. A user who wants to verify the certificate will obtain the validity information from the owner rather than from the directory. This separation of duty simplifies the scheme and clarifies the responsibility of the DB owner.
It is relatively straightforward to analyze the communication cost
of our scheme as compared with the CRL based solution. The
analysis is very similar to that given in [23] for comparing CRS
with CRL (CRS is about 900 times cheaper than CRL in terms of
communication cost with the Federal PKI estimates). We omit this
analysis due to space limitations.
3.5 Incremental Updatability
The proposed scheme is also designed to facilitate incremental
database update. In relational database systems, database update
has been tailored to tuple operations (tuple insertion, deletion, and
modification), where each tuple is uniquely identified by its primary
key. In our scheme, both watermark generation and detection
are tuple oriented; each tuple is processed independently of other
tuples, based on its primary key.
The watermark is updated as follows. If a set of new tuples is
inserted into the watermarked data, the watermark generation algorithm
1 can be performed on those new tuples only. As a result, a
set of corresponding new tuples is generated and inserted into the
watermark. If a set of tuples is deleted from the watermarked data,
the corresponding tuples with the same primary keys are simply
deleted from the watermark. In the case that a set of values is modified
, only the related tuples need to be updated in the watermark.
This can be done in a similar manner as in the tuple insertion case.
Note that if a modified value does not contribute any MSB to the
watermark, then no update is needed for that value.
The update of the watermark certificate follows the update of the
watermark. To update a watermark certificate, the owner of watermarked
data needs to authenticate himself to a DB-CA, revoke the
old certificate, and get a new certificate for the updated DB and watermark
. The new certificate may have an updated validity period,
but the original time will not be altered. As this process involves
interactions with a DB-CA, it may not be efficient if executed frequently
. Fortunately, our scheme is very robust against database
update, as will be indicated in the next section. Therefore, the update
of the watermark and watermark certificate may lag behind the
update of the watermarked data; it can be done periodically after a
batch of data updates. The lag-behind watermark and certificate
can still be used for checking the ownership of the updated data as
long as the updates do not severely degrade the robustness of our
scheme.
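A short Python sketch of this incremental maintenance, reusing the gen_w sketch from Section 2.1 and assuming the same primary-key-to-attributes layout, is:

    def update_watermark(W, K: str, gamma: int, inserted=None, deleted=None, modified=None):
        # Incrementally maintain the public watermark W (primary key -> bit list).
        if inserted:
            W.update(gen_w(inserted, K, gamma))    # new tuples: generate their watermark tuples
        if deleted:
            for pk in deleted:
                W.pop(pk, None)                    # deleted tuples: drop matching watermark tuples
        if modified:
            W.update(gen_w(modified, K, gamma))    # modified tuples: regenerate affected entries
        return W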
3.6 Discussion
Like traditional PKI, the certificate revocation in our scheme is
handled only by the trusted party (i.e., the DB-CA). An alternative
solution is to let the DB owner himself handle the certificate
revocation. After the DB-CA signs a watermark certificate C =
⟨ID, K, h(W), h(R), T, DB-CA, Y_365, Sig⟩, where Y_365 = F^365(Y_0), it gives Y_0 to the DB owner through a secure channel. The DB owner keeps Y_0 secret. On the i-th day in the validity period of the certificate, the DB owner himself can generate and publish Y_{365−i} = F^{365−i}(Y_0), based on which anyone can verify
the validity of the certificate. This solution further simplifies our
scheme in the sense that the DB-CA does not need to generate Y-values
for all valid certificates each day, and that all DB owners do
not need to query a directory to update the Y-values. The communication
cost is thus reduced substantially. Whenever the DB owner
deems it appropriate (e.g., after database is updated), he can refuse
to release new Y-values to the public, thus revoking the certificate
in a de facto manner, and apply a new certificate if necessary. This
solution works well for database updates because it is to the benefit
of the DB owner to maintain the certificate status. However, it
may not work well in the case of DB-CA compromise or loss of
Y_0, but this fortunately would not happen very often as compared
with database updates. It is possible to develop a hybrid solution
that combines the merits of both DB-owner-handled revocation and
CA-handled revocation.
ROBUSTNESS AND OVERHEAD
For a watermarking scheme to be useful, it must be robust against
typical attacks and be efficient in practice. In this section, we first
present a quantitative model for the robustness of our watermarking
scheme. We analyze the robustness of our scheme by the same
method (i.e., binomial probability) as was used in [1]. We then investigate
the overhead of our watermarking scheme. We also study
the tradeoffs between the robustness and overhead in terms of the watermark generation parameter γ and the watermark detection parameter τ.
4.1 Survival Binomial Probability
Consider n Bernoulli trials of an event, with probability p of success and q = 1 − p of failure in any trial. Let P_p(k; n) be the probability of obtaining exactly k successes out of n Bernoulli trials (i.e., the discrete probability of the binomial distribution). Then

P_p(k; n) = \binom{n}{k} p^k q^{n−k}    (2)

\binom{n}{k} = \frac{n!}{k!(n−k)!}    (3)

Let C_p(k; n) denote the probability of having more than k successes in n Bernoulli trials; that is, C_p(k; n) is the survival binomial probability. According to the standard analysis of the binomial distribution, we have

C_p(k; n) = \sum_{i=k+1}^{n} P_p(i; n)    (4)

In many widely available computation software packages such as Matlab and Mathematica, the survival binomial probability can be computed by C_p(k; n) = 1 − binocdf(k, n, p), where binocdf(k, n, p) is the binomial cumulative distribution function with parameters n and p at value k. When n is large, the binomial distribution can be approximated by a normal distribution with mean np and standard deviation √(npq), evaluated at value k + 0.5, where 0.5 is the correction of continuity (for p = 0.5, the normal is a good approximation when n is as low as 10; see chapter 7.6 in [31]). Thus, C_p(k; n) = 1 − normcdf(k + 0.5, np, √(npq)), where normcdf is the normal cumulative distribution function.
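The survival binomial probability can also be evaluated directly with SciPy's binomial survival function; a small sketch (SciPy assumed available) follows.

    from scipy.stats import binom

    def survival(k: int, n: int, p: float) -> float:
        # C_p(k; n) = P[more than k successes in n Bernoulli(p) trials]
        #           = 1 - binocdf(k, n, p)
        return binom.sf(k, n, p)

    # Example: C_{1/2}(55; 100) is the chance of more than 55 heads in 100 fair coin flips.
    print(survival(55, 100, 0.5))   # roughly 0.136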
4.2 Detecting Non-Watermarked Data
First consider the robustness of our scheme in terms of false hit,
which is the probability of a valid watermark being detected from
non-watermarked data. The lower the false hit, the better the robustness
. We show that the false hit is under control in our scheme
and can be made highly improbable.
Recall that in watermark detection, a collection of MSBs are
located in suspicious data and compared with the corresponding
bits recorded in the public watermark. When the watermark detection
is applied to non-watermarked data, each MSB in data has the
same probability 1/2 to match or not to match the corresponding
bit in the watermark. Assume that the non-watermarked data has
the same number of tuples (and the same primary keys) as the
original data. Let ω = γη be the total number of bits in the watermark, where γ is the watermark generation parameter. The false hit is the probability that at least a τ portion of the ω bits can be detected from the non-watermarked data by sheer chance, where τ is the watermark detection parameter. The false hit H can be written as

H = C_{1/2}(τω, ω) = C_{1/2}(τγη, γη)    (5)

Figure 2: False hit H as a function of γ (η = 1000; curves for τ = 0.51, 0.52, 0.53, 0.54, 0.55).

Figure 3: False hit H as a function of η (γ = 5; curves for τ = 0.51, 0.52, 0.53, 0.54, 0.55).

Figure 2 shows the change of the false hit when the watermark insertion parameter γ increases from 1 to 10 for fixed η = 1000 and various values of the watermark detection parameter τ. The figure illustrates that the false hit is monotonic decreasing with both the watermark insertion parameter γ and the detection parameter τ. On the one hand, the larger the insertion parameter γ, the more MSBs are included in the watermark and the smaller the false hit. On the other hand, the false hit can be decreased by increasing the detection parameter τ, which is the least fraction of watermark bits required for ownership assertion.
Figure 3 illustrates the trend of the false hit when the number of tuples η is scaled up from 1000 to 10,000. The trend is that the false hit is monotonic decreasing with η. This trend is linear, which is similar to that of increasing γ, as indicated in Figure 2. A conclusion drawn from these two figures is that with reasonably large values of γ, τ, and/or η, the false hit can be made extremely low.
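A sketch that evaluates the false hit of Equation 5 for sample parameters, using SciPy's survival function as above, is given below; flooring τω so that "more than τω matches" is counted exactly is an implementation detail assumed here.

    import math
    from scipy.stats import binom

    def false_hit(gamma: int, eta: int, tau: float) -> float:
        # H = C_{1/2}(tau * omega, omega) with omega = gamma * eta watermark bits.
        omega = gamma * eta
        k = math.floor(tau * omega)     # detection succeeds only if matches exceed tau*omega
        return binom.sf(k, omega, 0.5)

    # Example: gamma = 5, eta = 1000, tau = 0.55 gives a vanishingly small false hit.
    print(false_hit(5, 1000, 0.55))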
4.3 Detecting Watermarked Data
We now consider the robustness of our scheme in terms of false
miss
, which is the probability of not detecting a valid watermark
from watermarked data that has been modified in typical attacks.
The robustness can also be measured in terms of the error introduced
by typical attacks. The less the false miss, or the larger the
error introduced by typical attacks, the better the robustness. The
typical attacks include database update, selective value modification
, and suppression. Other typical attacks include the data frame-up
attack and the additive attack which have been addressed in a
previous section.
4.3.1 Typical Database Update
Typical database update includes tuple insertion, tuple deletion,
attribute deletion, and value modification. For tuple deletion and
attribute deletion, the MSBs in the deleted tuples or attributes will
not be detected in watermark detection; however, the MSBs in other
tuples or attributes will not be affected. Therefore, all detected
MSBs will match their counterparts in the public watermark, and
the false miss is zero.
Though the deletion of tuples or attributes will not affect the false miss, it will make the false hit worse. The more tuples or attributes are deleted, the larger the false hit, as indicated in Section 4.2. The effect on the false hit of deleting tuples is equivalent to that of decreasing η as shown in Figure 3, while the effect of deleting attributes is equivalent to decreasing γ proportionally as shown in Figure 2.
Since the watermark detection is primary key based, a newly inserted tuple should have a valid primary key value; otherwise, there is no corresponding tuple in the public watermark. We thus consider tuple insertion to be "mix-and-match" [1]; that is, an attacker inserts ζ new tuples to replace ζ watermarked tuples with their primary key values unchanged. For watermark detection to return a false answer, at least ω − τω MSBs in those newly added tuples (which consist of γζ MSBs) must not match their counterparts in the public watermark (which consists of ω bits). Therefore, the false miss M for inserting ζ tuples in mix-and-match can be written as

M = C_{1/2}(ω − τω − 1, γζ)    (6)

Figure 4: False miss (tuple insertion) as a function of ζ/η (γ = 5, η = 1000; curves for τ = 0.51 to 0.55).

Figures 4, 5, and 6 show the false miss in the case of tuple insertion. The default parameters in these figures are ζ/η = 90% (i.e.,
90% of the new tuples are inserted into the data to replace the watermarked tuples), γ = 5, and η = 1000. A general trend shown in these figures is that the false miss is monotonic increasing with the watermark detection parameter τ. This trend is opposite to that of the false hit, which is monotonic decreasing with τ as indicated in Figures 2 and 3. Therefore, there is a tradeoff between false hit and false miss with respect to τ.

Figure 5: False miss (tuple insertion) as a function of γ (η = 1000, ζ/η = 90%; curves for τ = 0.51 to 0.55).

Figure 6: False miss (tuple insertion) as a function of η (γ = 5, ζ/η = 90%; curves for τ = 0.51 to 0.55).

Figure 4 shows that even if 80% of watermarked tuples are replaced with new tuples, the false miss is as low as 10^-15 for all τ values greater than or equal to 51%. The false miss is close to one only if more than 90% of watermarked tuples are replaced in this figure.
Figures 5 and 6 illustrate that the false miss is monotonic decreasing with γ and η, which is similar to the trend of the false hit as indicated in Figures 2 and 3. With reasonably large γ and/or η, the false miss can be made extremely low.
For value modification, we assume that the modified values are randomly chosen. We leave the selective modification targeted on watermarked values to the next subsection. Recall that there are ν attributes in the original data, of which γ attributes are watermarked for each tuple. When a random modification happens, it has probability γ/ν that a watermarked value is chosen. When a watermarked value is modified, its MSB has probability 1/2 to change (i.e., the value is modified randomly). In watermark detection, a detected MSB has probability γ/(2ν) not to match its counterpart in the public watermark. The false miss M for randomly modifying ζ values can be written as

M = C_{γ/(2ν)}(ω − τω − 1, ζ)    (7)

Figure 7: False miss (value modification) as a function of ζ/(ην) (ν = 10, γ = 5, η = 1000; curves for τ = 0.51 to 0.55).

Figure 8: False miss (value modification) as a function of γ (ν = 10, η = 1000, ζ/(ην) = 90%; curves for τ = 0.51 to 0.55).

Figure 9: False miss (value modification) as a function of η (ν = 10, γ = 5, ζ/(ην) = 90%; curves for τ = 0.51 to 0.55).

Figures 7, 8, and 9 show the false miss in the case of random value modification. The default parameters in these figures are ζ/(ην) = 90% (i.e., 90% of the values are modified randomly), ν = 10, γ = 5, and η = 1000. The general trend shown in these figures for value modification is similar to that shown in previous Figures 4, 5, and 6 for tuple insertion. The difference in calculation is due to the use of probability γ/(2ν) in Equation 7 instead of probability 1/2 in Equation 6. Figure 7 shows that even if 80% of values are modified randomly, which would make the data less useful, the false miss rate in detection is less than 10^-10 in our computation.
4.3.2 Selective Value Modification and Suppression
Since both the watermark key and the watermark are public in
our scheme, an attacker can pinpoint the MSBs of watermarked
values. A simple attack would be to flip some of those MSBs so
that the watermark detection will detect no match. Assuming that
watermarked MSBs are flipped in selective value modification, the
false miss M
can be written as
M
=
1 if-0
otherwise
(8)
If no less than - watermarked MSBs are flipped, the watermarked
data will no longer be detected. The robustness of our
scheme can then be measured in terms of the error introduced by
this attack. The larger the error introduced for defeating the watermark
detection (i.e., achieving M
= 1), the better the robustness.
Recall that any change to an MSB would introduce intolerable
error to the related data value. To defeat the watermark detection,
no less than - MSBs have to be flipped; this would introduce
intolerable errors to no less than - data values. We
thus measure the robustness in terms of failure error rate, which is
the least fraction F of total data values that need to be intolerably
modified for defeating the watermark detection. This failure error
rate can be written as
F = (1 - )    (9)
A larger failure error rate (or better robustness) can be achieved
by increasing (watermark generation parameter) or decreasing
(watermark detection parameter). There is a tradeoff between
the robustness of our scheme and the size of the public watermark
(which has binary attributes). To achieve the best robustness
in terms of thwarting the selective modification attacks, one may
choose = and 0.5. (However, this would increase the
false hit as indicated in Section 4.2.) In this extreme case, approximately
50% of data values have to be intolerably modified so as to
defeat the watermark detection.
To avoid the intolerable error, an attacker may choose to suppress
some watermarked values rather than flipping their MSBs. Since
this attack causes no mismatch in watermark detection, the false
miss
is zero. However, it will increase the false hit because those
MSBs will be missed in watermark detection. It is easy to see that the effect of suppressing MSBs on the false hit is equivalent to decreasing the total number of MSBs by in the computation of the false hit. Thus, the false hit formula (see Section 4.2) changes from C_{1/2}(, ) to C_{1/2}(( - ), - ) for selective suppression of watermarked values.
Figure 10 shows the influence of selective value suppression on the false hit for fixed = 5, = 1000, and various from 0.51 to 0.55. In the figure, we change the rate /() (the percentage of watermarked bits that are suppressed) from 0% to 99%. Even if the rate /() increases up to 50%, the false hit is still below 15.4% for = 0.51, below 2.2% for = 0.52, below 0.13% for = 0.53, below 3 × 10^-5 for = 0.54, and below 2.6 × 10^-7 for = 0.55.
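Because suppression only shrinks the number of MSBs entering the computation, the same kind of binomial tail can be reused with the reduced count. A minimal sketch, again with placeholder names and assuming the per-bit match probability of 1/2 used for the false hit:

```python
from math import comb

def false_hit_after_suppression(n_total, s, tau):
    """False hit computed over the MSBs that remain after s of them are suppressed.
    Placeholder names; a per-bit match probability of 1/2 is assumed, as in Section 4.2."""
    n = n_total - s                       # suppression removes s MSBs from the computation
    k = int(tau * n) + 1                  # matches needed to exceed the detection threshold
    return sum(comb(n, i) * 0.5**n for i in range(k, n + 1))

for s in (0, 250, 500):                   # number of suppressed watermarked MSBs (illustrative)
    print(s, false_hit_after_suppression(1000, s, 0.51))
```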
4.4 Overhead
[Figure 10: False hit (value suppression) as a function of /() (%); x-axis 0-100%, y-axis false hit H from 10^-15 to 10^0; parameters = 5, = 1000; curves for = 0.51 to 0.55]
We now analyze the time and space overhead for both watermark generation and watermark detection. Throughout the analysis, we
ignore the IO cost (i.e., reading and writing tuples). Table 2 describes
the symbols that will be used in this section.
Consider watermark generation. For each of the tuples to be processed, a random sequence generator G is first seeded, then MSBs are determined based on random numbers generated by G. The MSBs are assigned to the corresponding attributes in the public watermark. For each MSB to be determined, one mod operation is involved and one attribute is deleted from the copy of the related tuple. The memory requirement for processing a tuple is to keep the copy of the tuple, the MSBs, and the watermark key in concatenation with the tuple's primary key. Therefore, the time overhead t_genW and space overhead m_genW for watermark generation are
t_genW = t_seed + (t_genS + t_mod + t_bit + t_delA) = O()    (10)
m_genW = m_tuple + + m_wkey = O()    (11)
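For concreteness, the per-tuple generation loop just described might look roughly as follows. This is only a sketch of the steps in the text (seed a generator with the watermark key and the tuple's primary key, select attributes via mod operations, record their MSBs, delete each selected attribute from the tuple copy); the helper names, the fixed-width MSB convention, and the use of Python's hashlib/random modules are our assumptions, not the paper's actual implementation.

```python
import hashlib
import random

def most_significant_bit(value, bits=16):
    # Illustrative only: treat values as unsigned 16-bit integers and take the top bit.
    # The paper defines MSBs with respect to the tolerable error range of each value.
    return (int(value) >> (bits - 1)) & 1

def generate_watermark_for_tuple(watermark_key, primary_key, attrs, gamma):
    """Return the gamma MSBs forming this tuple's row in the public watermark W."""
    seed = hashlib.sha1((watermark_key + str(primary_key)).encode()).hexdigest()
    rng = random.Random(seed)             # t_seed: seed generator with key || primary key
    remaining = list(attrs)               # working copy of the tuple's attributes
    msbs = []
    for _ in range(gamma):
        r = rng.randrange(2**32)          # t_genS: generate a random number
        idx = r % len(remaining)          # t_mod: mod operation selects an attribute
        msbs.append(most_significant_bit(remaining[idx]))   # t_bit: record the MSB
        del remaining[idx]                # t_delA: delete the attribute from the copy
    return msbs

# Example: one tuple with 10 attributes, 5 MSBs recorded per tuple (illustrative values).
print(generate_watermark_for_tuple("public-key", 42, [41000, 5, 230, 52000, 99, 3, 61000, 12, 7, 450], 5))
```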
In watermark detection, the time and space overheads are the
same as in watermark generation except for the cost of processing
the count information. Let t_if denote the cost of the last operation "if match count/total count > ". The time overhead t_detW and space overhead m_detW for watermark detection can be written as
t_detW = 2t_count + t_seed + (t_genS + t_mod + t_bit + t_delA + 2t_count) + t_if = O()    (12)
m_detW = 2m_count + m_tuple + + m_wkey = O()    (13)
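The detection side, sketched below in the same spirit (reusing generate_watermark_for_tuple from the previous sketch), recomputes the MSB selection for each suspicious tuple, maintains the match and total counts, and applies the final threshold test quoted above; every name here is again an illustrative assumption.

```python
def detect_watermark(watermark_key, suspicious_tuples, public_watermark, gamma, tau):
    """suspicious_tuples: {primary_key: [attribute values]};
    public_watermark:  {primary_key: [gamma MSBs]} as produced at generation time."""
    match_count = total_count = 0                     # the two counts kept during detection
    for pk, attrs in suspicious_tuples.items():
        expected = public_watermark.get(pk)
        if expected is None:                          # tuple not covered by the public watermark
            continue
        detected = generate_watermark_for_tuple(watermark_key, pk, attrs, gamma)
        for d, e in zip(detected, expected):
            total_count += 1
            match_count += (d == e)
    # The last operation from the text: "if match count/total count > tau"
    return total_count > 0 and match_count / total_count > tau
```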
The generated watermark W will be stored on disk. The disk
storage requirement m_disk is thus
m_disk = |W| = m_pkey + = O()    (14)
4.5 Tradeoffs
In our watermark scheme, we have two parameters: watermark
generation parameter and watermark detection parameter . The
two parameters can be used to balance between the robustness and
the overhead of our scheme. Table 3 summarizes the tradeoffs that
can be made when choosing the two parameters.
The watermark generation parameter is used to balance between
robustness and overhead. The larger the , the better the
robustness of our scheme and the worse the time and space overhead.
Table 2: Symbols used in the analysis of overhead
t_seed    cost of seeding random sequence generator S with public key and a tuple's primary key
t_genS    cost of generating a random number from S
t_mod     cost of mod operation
t_delA    cost of deleting an attribute from a copy of a tuple
t_bit     cost of assigning/comparing a bit value to/with the public watermark
t_count   cost of assigning/updating a count in watermark detection
m_count   number of bits required to store a count in watermark detection
m_tuple   number of bits required to store a copy of a tuple
m_wkey    number of bits to store a watermark key
m_pkey    number of bits to store a primary key value
Table 3: Tradeoffs
parameter                                  false hit   false miss   failure error rate   robustness (summary)                              overhead (time)   overhead (space)
watermark generation parameter (larger)    better      better       better               better                                            worse             worse
watermark detection parameter (larger)     better      worse        worse                better in terms of H; worse in terms of M, F     no effect         no effect
While the watermark detection parameter has no effect on the overhead, it is used as a tradeoff between false hit, false miss, and failure error rate. Increasing will make the robustness better in terms of false hit, but worse in terms of false miss and failure error rate.
RELATED WORK
Watermarking has been extensively studied in the context of multimedia
data for the purpose of ownership protection and authentication
[7, 17, 18]. Most watermarking schemes proposed so far are
secret key based
, which require complete disclosure of the watermarking
key in watermark verification. These watermarking schemes
can be further classified as private (both the secret key and original
data are required in watermark verification), blind (only the secret
key is needed for watermark bit decoding), and semi-blind (it
requires both the secret key and watermark bit sequence in watermark
detection). Watermarking schemes can also be classified as
being robust (the watermark is hard to destroy through attacks) or fragile (the watermark is easily destroyed if the watermarked data is modified). The robust watermark may be used for ownership proof
while the fragile watermark is suitable for data authentication and
integrity check.
As database piracy increasingly becomes a serious problem, watermarking
techniques have been extended to protect the ownership
of published or distributed databases [1, 13, 28, 29, 26, 19, 20,
2]. Agrawal and Kiernan [1] first proposed a robust watermarking
scheme for database relations. Their scheme modifies a collection
of least significant bits of numerical attributes. The locations of
those least significant bits, and the values to which those bits are
modified, are all determined by a secret key. With the same secret
key, those modified values can be localized in watermark detection,
and ownership is claimed if a large portion of the detected values
are as expected.
As noted by Agrawal and Kiernan [1], database relations differ
from multimedia data in significant ways and hence require a
different class of watermarking techniques. A major difference is
that a database relation is composed of a set of tuples; each tuple
represents an independent object which can be added, deleted, and
modified frequently in either benign updates or malicious attacks.
In contrast, a multimedia object consists of a large number of bits;
portions of a multimedia object are bound together in fixed spatial
or temporal order that cannot be arbitrarily changed. It is also noted
that the frequency domain watermarking being used in the multimedia
context is not suitable for watermarking relational data. The
reason is that the error introduced in frequency domain will spread
over all attribute values (i.e., the whole "image"), which may not
be acceptable in certain database applications.
There have been other schemes proposed for watermarking relational
data. In Sion et al.'s scheme [28], an arbitrary bit is embedded
into a selected subset of numeric values by changing the distribution
of the values. The selection of the values is based on a secret
sorting. In another work, Gross-Amblard [13] designs a query-preserving
scheme which guarantees that special queries (called local
queries) can be answered up to an acceptable distortion. Recent
work also includes watermarking categorical data [26], streaming
data [29], XML data [27], and medical databases [2]. The watermarking
schemes for categorical data [26, 2] exchange pairs of
categorical values so as to embed watermark information. In this
case, there is no insignificant change and the error constraint is considered
at aggregation level (e.g., k-anonymity).
A common feature of this class of work is that a watermark is
embedded and detected based on a secret key. Without knowing
the key, an attacker is not able to locate exactly where the watermark
is embedded, nor does he destroy the embedded watermark
unless too many errors are introduced. A drawback of such a solution
is that the ownership of watermarked data can be proven only
once. After the key is revealed to the public (e.g., to the court) in
the proof, anyone knowing the key can easily locate and remove the
embedded watermark. Another common feature of these schemes
is that the watermarking process introduces errors to the underlying
data. This may severely affect database applications unless error
constraints are carefully enforced in the watermarking process. In
addition, a tradeoff between the watermarking error and the robustness
of watermarking schemes has to be made.
The concept of public key based watermark (or asymmetric watermark
) was first conceived in the multimedia context. Hachez and
Quisquater summarized the work in this area in [14]. As mentioned
in [14], one of the first ideas was proposed by Hartung and Girod
[15] for watermarking compressed video. The basic idea is to make
a part of the embedded watermark public such that a user can check
the presence of this part of watermark. However, an attacker is able
to remove this part of watermark and thus invalidate a public detector
. Another idea is to embed private key information into a host
signal and detect a correlation between the signal and a transformation
of the signal using a public key [33]. Other correlation-based
public watermarking schemes include [9, 30, 11]. However, such
watermarks can be removed by certain attacks such as a sensitivity
attack [6, 21] or confusing attack [34].
Craver and Katzenbeisser [8] used a zero knowledge protocol to
prove the presence of a watermark in a signal "without revealing the
exact location and nature of the watermark (specified by a private
key)." As in most zero knowledge protocols, the proposed scheme
requires many rounds of interactions between prover and verifier,
which may not be efficient in practice. It is also not clear how to extend
this scheme to watermarking relational databases. Because the
original watermark is not certified and because a verifier is allowed
to perform the protocol multiple times, this scheme may be subject
to oracle attack (an attacker uses a public detector repeatedly to test
modified signals so as to remove the watermark), plain-text chosen
attack (a special case of oracle attack in which the tested signals
are chosen by an attacker), or ambiguity attack (also called invertibility
attack, in which a fake watermark is discovered from the
watermarked signal). In comparison, our scheme requires no interaction
between a verifier and the owner of data, thus is immune to
both oracle attack and plain-text chosen attack. The watermark is
certified in our scheme for thwarting the ambiguity attack (which
we call additive attack in this paper). In addition, our scheme is
both efficient and robust for typical database operations.
CONCLUSION
In this paper, we proposed a public watermarking scheme for relational
databases. The scheme is unique in that it has the following
properties.
Public verifiability: Given a database relation to be published
or distributed, the owner of data uses a public watermark
key to generate a public watermark, which is a relation
with binary attributes. Anyone can use the watermark
key and the watermark to check whether a suspicious copy
of data is watermarked, and, if so, prove the ownership of
the data by checking a watermark certificate officially signed
by a trusted certificate authority, DB-CA. The watermark
certificate contains the owner's ID, the watermark key, the
hashes of both the watermark and DB relation, the first time
the relation was certified, the validity period of the current
certificate, and the DB-CA's signature. The watermark certificate
may be revoked and re-certified in the case of identity
change, ownership change, DB-CA compromise, or data
update. Therefore, the revocation status also needs to be
checked in ownership proof. To the best of our knowledge, our scheme is the only one to achieve public ownership proof in the database literature. In contrast, all existing schemes are
based on secret key, by which ownership cannot be proven
more than once in public.
Distortion free: Different from typical watermarking schemes
(e.g., [1]) for database ownership proof that hide watermark
information in data by modifying least significant bits (LSBs),
our scheme generates a public watermark from a collection
of the most significant bits (MSBs). Our scheme does not
modify any MSBs; therefore, it is distortion-free. The public
watermark is a database relation that has the same primary
key attribute as the original data, plus one or more binary
attributes to store the MSBs. Even though the MSBs are
publicly known, an attacker cannot modify them without introducing
intolerable error to the underlying data. In comparison, all previous watermarking schemes for databases
introduce some kind of distortion to the watermarked data.
They either modify LSB's for numerical data (e.g., [1, 19,
20]), or exchange values among categorical data (e.g., [26,
2]). Those schemes work well for particular types of data
only, while our scheme can be applied for any type of data
distortion-free.
Incremental updatability: Following the line of [1], each
tuple in a database relation is independently processed in
our scheme. Neither watermark generation nor detection depends
on any correlation or costly sorting among data items
as required in [28, 26, 2]. Therefore, the scheme is particularly
efficient for typical database operations, which are
mostly tuple oriented. In the case of tuple insertion, deletion,
or modification, the watermark can be easily updated by processing
those relating tuples only, with simple computation
of random sequence numbers and modulus operations. Due
to the robustness of our scheme, the update of watermark
certificate can be performed periodically after a batch of data
updates.
Robustness: Since the ownership of data is proven after the
data is published or distributed, it is crucial that our scheme
is robust against various attacks that intend to invalidate watermark
detection or ownership proof. The robustness of our
scheme is measured in terms of: (i) false hit, the probability
of detecting a valid watermark from non-watermarked
data; (ii) false miss, the probability of not detecting a valid
watermark from watermarked data due to attacks; and (iii)
failure error rate, the least portion of data that has to be intolerably
modified so as to defeat our watermark detection.
Typical database attacks considered in this paper include tuple/attribute
insertion, deletion, and random/selective value
modification/suppression. Both theoretical analysis and experimental
study show that our scheme is robust in terms
of these measures, which can be adjusted by the watermark
generation and detection parameters. We have also studied
the tradeoff between the robustness and the overhead of our
scheme. Our scheme is robust against the data frame-up attack
and additive attack that may be more perilous to public
watermarking schemes.
The major contribution of this paper is the proposal of a public
watermarking scheme that has the above properties. Though our
scheme may not necessarily supersede secret key based schemes
due to the overhead of using certificate and public watermark, we
believe that it can be applied more practically in the real world for
database ownership protection. Our future plan includes extending
our scheme to other types of data such as XML and streaming data.
REFERENCES
[1] R. Agrawal and J. Kiernan. Watermarking relational databases. In Proceedings of VLDB, pages 155-166, 2002.
[2] E. Bertino, B. C. Ooi, Y. Yang, and R. Deng. Privacy and ownership preserving of outsourced medical data. In Proceedings of IEEE International Conference on Data Engineering, pages 521-532, 2005.
[3] D. Boneh and M. Franklin. Identity-based encryption from the Weil pairing. In Proceedings of CRYPTO'2001, LNCS 2139, Springer-Verlag, pages 213-229, 2001.
[4] Coalition Against Database Piracy (CADP). Piracy is unacceptable in the information age or any other age, July 2, 2005. http://cadp.net/default.asp.
[5] C. Cocks. An identity based encryption scheme based on quadratic residues. In Cryptography and Coding - Institute of Mathematics and Its Applications International Conference on Cryptography and Coding, Proceedings of IMA 2001, LNCS 2260, pages 360-363, 2001.
[6] I. J. Cox and J. M. G. Linnartz. Public watermarks and resistance to tampering. In Proceedings of International Conference on Image Processing, pages 3-6, 1997.
[7] I. J. Cox, M. L. Miller, and J. A. Bloom. Digital Watermarking: Principles and Practice. Morgan Kaufmann, 2001.
[8] S. Craver and S. Katzenbeisser. Security analysis of public-key watermarking schemes. In SPIE Vol. 4475, Mathematics of Data/Image Coding, Compression, and Encryption IV, pages 172-182, 2001.
[9] J. J. Eggers, J. K. Su, and B. Girod. Public key watermarking by eigenvectors of linear transforms. In Proceedings of European Signal Processing Conference (EUSIPCO), 2000.
[10] S. Farrell and R. Housley. An internet attribute certificate profile for authorization, internet draft, April 2002. http://www.ietf.org/rfc/rfc3281.txt.
[11] T. Furon, I. Venturini, and P. Duhamel. A unified approach of asymmetric watermarking schemes. In SPIE Vol. 4314, Security and Watermarking of Multimedia Contents III, pages 269-279, 2001.
[12] B. Gray and J. Gorelick. Database piracy plague. The Washington Times, March 1, 2004. http://www.washingtontimes.com.
[13] D. Gross-Amblard. Query-preserving watermarking of relational databases and XML documents. In Proceedings of ACM Symposium on Principles of Database Systems (PODS), pages 191-201, 2003.
[14] G. Hachez and J. Quisquater. Which directions for asymmetric watermarking. In Proceedings of XI European Signal Processing Conference (EUSIPCO), Vol. I, pages 283-286, 2002.
[15] F. Hartung and B. Girod. Fast public-key watermarking of compressed video. In Proceedings of IEEE International Conference on Speech and Signal Processing, 1997.
[16] R. Housley, W. Ford, W. Polk, and D. Solo. Internet X.509 public key infrastructure certificate and CRL profile, July 2, 2005. http://www.ietf.org/rfc/rfc2459.txt.
[17] N. F. Johnson, Z. Duric, and S. Jajodia. Information Hiding: Steganography and Watermarking - Attacks and Countermeasures. Kluwer Publishers, 2000.
[18] S. Katzenbeisser and F. A. Petitcolas, editors. Information Hiding Techniques for Steganography and Digital Watermarking. Artech House, 2000.
[19] Y. Li, V. Swarup, and S. Jajodia. Constructing a virtual primary key for fingerprinting relational data. In Proceedings of ACM Workshop on Digital Rights Management (DRM), October 2003.
[20] Y. Li, V. Swarup, and S. Jajodia. Fingerprinting relational databases: Schemes and specialties. IEEE Transactions on Dependable and Secure Computing (TDSC), 2(1):34-45, 2005.
[21] J. M. G. Linnartz and M. van Dijk. Analysis of the sensitivity attack against electronic watermarks in images. In Proceedings of the 2nd Information Hiding Workshop, 1998.
[22] A. Menezes, P. C. van Oorschot, and S. A. Vanstone. Handbook of Applied Cryptography. CRC Press, 1997.
[23] S. Micali. Efficient certificate revocation. Technical Report TM-542b, Massachusetts Institute of Technology, Cambridge, MA, USA, 1996.
[24] B. Schneier. Applied Cryptography. John Wiley & Sons, Inc., 1996.
[25] A. Shamir. Identity-based cryptosystems and signature schemes. In Proceedings of CRYPTO'84, LNCS 196, Springer-Verlag, pages 47-53, 1984.
[26] R. Sion. Proving ownership over categorical data. In Proceedings of IEEE International Conference on Data Engineering, pages 584-596, 2004.
[27] R. Sion, M. Atallah, and S. Prabhakar. Resilient information hiding for abstract semi-structures. In Proceedings of the Workshop on Digital Watermarking, 2003.
[28] R. Sion, M. Atallah, and S. Prabhakar. Rights protection for relational data. In Proceedings of ACM SIGMOD International Conference on Management of Data, pages 98-108, 2003.
[29] R. Sion, M. Atallah, and S. Prabhakar. Resilient rights protection for sensor streams. In Proceedings of the Very Large Databases Conference, pages 732-743, 2004.
[30] J. Smith and C. Dodge. Developments in steganography. In Proceedings of 3rd International Workshop on Information Hiding, pages 77-87, 1999.
[31] G. W. Snedecor and W. G. Cochran. Statistical Methods. 8th edition, Iowa State Press, 1989.
[32] L. Vaas. Putting a stop to database piracy. eWEEK, enterprise news and reviews, September 24, 2003. http://www.eweek.com/print article/0,3048,a=107965,00.asp.
[33] R. G. van Schyndel, A. Z. Tirkel, and I. D. Svalbe. Key independent watermark detection. In Proceedings of IEEE International Conference on Multimedia Computing and Systems, Vol. 1, 1999.
[34] Y. Wu, F. Bao, and C. Xu. On the security of two public key watermarking schemes. In Proceedings of 4th IEEE Pacific-Rim Conference on Multimedia, 2003.
| public verifiability;certificate;Relational database;watermark;ownership protection |
157 | Putting Integrated Information in Context: Superimposing Conceptual Models with SPARCE | A person working with diverse information sources--with possibly different formats and information models--may recognize and wish to express conceptual structures that are not explicitly present in those sources. Rather than replicate the portions of interest and recast them into a single, combined data source, we leave base information where it is and superimpose a conceptual model that is appropriate to the task at hand. This superimposed model can be distinct from the model(s) employed by the sources in the base layer. An application that superimposes a new conceptual model over diverse sources, with varying capabilities, needs to accommodate the various types of information and differing access protocols for the base information sources. The Superimposed Pluggable Architecture for Contexts and Excerpts (SPARCE) defines a collection of architectural abstractions, placed between superimposed and base applications, to demarcate and revisit information elements inside base sources and provide access to content and context for elements inside these sources. SPARCE accommodates new base information types without altering existing superimposed applications. In this paper, we briefly introduce several superimposed applications that we have built, and describe the conceptual model each superimposes. We then focus on the use of context in superimposed applications. We describe how SPARCE supports context and excerpts. We demonstrate how SPARCE facilitates building superimposed applications by describing its use in building our two, quite diverse applications. | Introduction
When a physician prepares for rounds in a hospital intensive
care unit, she often creates a quick synopsis of important
problems, with relevant lab tests or observations,
for each patient, as shown in Figure 1. The information
is largely copied from elsewhere, e.g., from the patient
medical record, or the laboratory system. Although the
underlying data sources use various information
structures, including dictated free text, tabular results and
formatted reports, the physician may organize the
selected information items into the simple cells or groups
as shown in Figure 1 (without concern for the format or
information model of the base sources). Each row
contains information about a single patient, with the four
columns containing patient identifying information, (a
subset of) the patient's current problems, (a subset of)
recent lab results or other reports, and notes (including a
"To Do" list for the patient). While the information
elements selected for this synopsis will generally suffice
for the task at hand (patient rounds), the physician may
need to view an element (such as a problem or a lab
result) in the original source [Gorman 2000, Ash 2001].
However, this paper artefact obviously provides no means
of automatically returning to the original context of an
information element.
In an ICU, we have observed a clinician actively working
with a potentially diverse set of underlying information
sources as she prepares to visit a patient, selecting bits of
information from the various information sources, organizing
them to suit the current purpose, possibly
elaborating them with highlighting or annotation, or mixing
them with new additional information, including new
relationships among bits of information [Gorman 2000].
In our work [Delcambre 2001], we have put forth the
notion of superimposed information for use in such scenarios
. The superimposed layer contains marks, which
are encapsulated addresses, to the information elements
of interest in the base layer. More than that, the superimposed
layer may contain additional information (beyond
marks) and may be structured according to an appropriate
conceptual model. We are particularly interested in
viewing and manipulating base information using tools
appropriate for the information source (e.g., Microsoft
Word for .doc files, Adobe Acrobat for .PDF files, and an
electronic medical record system for patient data). We
have built several superimposed applications that use
conceptual models that are quite different from those of
any of the underlying base information sources.
In past work we have implemented superimposed
applications and models that rely solely on the ability of a
base application to create a mark and to return to the
marked region. In this paper, we explore the use of
excerpts and context for marks in superimposed applica-
tions. An excerpt consists of the extracted content for a
mark and the context contains additional descriptive information
(such as section heading and font
characteristics) about the marked information.
In Section 2 we present two superimposed applications
that superimpose a new conceptual model over the base
information (which is largely text documents), and make
use of excerpt and mark capabilities. In Section 3 we
describe the notion of excerpts and contexts in more
detail and provide the rationale for using middleware to
access them. The main contribution of this paper is our
architecture for building superimposed applications called
the Superimposed Pluggable Architecture for Contexts
and Excerpts (SPARCE), presented in Section 4. This
architecture makes it easy for a developer to build
superimposed applications, including those that
superimpose a conceptual model that is different from
any of the base conceptual models. The paper concludes
with a discussion of how to structure and access context,
a summary of related work, and conclusions and plans for
future work, in Sections 5, 6, and 7, respectively.
Sample Applications
We present two superimposed applications built using
SPARCE to demonstrate the ability to superimpose
different conceptual models, over the same corpus of base
information. These applications are designed for use in
the Appeals Decision Process in the Forest Services of
the US Department of Agriculture (USFS).
USFS routinely makes decisions to solve (or prevent)
problems concerning forests. The public may appeal any
USFS decision after it is announced. The appeal process
begins with a set period of time during which an appellant
can send in an appeal letter that raises one or more issues
with a USFS decision or the decision-making process. A
USFS editor processes all appeal letters pertaining to a
decision and prepares an appeal packet for a reviewing
officer. An appeal packet contains all documents a
reviewing officer might need to consult while formulating
a recommended decision about the complete set of issues
raised in the appeals. This set of documents is called the
Records, Information, and Documentation (RID) section
of the appeal packet. This section contains a RID letter
that lists the issues raised and a summary response for
each issue. An Editor synthesizes a RID letter using
documents in the RID such as the Decision Notice, the
Environmental Assessment, the Finding of No Significant
Impact (FONSI), and specialists' reports. In the RID
letter, the editor presents information from other
documents in a variety of forms such as excerpts,
summaries, and commentaries. In addition, the editor
documents the location and identity of the information
sources referenced in the RID letter.
2.1 RIDPad
Composing a RID letter requires an editor to maintain a
large working set of information. Since it is not unusual
for an editor to be charged with preparing appeal packets
for several decisions simultaneously, the editor may need
to maintain several threads of organization. Though using
documents in electronic form can be helpful, such use
does not necessarily alleviate all problems. For example,
the editor still needs to document the identity and location
of information. In using electronic documents, the editor
may have to cope with more than a dozen documents
simultaneously.
RIDPad is a superimposed application for the USFS appeal
process. A USFS editor can use this application to
collect and organize information needed to prepare a RID
letter. A RIDPad instance is a collection of items and
groups. An item is a superimposed information element
associated with a mark. It has a name and a description.
The name is user-defined and the description is the text
excerpt from the associated mark. A group is a convenient
collection of items and other groups.
Figure 2 shows a RIDPad instance with information concerning
the "Road 18 Caves" decision (made in the
Pacific Northwest Region of USFS). The instance shown
has eight items (labeled Summary, Details, Comparison
of Issues, Alternative A, Alternative B, Statement,
Details, and FONSI) in four groups (labeled
Environmental Assessment, Proposed Action, Other
Alternatives, and Decision). The group labeled
"Environmental Assessment" contains two other groups.
Figure 1: (Hand-drawn) Information summary as prepared by a resident prior to conducting
rounds in a hospital intensive care unit (used with permission)
The information in the instance shown comes from three
distinct base documents in two different base applications
. (The item labeled "Comparison of Issues" contains
an MS Excel mark; all other items contain MS Word
marks.) All items were created using base-layer support
included in the current implementation of SPARCE.
Figure 2: A RIDPad Instance
RIDPad affords many operations on items and groups. A
user can create new items and groups, and move items
between groups. The user can also rename, resize, and
change visual characteristics such as colour and font for
items and groups. With the mark associated with an item,
the user can navigate to the base layer if necessary, or
browse the mark's context from within RIDPad via the
Context Browser (as shown in Figure 3). Briefly, the
Context Browser is a superimposed application window
with information related to a mark. Figure 3 shows the
Context Browser for the item labelled "FONSI". From
the context elements listed on the left we see that this
item has both content and presentation kinds of context
elements. The browser displays the value of the selected
context element to the right. The formatted text content is
currently selected and displayed in the
browser.
Figure 3: Context of a RIDPad Item
RIDPad superimposes a simple conceptual model over
the selected base information with Group and Item as the
only model constructors. A group contains a name, size,
location, and an ID. An item contains a name,
description, size, location, and an ID. Items can occur
within a Group and Groups can be nested within a Group.
Figure 4 shows the model as a UML Class Diagram. The
class RIDPadDoc represents the RIDPad instance which
includes information that will likely be used to prepare
the RIDPad document.
Figure 4: RIDPad Information Model (Simplified)
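To make the superimposed model concrete, here is a hedged sketch of the Figure 4 constructs as Python dataclasses. RIDPad itself is not implemented this way (it is a Windows application, see Section 4.4); the mark address format shown and the example item/group pairing are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Mark:
    id: str
    address: str                     # encapsulated address into a base source

@dataclass
class Item:
    id: str
    name: str                        # user-defined
    description: str                 # text excerpt from the associated mark
    size: tuple
    location: tuple
    mark: Mark

@dataclass
class Group:
    id: str
    name: str
    size: tuple
    location: tuple
    members: List[Union["Group", Item]] = field(default_factory=list)  # items and nested groups

@dataclass
class RIDPadDoc:
    name: str
    groups: List[Group] = field(default_factory=list)

# Example loosely mirroring Figure 2 (the actual item/group assignment is not specified here).
fonsi = Item("i1", "FONSI", "excerpt text from the mark", (120, 40), (10, 10),
             Mark("m1", "word://decision-notice#range"))   # address format is hypothetical
doc = RIDPadDoc("Road 18 Caves", [Group("g1", "Decision", (300, 200), (0, 0), [fonsi])])
```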
2.2 Schematics Browser
Appeal letters from different appellants in the USFS appeal
process tend to share features. They all contain
appellant names and addresses, refer to a Decision
Notice, and raise issues. Such similarities suggest a
schema for appeal letters. A superimposed schematic is
an E-R schema superimposed over base information
[Bowers 2002]. The Schematics Browser (see Figure 5) is
a superimposed application that demonstrates the use of
superimposed schematics. It is meant to allow USFS
personnel to consider a set of appeal decisions to look for
important issues or trends. The Schematics Browser
might be used to support strategic planning activities.
Figure 5: Schematics Browser
[Figure 4 class diagram: RIDPadDoc(Name); Group(ID, Name, Size, Location); Item(ID, Name, Description, Size, Location); Mark(ID, Address); Groups belong to a RIDPadDoc and contain Items and nested Groups (0..1 to *); each Item is associated with a Mark.]
Figure 5 shows an instance of a USFS appeal decision
schematic opened in the Schematics Browser. The upper
left frame lists instances of the appeal decision schematic.
The user can select one of these instances, and then use
the large middle frame to browse through information
associated with the decision. The "1997 Ranch House
Timber Sale" appeal decision is selected in Figure 5. This
schematic allows the user to easily browse from a particular
issue to the appeal letter(s) where the issue was
raised to the appellant who raised the issue, for example.
Marks into any number of base sources can be associated
with entities, relationships, and attributes (but only one
mark per entity and attribute). When an entity,
relationship, or an attribute has an associated mark, a user
can either visit the base layer or choose to view the
excerpt from within the browser.
Figure 6 shows a simplified version of the information
model the Schematics Browser uses in superimposing the
E-R model over base information. The browser stores all
superimposed information in a relational database. This
structure is a simple generic model that accommodates
arbitrary Entity-Relationship style schematics.
[Figure 6 class diagram: Schematic(Name); SchematicInst(ID, Name); Entity(ID, Name, Description); Attribute(ID, Name, Value); Mark(ID, Address); a Schematic has one or more SchematicInsts, an Entity has Attributes, and entities and attributes may each be anchored to a Mark.]
Figure 6: Schematics Browser's Information Model
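One possible relational layout for this generic model is sketched below using SQLite, purely as an illustration; the table and column names are our guesses rather than the Schematics Browser's actual schema, and the rows echo the partial instance shown later in Figure 7.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Schematic     (Name TEXT PRIMARY KEY);
CREATE TABLE SchematicInst (ID INTEGER PRIMARY KEY, Name TEXT,
                            Schematic TEXT REFERENCES Schematic(Name));
CREATE TABLE Mark          (ID INTEGER PRIMARY KEY, Address TEXT);
CREATE TABLE Entity        (ID INTEGER PRIMARY KEY, InstID INTEGER REFERENCES SchematicInst(ID),
                            Name TEXT, Description TEXT, MarkID INTEGER REFERENCES Mark(ID));
CREATE TABLE Attribute     (ID INTEGER PRIMARY KEY, EntityID INTEGER REFERENCES Entity(ID),
                            Name TEXT, Value TEXT, MarkID INTEGER REFERENCES Mark(ID));
""")

# Rows echoing Figure 7: an Issue entity whose "desc" attribute is anchored to mark 41.
con.execute("INSERT INTO Schematic VALUES ('Appeal Decision')")
con.execute("INSERT INTO SchematicInst VALUES (2, '1997 Ranch House Timber Sale', 'Appeal Decision')")
con.execute("INSERT INTO Mark VALUES (41, 'Win1997.pdf|1|79|115')")
con.execute("INSERT INTO Entity VALUES (1, 2, 'Issue', 'Failed to meet Treaty and trust obligations', NULL)")
con.execute("INSERT INTO Attribute VALUES (1, 1, 'desc', 'The Forest Service i...', 41)")
con.execute("INSERT INTO Attribute VALUES (2, 1, 'number', '1', NULL)")

# Browse from an issue's description to the mark that anchors it.
print(con.execute("""SELECT a.Value, m.Address
                     FROM Attribute a JOIN Mark m ON a.MarkID = m.ID
                     WHERE a.Name = 'desc'""").fetchone())
```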
Figure 7 uses the Schematic Browser's meta model to
show a partial superimposed schematic instance. It shows
an instance of the "1997 Ranch House Timber Sale" appeal
decision schematic (also shown in Figure 5) and an
Issue entity. It also shows the two attribute instances, desc and number, of the Issue entity. The desc attribute is associated with a mark instance (ID 41). In this simple implementation, the schematic instance data has its corresponding type information stored in the Name field.
2.3 Impact of Superimposed Information on
Conceptual Model(s)
Superimposed information introduces one significant modeling construct: the mark. The mark spans between
information at the superimposed layer and information in
the various base layer sources. The mark thus serves as a
bridge between the conceptual model used in the superimposed
layer and the conceptual model used in a base
information source.
[Figure 7 object diagram: Schematic (Name 'Appeal Decision'); SchematicInst (ID 2, Name '1997 Ranch House Timber Sale'); Entity (ID 1, Name 'Issue', Description 'Failed to meet Treaty and trust obligations'); Attribute (ID 1, Name 'desc', Value 'The Forest Service i...') anchored to Mark (ID 41, Address Win1997.pdf|1|79|115); Attribute (ID 2, Name 'number', Value 1)]
Figure 7: Partial Superimposed Schematic Instance
In the RIDPad application, the superimposed model consists
of groups and items, where groups can be nested.
This model is somewhat like a simplified XML model
where groups are analogous to elements. But one important
difference is that items contain marks, as opposed to
PCDATA or other content. In a similar manner, the
Schematics Browser uses a superimposed model that is
similar to an entity-relationship model, but marks may
appear as attribute values. In addition, each entity and
relationship instance may be anchored, i.e., may be in
one-to-one correspondence with a mark.
Any superimposed application, by definition, includes
marks in the superimposed layer. Thus, the conceptual
model used in the superimposed layer must, necessarily,
be extended to include marks in some manner.
The use of marks has no impact on the conceptual model
of the base layer. In fact, the use of marks, in general,
requires no change to the base information or the base
application. Marks encapsulate an address to an information
element in the base source. Thus, the use of
marks requires an addressing scheme for each base source
that participates in a superimposed application. The addressing
scheme may exploit the data model of the base
information source. As an example, we could use XPath
expressions to address information elements in an XML
document. It is also possible to use addressing schemes
that are independent of the data model used in the base
information source. For example, a MS Word document
could be converted to a PDF document and a user could
create a mark using a bounding box where the interior of
the box contains parts of individual characters. Regardless
of the addressing scheme used in a mark, the superimposed
layer is shielded from the details of the
addressing scheme as well as the details of the conceptual
model used in the base information source.
Excerpts and Contexts
Superimposed applications may want to incorporate contents
of base-layer elements in the superimposed layer.
For example, an application might use the extracted base-layer
content as the label of a superimposed element. We
call the contents of a base-layer element an excerpt. An
excerpt can be of various types. For example it may be
plain text, formatted text, or an image. An excerpt of one
type could also be transformed into other types. For example
, formatted text in a word processor could also be
seen as plain text, or as a graphical image.
In addition to excerpts, applications may use other information
related to base-layer elements. For example, an
application may group superimposed information by the
section in which the base-layer elements reside. To do so,
the application needs to retrieve the section heading (assuming
one exists) of each base-layer element. We call
information concerning a base-layer element, retrieved
from the base layer, its context. Presentation information
such as font name and location information such as line
number might be included in the context of a mark. The
context of a base-layer element may contain more than
one piece of information related to the base-layer element
. Each such piece of information is a context element
(and context is a collection of context elements).
Figure 8: A Base-Layer Selection
Figure 8 shows a fragment of an HTML page as
displayed by a web browser. The highlighted region of
the fragment is the marked region. Table 1 shows an
excerpt and a few context elements of this marked region.
The column on the left lists names of context elements
whereas the column on the right shows values of those
context elements.
Name                     Value
Excerpt                  Cheatgrass, Bromus tectorum, grows near many caves in this project area.
HTML                     Cheatgrass, <i>Bromus tectorum</i>, grows near many caves in this project area.
Font name (Inherited)    Times New Roman
Font size (Inherited)    12
Table 1: Sample Context Elements of an HTML Mark
Note that superimposed applications may access context
information that a user might not explicitly access (or
even be aware of). For example, consider the marked
region shown in Figure 8. The HTML markup for this
region (shown in Table 1) does not contain font
information. If a superimposed application needs to
display the mark's excerpt exactly as it is in the base
layer, the application needs to examine the markup of the
enclosing element, possibly traversing to the beginning of
the document (because font characteristics can be
inherited in HTML). The superimposed application may
also need to examine the configuration of the Web
browser to retrieve some or all of the format
specification.
Several kinds of context are possible for a mark. The
following is a representative list of context kinds along
with example context elements for each kind.
Content: Text, graphics.
Presentation: Font name, color.
Placement: Line number, section.
Sub-structure: Rows, sentences.
Topology: Next sentence, next paragraph.
Container: Containing paragraph, document.
Application: Options, preferences.
Contexts can vary across base-layer types. For example,
the context of a mark to a region in a graphics-format
base layer might include background colour and foreground
colour, but not font name. However, the context
of a mark to a selection in a web page might include all
three elements. Contexts can also vary between marks of
the same base-layer type. For example, an MS Word
mark to text situated inside a table may have a "column
heading" context element, but a mark to text not situated
in a table does not include that context element. Lastly,
the context of a mark itself may change with time. For
example, the context of a mark to a figure inside a document
includes a "caption" context element only as long as
a caption is attached to that figure.
Supporting excerpts and contexts for marks is a natural
extension of our original notion of mark as an encapsulated
address. Because we use the same mechanism to
support both contexts and excerpts, we will often use the
term "context" broadly to refer to both kinds of information
about a base-layer element.
Accessing information inside diverse base-layer types
requires superimposed applications to work with a variety
of base information models, addressing mechanisms, and
access protocols. In addition, base applications may have
different capabilities. For example, base applications may
vary in their support for navigation or querying, but users
of superimposed applications may want to navigate
through selected base information elements seamlessly
and uniformly, e.g., using the Schematics Browser. We
use middleware to ease communication between the two
layers and make up for deficiencies of base applications.
And we want the middleware to allow independent evolution
of components in these layers.
By providing a uniform interface to base information and
its context, the middleware reduces the complexity of
superimposed applications and allows superimposed
application developers to focus on the needs of their applications
such as the intricacies of the conceptual model
they aim to superimpose.
SPARCE
The Superimposed Pluggable Architecture for Contexts
and Excerpts (SPARCE) is a middleware for mark and
context management [Murthy 2003]. It is designed to be
extensible in terms of supporting new base-layer types
and contexts, without adversely affecting existing superimposed
applications.
Figure 9: SPARCE Reference Model
Figure 9 shows a reference model for SPARCE. The
Mark Management module implements operations such
as mark creation. It also maintains a repository of marks.
The Context Management module is responsible for retrieving
context of base information. This module
depends on the Mark Management module to locate information
inside base layers. The Clipboard module is
modelled after the Clipboard object in operating systems
such as Macintosh and MS Windows. The Superimposed
Information Management module provides storage service
to superimposed applications. We have developed a
generic representation for information, called the Uni-Level
Description [Bowers 2003], that can represent
information (including superimposed information) structured
according to various data models or representation
schemes, such as XML, RDF or database models, in a
uniform way. In this architecture, superimposed applications
can choose whether they use this module for
storage, or another storage manager.
4.1 Key Abstractions
Table 2 provides a brief description of the classes and
interfaces SPARCE uses for mark and context management
. SPARCE supports context for three classes of
objects: marks, containers, and applications (using the
classes Mark, Container, and Application respectively). A
Container is an abstraction for a base document (or a
portion of that document). An Application is an abstraction
for a base application. SPARCE also defines the
interface Context-Aware Object to any base-layer element
that supports context. The classes Mark, Container, and
Application implement this interface. Superimposed
applications use the class SPARCE Manager to create
new marks and to retrieve existing marks. The SPARCE
Manager maintains a repository of marks.
SPARCE treats context as a property set (a collection of
name-value pairs). Context is the entire set of properties
of a base-layer element and a context element is any one
property. For example, the text excerpt and font name of
a mark are context elements. Modelling context as a
property set makes it possible to support a variety of
contexts, both across and within base layers, without affecting
existing superimposed applications. This model
also provides a uniform interface to context of any base-layer
element, for any base-layer type.
SPARCE uses the interface Context Agent to achieve its
extensibility goal. A class that implements this interface
takes a context-aware object and returns its context. That
is, SPARCE does not access base-layer elements or their
contexts directly. It uses external agents to do so on its
behalf. However, SPARCE is responsible for associating
a context-aware object with an appropriate context agent.
The SPARCE Manager obtains the name of the class that
will be the context agent for a mark from the description
of the marks. The SPARCE Manager instantiates the
context agent class by name whenever a superimposed
application accesses the context of a context-aware object
. Typically, there is one implementation of the context
agent interface per base-layer type. For example, a PDF
Agent is an implementation of this interface for use with
PDF documents. A context agent implementation determines
the constitution of context for its context-aware
objects. SPARCE does not require an implementation to
support particular context elements (nor does it prevent
an implementation from defining any context element).
However, we expect implementations to support kinds of
context elements commonly expected (such as those
listed in Section 3), and use meaningful names for context
kinds and elements.
Class/Interface                    Description
Mark                               A mark to base-layer information.
Container                          The base document (or a portion of it) in which a mark is made.
Application                        The base application in which a mark is made.
Context-Aware Object (interface)   Interface to any base-layer element able to provide context. Classes Mark, Container, and Application implement this interface.
Context                            Context of a context-aware object. It is a collection of context elements.
Context Element                    A single piece of context information about a context-aware object.
Context Agent (interface)          Interface to any base layer. An implementation will retrieve context from a context-aware object.
SPARCE Manager                     Creates, stores, and retrieves marks; associates context-aware objects with appropriate context agents.
Table 2: SPARCE Classes and Interfaces
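To make the division of labour concrete, the sketch below shows how the context-agent interface and the property-set view of context might look in code. It is only an illustration: the method names, the HTML agent's canned values (taken from Table 1), and the dictionary representation of a mark record (mirroring the mark shown later in Figure 11) are our assumptions; the actual implementation is COM/ActiveX based, as described in Section 4.4.

```python
from abc import ABC, abstractmethod

class ContextAgent(ABC):
    """One implementation per base-layer type; retrieves context on SPARCE's behalf."""
    @abstractmethod
    def get_context(self, context_aware_object) -> dict:
        """Return the context as a property set: context-element name -> value."""

class HTMLAgent(ContextAgent):                 # illustrative agent for HTML base documents
    def get_context(self, mark) -> dict:
        # A real agent would resolve mark["Address"] inside the HTML document and walk
        # enclosing elements (and browser settings) to pick up inherited properties.
        return {
            "Excerpt": "Cheatgrass, Bromus tectorum, grows near many caves in this project area.",
            "HTML": "Cheatgrass, <i>Bromus tectorum</i>, grows near many caves in this project area.",
            "Font name (Inherited)": "Times New Roman",
            "Font size (Inherited)": "12",
        }

class SPARCEManager:
    """Keeps the mark repository and pairs each mark with a context agent by name."""
    def __init__(self, agents):
        self.marks = {}                        # mark ID -> mark record
        self.agents = agents                   # agent name -> ContextAgent instance
    def get_mark(self, mark_id):
        return self.marks[mark_id]
    def agent_for(self, mark):
        return self.agents[mark["Agent"]]      # agent class name is stored with the mark
```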
4.2 Creating Marks
A user initiates mark creation after selecting some information
in a base application. The mark creation process
consists of two steps: (1) generating the address of the
selected base information, perhaps with other auxiliary
information (collectively called mark fodder) and (2)
creating a mark object in the mark repository. The address
contained in mark fodder uses the addressing
mechanism appropriate for the base information source.
For example, the address to information inside a PDF
document contains the page number and the starting and
ending word indexes; the address to a selection in a
spreadsheet contains the row and column numbers for the
first and last cell in the selection. (Other addressing
schemes are possible for these base types.)
Figure 10 depicts two possible mark-creation scenarios as
a UML Use Case Diagram. (The boxes in this figure
denote system boundaries; the broken arrows denote
object flows.) In both scenarios, a user starts mark
creation in a base application and completes it in a
superimposed application. In the first scenario, labelled
"Copy", the user is able to use the normal copy operation,
e.g., of a word processor, to create the mark fodder. In
the "Mark" use case, the user invokes a newly introduced
function (such as the Mark menu item shown in Figure
8). The superimposed application retrieves the mark
fodder from the Clipboard, and passes it to the SPARCE
Manager. The SPARCE Manager creates a mark object
(from the fodder), assigns it a unique ID, stores it in the
mark repository, and returns the new object to the
superimposed application.
[Figure 10 use-case diagram: a User invokes the Copy or Mark use case in the Base Application; mark fodder passes through the Operating System Clipboard to the Complete use case in the Superimposed Application]
Figure 10: Two Mark-creation Scenarios
The first scenario allows a user to select base information
in a preferred base application and copy it to the
Clipboard without having to learn any new application,
tool, or process to create marks. However, supporting this
scenario requires cooperative base applications such as
Microsoft Word and Excel. Some base applications do
not directly support Clipboard operations, but they
provide mechanisms (such as plug-ins or add-ins) to
extend their environments. A special mark creation tool
or menu option can be inserted in to the user interface of
such applications. The Mark use case in Figure 10
demonstrates this scenario. Early versions of Adobe
Acrobat and Netscape Navigator are examples of base
applications in this category.
Figure 11 shows the internal representation of a mark.
This mark corresponds to the selection in the HTML page
shown in Figure 8. Superimposed applications do not
have visibility of a mark's internal representation. They
simply use the mark's interface to access its details.
4.3 Accessing Marks and Context
A superimposed application sends a mark ID to the
SPARCE Manager to retrieve the corresponding mark
object from the marks repository. The SPARCE Manager
instantiates an implementation of the context agent interface
that is appropriate for the mark. The superimposed
application can work with the mark object directly (for
example, to navigate to the base layer) or can interact
with the mark's context agent object (for example, to retrieve
mark context).
With a context object in hand, a superimposed application
can find out what context elements are available. It can
also retrieve values for context elements of interest. The
superimposed application may use a context-element's
value in various ways. For example, it may use the text
content of the mark as a label, or it may apply the font
characteristics of the marked region to some
superimposed information.
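The following sketch illustrates this access pattern. It assumes hypothetical Python bindings (SparceManager methods get_mark and get_context_agent, and element names such as "excerpt" and "font"); the real implementation is COM/ActiveX based, so these names are only stand-ins.

# Illustrative access pattern only; the names below are assumed, not SPARCE's real API.
def label_from_mark(sparce_manager, mark_id):
    mark = sparce_manager.get_mark(mark_id)          # retrieve mark object from repository
    agent = sparce_manager.get_context_agent(mark)   # agent appropriate for the base type
    context = agent.get_context(mark)                # collection of context elements

    # Discover what is available, then pull the elements of interest.
    available = context.element_names()
    label = context.value("excerpt") if "excerpt" in available else str(mark_id)
    font = context.value("font") if "font" in available else None
    return label, font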
<Mark ID="HTML2003Apr22065837YZXsmurthy">
<Agent>HTMLAgents.IEAgent</Agent>
<Class>HTMLMark</Class>
<Address>4398|4423</Address>
<Description>Noxious Weeds in ea1.html</Description>
<Excerpt>Cheatgrass, Bromus tectorum,
grows near many caves in this project
area.</Excerpt>
<Who>smurthy</Who>
<Where>YZX</Where>
<When>2003-04-22 06:58:37</When>
<ContainerID>cdocsea1html</ContainerID>
</Mark>
Figure 11: Internal Representation of a Mark
For ease of use, our design also allows the application to
retrieve the value of a context element from the context-aware
object or even from the context-agent object. An
application developer may choose the access path that is
most convenient to his or her particular situation.
4.4 Implementation
We have implemented SPARCE for Microsoft Windows
operating systems using ActiveX technology [COM]. The
current implementation includes support for the following
base applications: MS Word, MS Excel, Adobe Acrobat
(PDF files), and MS Windows Media Player (a variety of
audio/video file types). The agents for these base applications
support the following kinds of context: content,
presentation, containment, placement, sub-structure,
topology, document, and application. (Some possible
context elements of these kinds are listed in Section 3.)
We have implemented reusable view facilities such as the
Context Browser to display the complete context of a
context-aware object, and tabbed property pages to display
properties of context-aware objects. We have also
implemented a few testing aids. For example, we have
implemented a generic context agent with limited
functionality (that can be used with any base-layer type)
to test integration with new base-layer types. The Context
Browser is also a good testing tool when support for a
new base type is added or when the definition of context is
altered for a base type.
4.5 Extensibility
Supporting new context elements is straightforward in
SPARCE: The new context element name is just added to
the property set. Superimposed applications may ignore
the new context elements if they are not capable of
handling them.
Supporting new base-layer types is more involved. It requires
a developer to understand the base layer and its
addressing mechanisms. The developer must implement
the context agent interface for the base-layer type. And
the developer must implement a means to allow users to
select regions within this type of base information and
copy mark fodder to the Clipboard. As we mention in
Section 4.2, the developer might be able to exploit extensibility
mechanisms of base applications for creating
mark fodder.
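As a rough sketch of what supporting a new base type involves, the code below outlines a context agent for a hypothetical plain-text base type. The interface shown (navigate and get_context) and the mark fields are assumptions made for illustration; the actual SPARCE context agent interface is defined in COM.

# Hypothetical context agent for a plain-text base type; method names are assumed.
class TextFileContextAgent:
    """Implements the context agent role for .txt files (illustrative only)."""

    def navigate(self, mark):
        # Open the file and highlight the addressed character range.
        path, start, end = mark["container"], mark["start"], mark["end"]
        print(f"open {path} and highlight characters {start}-{end}")

    def get_context(self, mark):
        with open(mark["container"], encoding="utf-8") as f:
            text = f.read()
        start, end = mark["start"], mark["end"]
        return {
            "excerpt": text[start:end],                        # content kind
            "preceding text": text[max(0, start - 80):start],  # placement kind
            "following text": text[end:end + 80],
            "document": mark["container"],                     # document kind
        }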
We have used the extensibility mechanism to add support
for MS Word, MS Excel, Adobe Acrobat, and MS
Windows Media Player. It took us about 7-12 hours to
support each of these base types. The SPARCE implementation
and the superimposed applications were not
changed or recompiled when new base types were added.
4.6 Evaluation
Our observations show that developing superimposed
applications with SPARCE is relatively easy. Although
the effort required to develop a superimposed application
depends on the specifics of that application, using abstractions
such as marks and contexts alleviates the need to
model those entities in each application. For example,
RIDPad is a complex application due to its graphical nature
and the variety of operations it supports. However,
we were able to develop that application in approximately
30 hours. As we added support for new base types using
the extensibility mechanism of SPARCE, RIDPad was
able to automatically work with the new base types.
The original Schematics Browser application worked
only with PDF files. The application was responsible for
managing marks and interacting with Adobe Acrobat.
The application had no context-management capabilities.
We altered this application to use SPARCE and it
immediately had access to all base-layer types
SPARCE supported (and those it will support in the future).
In addition, it also had access to context of base
information. In less than 7 hours, we were able to alter
the Schematics Browser to use SPARCE.
There are many ways to deploy the components of
SPARCE and its applications (based on application and
user needs). For example, RIDPad is expected to be a
single-user application. Thus, all components of RIDPad
and SPARCE may run on a single computer. In contrast,
the Schematics Browser is likely to be used by many
USFS personnel to browse schematic instances of past
appeal decisions. That is, shared repositories of superimposed
information and marks can be useful. Based on
such analyses, we are currently in the process of evaluating
different deployment configurations of SPARCE and
its applications. In addition to studying performance of
these configurations, we intend to explore the benefits of
caching context information.
Issues in Context Representation
One of the areas of SPARCE design we are still exploring
is the representation of context. We have considered defining
contexts via data types (say, a context type for each
base type), but feel that approach would be too restrictive.
The set of context elements available for a mark might
vary across a document. For example, a mark in a Word
document might have a "column name" context element
if it is in a table, but not otherwise. It is even possible
that the context elements available for a single mark may
change over time. For instance, the "image" context element
might only be available while the invocation of the
base application in which the mark was originally created
is still running. A context type could define all possible
context elements, where a particular mark produces null
values on elements undefined for it, but that approach
complicates the application programming interface (especially
for context elements of scalar types such as
numbers and strings). Another issue with types is making
it possible to write a superimposed application without
specifying in advance all the base sources it will be used
with (and their context types). We have demonstrated
with our current approach the ability of a superimposed
application to work with new context agents without
modifying the application. The superimposed application
can make use of any context elements it knows about
(from the elements the new agent supplies). While
inheritance schemes can support some polymorphism in
types, they do not seem adequate to support the arbitrary
kinds of overlap we have seen among context elements
across base types.
Another issue is the internal structure of a context. Currently
a context is a property set of context elements,
where each element is a name-value pair. Context elements
also have kinds (such as presentation and
substructure), which allow grouping of context elements in
user interfaces. We are considering giving contexts an
explicit hierarchical structure. There are several alternatives
for such an approach: make a context a compound
object capable of holding sub-contexts, use qualified
names (for example, format.font.fontsize), or employ a
hierarchical namespace as in a directory structure. We do
not see great differences in these three alternatives. The
advantage of some kind of hierarchical structure, however,
versus the current flat structure might come in the
interface between superimposed applications and the
context agent. Rather than the application asking for
context elements individually (or for all context elements),
it could ask for a particular subgroup of elements
of interest.
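To make the qualified-name alternative concrete, the sketch below layers prefix-based grouping on top of the current flat property set without changing the agents. It is illustrative only; the element names are invented.

# Sketch: grouping a flat property set by qualified names (assumed element names).
flat_context = {
    "format.font.name": "Verdana",
    "format.font.fontsize": 10,
    "content.excerpt": "Cheatgrass, Bromus tectorum, ...",
}

def subgroup(context, prefix):
    """Return only the elements under a qualified-name prefix, e.g. 'format.font'."""
    prefix = prefix + "."
    return {name[len(prefix):]: value
            for name, value in context.items() if name.startswith(prefix)}

print(subgroup(flat_context, "format.font"))   # {'name': 'Verdana', 'fontsize': 10}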
A methodological issue related to context structure is how
to coordinate the naming of context elements across multiple
base types and multiple superimposed applications.
There is no requirement currently that the "same" context
element be named the same thing for different base types
(or, in fact, in alternative context agents for the same base
type). Even if the same name is used, the types of the
associated values could be different. With an individual
or small group writing context agents and superimposed
applications, informal methods will work for consistency
in naming. However, a more structured process will be
needed at the point that context agents and superimposed
applications are being produced by different organizations.
Related Work
Memex and Evolutionary List File were visionary proposals
for organizing information from diverse sources
[Bush 1945, Nelson 1965]. Hypertext and compound
document models are two classes of systems that attempt
to realize these visions. Hypertext systems are helpful in
preparing information for non-linear media. Although
designed to help organize information, they tend to be
limited in the types of source, granularity of information,
and location of information that can be organized. For
example, NoteCards and Dexter both require information
consulted to be stored in a proprietary database [Halasz
1987, 1994]. Intermedia can address base information
only at sub-document granularity [Yankelovich 1988].
Hypertext systems typically do not support retrieval of
contextual information from sources.
Compound document systems are helpful in preparing
information for linear media (such as paper). They can
address base information at both document and sub-document
granularity, but they tend to constrain display
models of tools developed. For example, OLE 2 requires
rectangular display areas [COM]. Like SPARCE (and
unlike hypertext systems), compound document systems
provide architectural support for building applications.
Compound document systems support only retrieval of
contents. Information sources decide the content, its format,
and geometry.
Table 3 provides a brief comparison of SPARCE with
hypertext and compound document systems. NoteCards,
Intermedia, and Dexter are hypertext systems. OpenDoc
[Apple 1994] and OLE 2 are compound document systems.
                  NoteCards  Intermedia  Dexter   OpenDoc  OLE 2    SPARCE
Base types        2          3           Any      Any      Any      Any
Base location     Custom     Files       Custom   Any      Any      Any
Base granularity  Whole      Part        Both     Both     Both     Both
Context kinds     None       None        None     Content  Content  Many
Table 3: SPARCE Compared with Related Systems
Multivalent documents [Phelps 2000b] allow multiple
behaviours to be superimposed on to a single base document
using an abstraction similar to the context-agent
interface in SPARCE. The system uses contents of a
region of interest (and its surrounding), but only to
address that region [Phelps 2000a].
In the area related to dynamism and representation of
context, OLE Automation [Microsoft 1996] provides an
interesting comparison to our approach. An OLE automation
object exposes an interface to the type information
object (ITypeInfo) that corresponds to itself. The type
information object is resident in a type library (that contains
type information for possibly many automation
object types). Changing type information (deleting members
or adding new members) requires creation of a new
type-information object and a new type library. Although
the framework allows each instance of an object type to
return a different type-information object, the requirement
to create new type information and a type library
makes it impractical to do so. Consequently, type information
of an OLE automation object tends to include all
possible elements, without regard to whether those members
are relevant in a given situation. For example, the
type information for a Range object of a MS Word document
contains over 30 members [Microsoft]. The value of
a member of scalar type that is not applicable for a given
Range object will be the equivalent of NULL (and the inapplicable
collection-type members will be empty). In
SPARCE, the context of a mark contains only those elements
that apply to the mark.
It might seem that links in OLE 2 compound documents
provide similar functionality to marks. An OLE 2 compound
document supports only retrieval of contents from
links. It does not provide a mechanism from within a
compound document to obtain the OLE automation object
that corresponds to a link (even when the link source
defines an automation object corresponding to the region
the link represents). As a consequence, context-like information
about the linked region cannot be accessed via
the link directly. For example, linking a selection S from
the main body of MS Word document D1 into Word
document D2 makes D2 a compound document. However
, the Range object for S (which is available in D1) is
not accessible through the link in D2. A user needing
more information about S must navigate to the source
document D1. SPARCE not only provides the ability to
link information via marks, it also provides access to
context of the mark through the mark itself.
Discussion and Future Work
One way to view our work is that we have extended the
standard modelling building blocks (integers, floats,
dates, strings, etc.) with a new primitive--mark--that
encapsulates an information element from an external
source. A conceptual model (extended to be a superimposed
model) can permit the use of marks in any of its
structuring constructs (tuples, relationships, attributes,
entities, etc.), without regard to the complexities of the
underlying element. Support for context allows superimposed
applications to extract information from that
element and its surrounding elements or the information
source in a controlled manner, to augment what is explicitly
stored in the superimposed model.
As a means to provide "new models for old data," our
approach is quite different from data integration approaches
such as mediators and data warehouses. Such
approaches seek to provide an integrated view through a
global schema describing base information that faithfully
reflects the structure of the base source. In our work, we
are exploring the use of selected base information
elements (using marks). Note that the selection of marks
is often performed manually, by a domain expert (e.g., a
clinician or a USFS scientist), for a specific purpose (e.g.,
to treat a patient or prepare a RID). We have no
requirement to represent the structure or relationships
present within the base layer. Rather, we rely on the
original application to provide interpretation for a mark
and, if appropriate, to describe any relationships among
marks. Standard integration approaches describe information
from various sources and expect the mediator to
be responsible for its interpretation.
The superimposed layer, by definition, allows the user to
mix marks with additional information that may not exist
in any of the base information sources. Such information
may augment the base layer, e.g., by making implicit information
explicit (e.g., "this issue relates only to
Alternative A") or by providing commentary. Another
use of superimposed information is to link related
information from multiple sources, e.g., by placing marks
in the same group or by explicitly linking between
information elements in two sources. Finally, the
superimposed approach permits reinterpretations that are
much less structured than the original. For example, base
information elements can be grouped or linked without
having to observe any type constraints imposed in the
original source.
Exploring different representations of context and ways
to reconcile context definition from different context
agents is one area of our future work. Understanding the
needs of new superimposed conceptual models (other
than those we have described), and exploiting contexts to
superimpose richer conceptual models is another area of
our interest. A natural application of superimposed conceptual
models would be to create means of querying
jointly over superimposed and base information. We are
also interested in superimposed applications that facilitate
"schema later" organization of diverse information. That
is, a user can start accumulating and arranging information
items of interest, and--as he or she starts forming a
mental conceptual model--incrementally define a
superimposed model that reflects it.
Acknowledgements
This work has been supported by US NSF grants IIS
9817492 and IIS 0086002. We thank John Davis for
helping us understand the USFS appeal process. We also
thank the anonymous reviewers for their comments.
References
Acrobat SDK: Acrobat Software Development Kit,
Adobe Systems Incorporated.
Apple (1994): The OpenDoc Technical Summary. Apple
World Wide Developers Conference Technologies CD,
San Jose; CA.
Ash, J., Gorman P., Lavelle, M., Lyman J., Delcambre,
L., Maier, D., Bowers, S. and Weaver, M. (2001):
Bundles: Meeting Clinical Information Needs. Bulletin
of the Medical Library Association 89(3):294-296.
Bowers, S., Delcambre, L. and Maier, D. (2002): Superimposed
Schematics: Introducing E-R Structure for In-Situ
Information Selections. Proc. ER 2002, pp 90
104, Springer LNCS 2503.
Bowers, S. and Delcambre, L. (2003): The Uni-Level
Description: A Uniform Framework for Representing
Information in Multiple Data Models. Proc. of the 22nd
International Conference on Conceptual Modeling (ER
2003), Chicago, IL, October 2003.
Bush, V. (1945): As We May Think. The Atlantic
Monthly; 1945; July.
Delcambre, L., Maier, D., Bowers, S., Weaver, M., Deng,
L., Gorman, P., Ash, J., Lavelle, M. and Lyman, J.
(2001): Bundles in Captivity: An Application of Superimposed
Information. Proc. ICDE 2001, Heidelberg,
Germany, pp 111-120.
Gorman, P., Ash, J., Lavelle, M., Lyman, J., Delcambre,
L. and Maier, D. (2000): Bundles in the wild: Managing
information to solve problems and maintain situation
awareness. Lib. Trends 2000 49(2):266-289.
Halasz, F.G., Moran, T.P. and Trigg, R.H. (1987):
NoteCards in a Nutshell. Proc. ACM CHI+GI Conference,
New York, NY, pp 45-52, ACM Press.
Halasz, F.G. and Schwartz, F. (1994): The Dexter Hypertext
Reference Model. Communications of the ACM,
37(2):30-39, ACM Press.
Maier, D. and Delcambre, L. (1999): Superimposed Information
for the Internet. Proc. WebDB 1999 (informal),
Philadelphia, PA, pp 1-9.
COM: The Component Object Model Specification, Microsoft
Corporation.
Microsoft Corporation. (1996): OLE Automation Pro- | superimposed information;SPARCE .;excerpts;Conceptual modelling;software architecture;context |
158 | Quality-of-Service in IP Services over Bluetooth Ad-Hoc Networks | Along with the development of multimedia and wireless networking technologies, mobile multimedia applications are playing more important roles in information access. Quality of Service (QoS) is a critical issue in providing guaranteed service in a low bandwidth wireless environment. To provide Bluetooth-IP services with differentiated quality requirements, a QoS-centric cascading mechanism is proposed in this paper. This innovative mechanism, composed of intra-piconet resource allocation, inter-piconet handoff and Bluetooth-IP access modules, is based on the Bluetooth Network Encapsulation Protocol (BNEP) operation scenario. From our simulations the handoff connection time for a Bluetooth device is up to 11.84 s and the maximum average transmission delay is up to 4e-05 s when seven devices join a piconet simultaneously. Increasing the queue length for the Bluetooth-IP access system will decrease the traffic loss rate by 0.02 per 1000 IP packets at the expense of a small delay performance. | Introduction
Wireless communications have evolved rapidly over the past
few years. Much attention has been given to research and
development in wireless networking and personal mobile
computing [10,17]. The number of computing and telecommunications
devices is increasing and consequently, portable
computing and communications devices like cellular phones,
personal digital assistants, tablet PCs and home appliances
are used widely. Wireless communication technologies will
offer the subscriber greater flexibility and capability than ever
before [14].
In February 1998, mobile telephony and computing leaders
Ericsson, Nokia, IBM, Toshiba, and Intel formed a
Special Interest Group (SIG) to create a standard radio interface
named Bluetooth [13]. The main aim of Bluetooth
has been the development of a wireless replacement for cables
between electronic devices via a universal radio link in
the globally available and unlicensed 2.4 GHz Industrial Scientific
and Medical (ISM) frequency band [9]. Bluetooth
technologies have the potential to ensure that the best services
, system resources and quality are delivered and used efficiently
. However, global services will embrace all types of
networks. Therefore, Bluetooth-based service networks will
interconnect with existing IPv4/v6 networks to provide wide
area network connectivity and Internet access to and between
individuals and devices [7]. In Reference [2], the BLUEPAC
(BLUEtooth Public ACcess) concepts presented ideas for enabling
mobile Bluetooth devices to access local area networks
in public areas, such as airports, train stations and supermarkets
.
The Bluetooth specification defined how Bluetooth-enabled
devices (BT) can access IP network services using
the IETF Point-to-Point Protocol (PPP) and the Bluetooth
Network Encapsulation Protocol (BNEP) [12,19,20].
By
mapping IP addresses onto the corresponding BT addresses
(BD_ADDR), common access across networks is enabled [3].
This means that devices from different networks are allowed
to discover and use one another's services without the need
for service translation or user interaction. To support communications
between all Bluetooth-based home appliances and
the existing IP world, IPv6 over Bluetooth (6overBT) technology
was proposed [8]. The 6overBT technology suggested
that no additional link layer or encapsulation protocol headers
be used. However, the development of 6overBT technology
is still in progress. The BNEP protocol was referred to as the
key technology in our research.
With the development of applications and wireless
networking technologies, mobile multimedia applications are
playing more important roles in information access [21].
To provide responsive multimedia services (high QoS) in a
Bluetooth-IP mobile environment, a pre-requisite for our discussion
is seamless data transmission. A cascading system
with a fair resource allocation scheme, a seamless handoff strategy,
and a transparent bridging system that provides a way
of integrating IP networks and Bluetooth-based service networks
to relay multimedia applications within residential and
enterprise environments is thus inevitable [6].
The rest of this paper is organized as follows. The following
section describes Bluetooth background information,
including the piconet, scatternet, IP over Bluetooth service architecture
. Section 3 presents the proposed QoS-centric cascading
mechanism, including the intra-piconet resource allocation
, inter-piconet handoff, and Bluetooth-IP access system
. The simulation model and performance analysis of the
queue length, loss rate, and average delay are introduced in section
4. Section 5 presents our concluding remarks.
Bluetooth-IP services
Figure 1 is a Bluetooth-IP service network environment.
When a BT user wants to receive a networked video stored
on a remote video server, the BT (maybe a slave in a picocell
) will initiate a connection procedure with the picocell
master. The master initiates a connection procedure with the
video server through the Bluetooth-IP access system. During
the connection state, the video stream will be fed through the
access system, master to BT user (the dotted line of figure 1).
2.1. Piconet
When two BTs establish a connection, one BT will act as
master and the other as the slave. More BTs can join the piconet
. A single piconet can accommodate up to seven active
slaves. If a slave does not need to participate in the channel
, it should still be frequency-hop synchronized and switch
to a low-power Park mode. The master can also request that
the slave enter the Park mode so the master can communicate
with more than seven slave BTs. The master determines the
frequency-hop sequence, the timing and the scheduling of all
packets in the piconet.
Within a piconet, the master initiates the connection procedure
, although the application may necessitate that the master
and slave roles be exchanged. For instance, such an exchange
is necessary when a BT receives network services through
Bluetooth-IP access systems. In this circumstance, the access
system provides an IP service that may be used by many
other BTs. This situation requires that the access system be
the master of the piconet and the other BTs to act as slaves.
However, when the device is initially activated and looks for
an access system, it may be the device initiating the connection
. This will make the device the master and the access
system the slave [1].
2.2. Scatternet
Several piconets can be established and linked together to
form an ad-hoc network. This is called a scatternet. A BT
can participate in two or more overlaying piconets by applying
time multiplexing. To participate on the proper channel,
the BT should use the associated master device address and
proper clock offset to obtain the correct phase. A BT can act
as a slave in several piconets, but only as a master in a single
piconet: because two piconets with the same master are
synchronous and use the same hopping sequence. This synchronization
makes them one and the same piconet.
Figure 1. Bluetooth-IP service network environment.
Figure 2. An inter-piconet node in the scatternet.
Time multiplexing must be used to switch between piconets
. In figure 2, an inter-piconet node is capable of time-sharing
between multiple piconets. This allows the traffic to
flow within and between the piconets [18]. In the case of data
links only, a BT can request to enter the Hold or Park mode
in the current piconet during which time it may join another
piconet by just changing the channel parameters. BTs in the
Sniff mode may have sufficient time to visit another piconet
in between the Sniff slots. If audio links are established, other
piconets can only be visited in the non-reserved slots.
2.3. IP networking
The LAN Access profile defines IP service access using
PPP over RFCOMM. TCP/IP runs on the PPP protocol.
BTs can receive IP services using the PPP protocol [20].
When a BT wants to receive IP service, it will find a
LAN Access Point (LAP) within radio range through inquiry
and paging. After the data links have been setup, the
LMP (Link_Manager Protocol) will process the master/slave
switch and the L2CAP/RFCOMM/PPP connection will then
be established. A suitable IP address is negotiated between
devices in the PPP layer. The BT can forward IP packets
through the LAP.
Encapsulating an Ethernet packet inside a PPP packet is
not an efficient solution. Moreover PPP is not sufficient for
ad-hoc networks that contain multiple wireless hops. The best
way of providing networking would be to Ethernet over the
L2CAP layer. The Bluetooth Network Encapsulation Protocol
(BNEP) was pursued by the Bluetooth SIG PAN working
group to provide an Ethernet-like interface to IP services [19],
as depicted in figure 3.
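As a rough illustration of the idea, the sketch below wraps an Ethernet-style frame (destination and source addresses plus a protocol type) in a payload that could be carried over an L2CAP channel. The field layout is simplified for illustration and is not a byte-accurate rendering of the BNEP specification [19]; the addresses and channel ID are fabricated.

import struct

def ethernet_like_frame(dst_mac: bytes, src_mac: bytes, proto: int, payload: bytes) -> bytes:
    """Build a simplified Ethernet-style frame: 6-byte dst, 6-byte src, 2-byte protocol type."""
    assert len(dst_mac) == 6 and len(src_mac) == 6
    return dst_mac + src_mac + struct.pack("!H", proto) + payload

def l2cap_packet(channel_id: int, data: bytes) -> bytes:
    """Prepend a basic L2CAP header (2-byte length, 2-byte channel ID) to the frame."""
    return struct.pack("<HH", len(data), channel_id) + data

ip_packet = b"\x45\x00"                          # an IPv4 packet would go here
frame = ethernet_like_frame(b"\x00\x11\x22\x33\x44\x55",
                            b"\x66\x77\x88\x99\xaa\xbb",
                            0x0800, ip_packet)   # 0x0800 = IPv4
pdu = l2cap_packet(0x0040, frame)                # a dynamically allocated L2CAP channel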
QoS-centric cascading mechanism
From figure 1, a BT that accesses multimedia services on
the Internet may do so through a piconet or scatternet. QoS
is critical when transmitting across different network segments. To
provide Bluetooth-IP services with differentiated quality requirements
, a QoS-centric cascading mechanism is proposed
in our research to tunnel the guaranteed applications. The operational
modules are illustrated in figure 4. The innovative
mechanism consists of three modules: intra-piconet resource
allocation, inter-piconet handoff and Bluetooth-IP access system.
These modules were developed based on the BNEP operation
scenarios.
Figure 3. BNEP protocol stack.
Figure 4. The proposed QoS-centric cascading modules.
3.1. Intra-piconet resource allocation
Two service types: Synchronous Connection Oriented (SCO)
and Asynchronous Connectionless Link (ACL) were defined
in the Bluetooth service environment. Through the QoS setup,
the ACL link can be configured to provide QoS requirements.
The ACL link can be configured with the Flush Timeout setting
, which prevents re-transmission when it is no longer
useful. Acknowledgement can be received within 1.25 ms.
This makes the delay small and it is possible to perform re-transmission
for real-time applications. The ACL link also
supports variable and asymmetrical bandwidth requirement
applications.
Currently, the IP QoS architecture is based purely on
IP-layer decision making, packet buffering and scheduling
through a single link-layer service access point. The mechanism
in the link layer has a better understanding of the communications
medium status. However, simplicity has been an
important design objective for the Bluetooth interface, and the
number of IP-based protocols is becoming increasingly
complex. As depicted in figure 5, the QoS architecture at the
network layer such as differentiated and integrated services
provides different services to applications. These services at
the high layer are sufficient depending on the particular scenario.
With shared bandwidth and a re-transmission scheme,
the Host Controller (HC) buffering will introduce delays. The
HC buffer size can be decreased to reduce the buffer delay.
Figure 5. Bluetooth IP QoS mechanism.
Figure 6. General QoS framework.
In the Bluetooth-based service layer, the L2CAP layer informs
the remote side of the non-default parameters and sends a
Configuration Request. The remote L2CAP side responds
with a Configuration Response that agrees or disagrees with
these requirements. If the Configuration Response disagrees,
the local side sends other parameters to re-negotiate or stop
the connection. The Link Manager uses the poll interval and
repetitions for broadcast packets to support QoS. The poll interval
, which is defined as the maximum time between subsequent
transmissions from the master to a particular slave on
the ACL link, is used to support bandwidth allocation and latency
control. Figure 6 depicts the general framework, which
defines the basic functions required to support QoS.
In figure 6, the high layer sends the traffic and QoS
requirements for the QoS flow to the Resource
Requester (RR). Based on this request, the RR generates a resource
request to the Resource Manager (RM). When the RM
accepts the request the RR configures parameters to the local
Resource Allocation (RA) entity. The RA actually reserves
resources to satisfy the QoS requirements. The QoS is satisfied
by the application that receives the appropriate resource.
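A minimal sketch of this RR/RM/RA interaction is given below. The class names and the simple bandwidth-based admission test are assumptions used only to make the roles of the three entities concrete; they are not the actual Bluetooth QoS implementation.

# Illustrative RR -> RM -> RA flow; not the actual Bluetooth QoS implementation.
class ResourceManager:
    def __init__(self, capacity_bps):
        self.available = capacity_bps

    def admit(self, requested_bps):
        return requested_bps <= self.available      # simple admission test (assumed)

class ResourceAllocator:
    def reserve(self, link, requested_bps, latency_ms):
        link["reserved_bps"] = requested_bps        # e.g. translated into a poll interval
        link["max_latency_ms"] = latency_ms

class ResourceRequester:
    def __init__(self, manager, allocator):
        self.manager, self.allocator = manager, allocator

    def request(self, link, requested_bps, latency_ms):
        if not self.manager.admit(requested_bps):
            return False                            # flow rejected, application notified
        self.manager.available -= requested_bps
        self.allocator.reserve(link, requested_bps, latency_ms)
        return True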
In our scheme, bluetooth-based operation identifies the following
functions and procedures to determine the amount
of resources assigned to a traffic flow. A polling algorithm
determines which slave is polled next and bandwidth is assigned
to that slave. The slave uses the air-interface scheduler
to determine which data to send when it is polled. An
inter-piconet scheduling algorithm is used by the inter-piconet
node to efficiently control the traffic flow between two piconets
. Bluetooth also uses the Flush Timeout setting to determine
the maximum delay involved with re-transmissions.
The Link Manager module in the master selects the baseband
packet type for transmission in the single, three and five time
slots [4].
3.2. Inter-piconet handoff
The inter-piconet environment suffers many challenges. First,
the formation of Bluetooth networks is spontaneous and the
problem of scatternet formation has not been defined in the
Bluetooth specification [16]. Several studies have addressed
these issues in the formation of an efficient scatternet [5,22].
The data forwarded between nodes must be sent via the
master. Sometimes this data will travel through multiple hops
in the scatternet. Efficient routing protocols are needed for
Bluetooth. The inter-piconet node acts as the bridge over which a
piconet communicates with other piconets. Smarter
traffic control is needed to coordinate with these masters. Different
data rates exist on each link in different scenarios. The
inter-piconet node becomes the bottleneck for the scatternet.
In a piconet, the master controls all of the slaves in the piconet
. The inter-piconet node joins two or more piconets,
but it is only active in one piconet at a time. To efficiently
move traffic between different piconets scheduling is needed
to coordinate the inter-piconet and the master. In Reference
[11], inter-piconet (IPS) and intra-piconet scheduling (IPRS)
were presented to interact with one another to provide an efficient
scatternet scheduler, as illustrated in figure 2. The IPS
and IPRS must coordinate with one another to schedule when
the inter-piconet node belongs to which piconet and how and
when to transfer data packets between the different masters.
The Bluetooth connection process includes two steps: the inquiry
procedure and the paging procedure. This causes the bottleneck
in the handoff. Two situations were discussed in
BLUEPAC. When the Access Point is the master, the mobile
node joins the piconet as a slave. The Access Point can efficiently
control the traffic to the Internet. However, the disadvantage
is that the Access Point must periodically enter the
inquiry stage to find newly arrived mobile nodes. This will
interrupt the Access Point's packet transfers to the Internet. In
a situation in which the BT is the master, the Access Point
involves itself in multiple piconets. The BT can actively inquire
the Access Point when it wants to connect to the Access
Point. However, the traffic control becomes difficult and
complex when the Access Point must switch between various
piconets.
The scheduler still does not support seamless handoffs
for real-time applications. To solve this problem, reference
[15] proposes the Next Hop Handoff Protocol (NHHP) to support
fast handoffs. The major focus of the strategy is on reducing the connection
time. A scheme that passes the inquiry
information to the next Access Point was used to avoid wasting
time in the inquiry stage. The disadvantage is that the
Access Points are divided into two categories: Entry Points,
which constantly make inquiries for new BTs and pass their information
to the Access Points in the internal regions, which have the
responsibility for the handoff. If a newly arrived BT is initiated
in an internal region, the scheme doesn't work.
To resolve the above problem, a fast handoff scheme was
proposed. This scheme assumes the Bluetooth service environment
is a micro-cellular network architecture. However, it
also adapts to Bluetooth as a macro-cellular network. Based on
the fast handoff mission, the major focus is reducing the connection
procedure that causes significant delay. We assumed
the following conditions: the Access Point and mobile BT periodically
scan for page attempts and inquiry requests. To obtain
seamless and efficient handoff support, the Access Point
RF range should cover the nearby Access Point. The neighborhood
set records all Access Point locations.
3.2.1. Connection procedure
As depicted in figure 7, when the mobile BT accesses the Internet
it initiates an inquiry to the Access Point and makes a
connection. The Access Point passes the BT addresses and
clock information to the nearest Access Point according to
the neighborhood set. The nearest Access Point uses this
information to page the BT and form the piconets. These
piconets form the scatternet and the BT becomes the inter-piconet
node between them. The BT depends on the received
signal strength indicator (RSSI) to determine which Access
Point to use to access the Internet. The BT is in the connection
state with only one dedicated Access Point; the remaining piconets
are all in the Hold state.
3.2.2. Handoff
The BT periodically monitors the RSSI and bit error rate.
When the RSSI decreases to the lower threshold a handoff
may take place. To know where the mobile BT moves, the
BT detects which RSSI becomes stronger. It then informs the
Access Points and Foreign Agent that a handoff is imminent
(figure 8(A)). The BT moves from the Hold state to the connection
state in the piconet that contains the approaching Access Point
(AP0 in figure 8(B)). The routing path also changes to a new
path. Additional caches may be needed in the Access Point
to avoid packet losses. The new nearest Access Point (APa
in figure 8(C)), according to the neighborhood set, is notified
that the mobile BT is within range and receives the BT information.
It begins to page the BT and enters the Hold state with
the BT. The original connection state also turns into the Hold
state in the piconet that contains the original Access Point (AP1 in
figure 8(D)).
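The sketch below captures the RSSI-driven part of this procedure. The threshold values and function names are assumptions made for illustration; a real device would read RSSI through the HCI interface.

# Simplified RSSI-triggered handoff decision (thresholds and APIs are assumed).
LOWER_THRESHOLD = -80   # dBm: current link is getting weak
UPPER_THRESHOLD = -65   # dBm: candidate link is strong enough to switch to

def check_handoff(current_ap, candidate_aps, read_rssi):
    """Return the AP to switch to, or None if no handoff is needed."""
    if read_rssi(current_ap) > LOWER_THRESHOLD:
        return None                                   # current link still good
    # Pick the strongest neighbouring AP that exceeds the upper threshold.
    best = max(candidate_aps, key=read_rssi, default=None)
    if best is not None and read_rssi(best) >= UPPER_THRESHOLD:
        return best                                   # leave the Hold state with this AP
    return None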
When the BT does not seek access service from the Access
Point it should inform the Access Point that it no longer
wants a connection. A connection could break down without
prior warning. In the Bluetooth specification, both the master
and slave use the link supervision time to supervise the loss.
The supervision timeout period is chosen so that the value is
longer than the hold periods.
For simplicity of discussion we assumed that the Bluetooth
AP is the application sender and divided the architecture into
two parts: a wired part, from the correspondent node (CN) to the
Bluetooth AP, and a wireless part, from the Bluetooth AP to the mobile
BT device. The wired part is the same as the current
general mechanism. We will only discuss the mechanism
that combines the wireless part and our handoff mechanism
in the Bluetooth environment. After making a connection and
switching roles as mentioned earlier, the Bluetooth AP becomes
a master. As illustrated in figure 9, the Bluetooth AP1
sends an active PATH message to the BT in figure 9(1). After
the BT receives the PATH message, if the BT needs RSVP
support, it sends a resource reservation request with a RESV
message to the AP1 in figure 9(2). When the RSVP message
contains a traffic specification, the high layer sends the traffic and QoS
requirements for the QoS flow as a request to
the Resource Requester (RR). The Bluetooth lower layers will
thus coordinate with one another.
Figure 7. Connection procedures.
Figure 8. Handoff procedures.
Figure 9. RSVP messages for bluetooth resource reservation.
Once the Bluetooth AP1 accepts the request, the reservation
along the flow between the AP1 and BT is made. After
this point, the Bluetooth AP1 must have reservations in
the neighboring APs. The resource reservation request is the
same as an active reservation. The current AP1 sends a Passive
PATH message to the neighboring AP2. The AP2 responds
with a Passive RESV message and reserves the resources
that the BT may need. Because Bluetooth can have
only seven active slaves in a piconet at the same time, the
resources must be used efficiently. One way is adding more
Bluetooth devices in an AP. This can be easily achieved by
modifying the application layer.
As discussed before, to support seamless handoffs, information
about the BT is sent to the AP2 after the RSVP process
is performed. The Hold state is maintained between the BT
and AP2. However, if the BT does not need a QoS guarantee,
the QoS mechanism is not added. After supporting end-to-end
QoS using the RSVP protocol, the packet classification
and scheduler can be used to control the traffic.
Figure 10. The protocol stack for Bluetooth-IP interworking.
3.3. Bluetooth-IP access system
The difference between existing Bluetooth piconets and IP
LANs is the communication protocol stack illustrated in figure
10. From figure 10, these differences appear in the
lower layers of the OSI seven-layer model. The lower layers are responsible for
connection and addressing. We therefore focused on the connection
management and address resolution issues in our research
.
The access function allows connections to be established
without requiring any particular knowledge or interaction.
The Bluetooth-IP access system plays the role of bridging/routing
multimedia traffic between the various LANs and
piconets. Their operational scenario is illustrated in figure 11.
When different piconet devices are connected directly to
the access system, the access system function is referred to
as a bridge (piconet Master and Slave role). If a LAN host
(piconet BT) communicates with a piconet BT (LAN host)
through the access system, because of the different protocol
stacks between the piconet and LAN networks, two issues,
addressing and connection, must be resolved in the design.
Figure 11. Bluetooth-IP interworking operational scenario.
3.3.1. Address resolution
To achieve the interconnection function in a Bluetooth-IP environment,
the access system must belong to both the piconet
and the LAN as a member. Thus, two different protocol
stacks must be combined to form a new communication protocol
stack suitable as a routing server. The combined protocol
stack is shown in figure 10. Using the protocol stack specifications
, a scenario for addressing is identified as follows:
Step 1: Each LAN host is assigned an IP address. Each host thus
possesses two addresses: an IP address and a MAC
address.
Step 2: Each piconet BT acquires two addresses, a BT address
(BD_ADDR) and an IP address.
Step 3: A routing table must be built for interconnection in
the access system.
A lookup for destination addresses
is needed to find the corresponding outbound
BD_ADDR or MAC address to which the packet
must be forwarded.
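A toy version of such a routing table is sketched below. The addresses are fabricated and the lookup is deliberately simplistic; it only illustrates the IP-to-BD_ADDR/MAC mapping described in Steps 1-3.

# Illustrative routing table for the access system (fabricated addresses).
routing_table = {
    # destination IP        -> (outbound interface, link-layer address)
    "192.168.0.10": ("piconet", "00:11:22:33:44:55"),   # BD_ADDR of a slave BT
    "192.168.0.20": ("lan",     "66:77:88:99:aa:bb"),   # MAC address of a LAN host
}

def forward(dst_ip, packet, send_bnep, send_ethernet):
    interface, link_addr = routing_table[dst_ip]
    if interface == "piconet":
        send_bnep(link_addr, packet)       # encapsulate for the Bluetooth side
    else:
        send_ethernet(link_addr, packet)   # normal Ethernet framing on the LAN side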
3.3.2. Connection management
Because the existing LAN is a packet switching service and
BT connections are made on an ad hoc basis, interconnection
is very difficult. To solve this problem, a mechanism based
on IP services over Bluetooth was proposed in the Bluetooth
specification. The system is as follows.
Step 1: Bluetooth adapters and an Ethernet card are embedded
into a desktop computer. These adapters and card
are referred to as network attachment units (wired or
radio). Each port in the interface is directly attached
to different networks (see figure 12).
Figure 12. The Bluetooth-IP access system.
Step 2: The API of the Bluetooth adapter and the Ethernet
packet driver are used to design an access system.
The operational procedures for this system follow the
scenario in figure 11.
Performance analysis
The Bluehoc toolkit was used to simulate the various scenarios
in the Bluetooth-IP service environment. As depicted
in figure 13, the data is dumped from the L2CA_DataWrite
and L2CA_DataRead into the connection state. The
L2CA_DataWrite and L2CA_DataRead events are events at the interface
between the upper layer and the L2CAP layer. The L2CA_DataRead is the
event that requests transfer of received data from the L2CAP
entity to the upper layer. The L2CA_DataWrite is the event
that requests data transfer from the upper layer to the L2CAP
entity for transmission over an open channel.
Figure 13. L2CAP packet flow.
In the Bluehoc toolkit the connection procedures such as
inquiry and paging are simulated according to the Bluetooth
specifications. The master sends the QoS parameters, which
depend on the application. QoS parameters are then passed
to the Deficit Round Robin (DRR) based scheduler that determines
if the connection can be accepted by the LMP. The
DRR-based scheduler finds the appropriate ACL link baseband
packet type (DM1, DM3, DM5, DH1, DH3 and DH5)
depending upon the application level MTU and loss sensitivity
. The simulation applications include packetized voice,
Telnet and FTP.
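The packet-type choice made by the scheduler can be pictured as a simple table lookup, as in the sketch below. The mapping shown (the payload capacities of the six ACL packet types and a preference for FEC-protected DM types when loss sensitivity is high) is a simplification for illustration and is not Bluehoc's actual code.

# Simplified stand-in for the DRR scheduler's packet-type choice (not Bluehoc's code).
ACL_TYPES = [                      # (name, payload bytes, FEC-protected?)
    ("DM1", 17, True), ("DH1", 27, False),
    ("DM3", 121, True), ("DH3", 183, False),
    ("DM5", 224, True), ("DH5", 339, False),
]

def choose_packet_type(mtu_bytes, loss_sensitive):
    """Pick the smallest ACL packet type that fits the MTU, preferring FEC (DM) types
    for loss-sensitive applications."""
    candidates = [t for t in ACL_TYPES if t[1] >= mtu_bytes]
    if not candidates:
        return "DH5"                               # fall back to the largest type
    if loss_sensitive:
        fec = [t for t in candidates if t[2]]
        candidates = fec or candidates
    return min(candidates, key=lambda t: t[1])[0]

print(choose_packet_type(100, loss_sensitive=True))    # e.g. DM3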
4.2. Simulation results
In figure 14 the average delay for various numbers of slaves
using packetized voice in a piconet is shown. The voice application
is real time and would be sensitive to a loss of several
consecutive small packets. Figure 15 shows the average delay
in the mixed traffic source. The delay of short packets, such
as those of telnet applications, is significantly increased by long packets
in the DRR-based scheduler. An efficient and simple
scheme that does not add to the Bluetooth load is important.
4.2.1. Queue length analysis
The queue length analysis was based on the access system
queue size. We observed the variations in queue length using
specified traffic levels. In figure 16, the queue length of the
traffic from 100 M LAN to 1 M piconet increases very fast.
It reaches 50000 packets within 10 s. The queue length for
the traffic from 10 M LAN to 1 M piconet increases more
smoothly and the queue length of the traffic from 1 M piconet
to 100 M or 10 M LANs is almost zero.
Figure 14. Average delay with voice services.
Figure 15. Average delay in the mixed traffic.
Figure 16. Queue length analysis.
4.2.2. Loss rate analysis
In the loss rate analysis the queue length was changed to observe
the variations in the loss rate. In figure 17, when the
queue length is smaller than 1000 packets, the loss rate is
close to 0.9. When the queue length increases to 5000 packets
, the loss rate decreases to 0.8. Therefore, increasing the
queue length will decrease the traffic loss rate. The loss rate
from 100M LAN to 1 M piconet is a little more than that for
10 M LAN to 1 M piconet because of the difference in the
LAN transport speed.
Figure 17. Loss rate analysis.
Figure 18. Throughput analysis.
4.2.3. Throughput analysis
From figure 18, increasing the queue length has no effect on
improving the throughput. The throughput is smooth in both
the LAN to piconet and piconet to LAN traffic.
4.2.4. Delay analysis
From figure 19, when the queue length is 500 packets, the
delay is about 0.0005 seconds per bit. If the queue length increases
to 1000 packets, the delay becomes almost double.
When the queue length reaches 5000 packets, the delay is
close to 0.003 seconds per bit. The transfer delay has no obvious
change when the queue length increases to 10000 packets.
Conclusions
In a wireless environment the QoS guarantee provision becomes
more important. The frequent mobility of a host increases
the service disruption in real-time applications. Even
though efficient RSVP enhances the resource reservation ability
and allows for requesting a specific QoS from the network,
these mechanisms do not have enough QoS guarantee to prevent
service disruption during handoff. In this paper a cascading
mechanism for QoS guarantee in a Bluetooth-IP environment
was proposed. A fast and efficient handoff scheme
that supports BT roaming handoffs between different Access
Points was proposed. Concepts for the mobile RSVP issues
in Bluetooth networks were presented with our fast handoff
mechanism. The Bluetooth-IP access system can be implemented
using available technology such as Network Addressing
Translation (NAT) and Linux Bluetooth Stack to connect
Bluetooth piconets and LAN. In our simulations Bluetooth
required a long time to process the inquiry and paging procedures.
These results show that the connection time is up to
11.84 sec when seven slaves join a piconet at the same time.
Moreover, the access system queue length increases by about
10000 packets per second in a 100 Mbps LAN and about 1000
packets per second in a 10 Mbps LAN when the traffic load is
80%. In the loss rate analysis, the loss rate was close to 90%
when the queue length was less than 1000 packets. However,
when the queue length increased to 10000 packets the
loss rate decreased to 70%. In the delay analysis, the delay
was about 0.000045 seconds per bit when the queue length
was 500 packets. The delay doubles when the queue length
doubles. However, when the queue length is more than 5000
packets the delay has no obvious variance.
Figure 19. Delay analysis.
Acknowledgement
This paper is a partial result of project no. NSC-90-2218-E-259-006
conducted by National Dong Hwa University under
the sponsorship of National Science Council, ROC.
References
[1] M. Albrecht, M. Frank, P. Martini, M. Schetelig, A. Vilavaara and A. Wenzel, IP service over bluetooth: Leading the way to a new mobility, in: Proceedings of the International Conference on Local Computer Networks (1999) pp. 143-154.
[2] S. Baatz, M. Frank, R. Gopffarth, D. Kassatkine, P. Martini, M. Schetelig and A. Vilavaara, Handoff support for mobility with IP over bluetooth, in: Proceedings of the 25th Annual IEEE Conference on Local Computer Networks, USA (2000) pp. 143-154.
[3] J.L. Chen and K.C. Yen, Transparent bridging support for bluetooth-IP service interworking, International Journal of Network Management 12 (2002) 379-386.
[4] J.L. Chen, A.P. Shu, H.W. Tzeng and P.T. Lin, Fair scheduling for guaranteed services on personal area networks, in: Proceedings of 2002 International Conference on Communications, Circuits and Systems, China (2002) pp. 440-444.
[5] L. Ching and K.Y. Siu, A bluetooth scatternet formation algorithm, in: Proceedings of IEEE Global Telecommunications Conference, Vol. 5 (2001) pp. 2864-2869.
[6] A. Das, A. Ghose, A. Razdan, H. Saran and R. Shorey, Enhancing performance of asynchronous data traffic over the bluetooth wireless ad-hoc network, in: Proceedings of the IEEE INFOCOM (2001) pp. 591-600.
[7] D. Famolari and P. Agrawal, Architecture and performance of an embedded IP bluetooth personal area network, in: Proceedings of the International Conference on Personal Wireless Communications, India (2000) pp. 75-79.
[8] M. Frank, R. Goepffarth, W. Hansmann and U. Mueller, Transmission of native IPv6 over bluetooth, http://www.ispras.ru/~ipv6/docs/draft-hansmann-6overbt-00.txt
[9] C. Haartsen and S. Mattisson, Bluetooth - a new low-power radio interface providing short-range connectivity, Proceedings of the IEEE 88(10) (2000) 1651-1661.
[10] G. Ivano, D. Paolo and F. Paolo, The role of Internet technology in future mobile data systems, IEEE Communications Magazine 38(11) (2000) 68-73.
[11] P. Johansson, R. Kapoor, M. Kazantzidis and M. Gerla, Rendezvous scheduling in bluetooth scatternets, in: Proceedings of IEEE International Conference on Communications, Vol. 1 (2002) pp. 318-324.
[12] D.J.Y. Lee and W.C.Y. Lee, Ricocheting bluetooth, in: Proceedings of the 2nd International Conference on Microwave and Millimeter Wave Technology (2000) pp. 432-435.
[13] D.G. Leeper, A long-term view of short-range wireless, IEEE Computer 34(6) (2001) 39-44.
[14] Y.B. Lin and I. Chlamtac, Wireless and Mobile Network Architectures (Wiley, 2000).
[15] I. Mahadevan and K.M. Sivalingam, An architecture and experimental results for quality of service in mobile networks using RSVP and CBQ, ACM/Baltzer Wireless Networks Journal 6(3) (2000) 221-234.
[16] B. Raman, P. Bhagwat and S. Seshan, Arguments for cross-layer optimizations in Bluetooth scatternets, in: Proceedings of 2001 Symposium on Applications and the Internet (2001) pp. 176-184.
[17] T.S. Rappaport, Wireless Communications Principles and Practice, 2nd ed. (2002).
[18] T. Salonidis, P. Bhagwat, L. Tassiulas and R. LaMaire, Distributed topology construction of bluetooth personal area networks, in: Proceedings of the IEEE INFOCOM (2001) pp. 1577-1586.
[19] The Bluetooth Special Interest Group, Bluetooth Network Encapsulation Protocol Specification (2001).
[20] The Bluetooth Special Interest Group, Documentation available at http://www.bluetooth.com/techn/index.asp
[21] The Bluetooth Special Interest Group, Quality of service in bluetooth networking, http://ing.ctit.utwente.nl/WU4/Documents/Bluetooth_QoS_ING_A_part_I.pdf
[22] V. Zaruba, S. Basagni and I. Chlamtac, Bluetrees - scatternet formation to enable bluetooth-based ad hoc networks, in: Proceedings of IEEE International Conference on Communications, Vol. 1 (2001) pp. 273-277.
Wah-Chun Chan received the Ph.D. degree from
the University of British Columbia in 1965. He is currently
a Visiting Professor in the Department of
Computer Science at National Chiao Tung University.
Dr. Chan's research interest has been in the areas
of queueing theory and telecommunication networks.
His research on telecommunication networks
has been in the development of models for the performance
analysis of computer communication networks.
Jiann-Liang Chen received the Ph.D. degree in
electrical engineering from National Taiwan University,
Taipei, Taiwan in 1989. Since August 1997,
he has been with the Department of Computer Science
and Information Engineering of National Dong
Hwa University, where he is a Professor now. His
current research interests are directed at cellular mobility
management and personal communication systems | handoff;quality of service;Bluetooth-IP access system;BNEP protocol;resource allocation |
159 | Web Question Answering: Is More Always Better? | This paper describes a question answering system that is designed to capitalize on the tremendous amount of data that is now available online. Most question answering systems use a wide variety of linguistic resources. We focus instead on the redundancy available in large corpora as an important resource. We use this redundancy to simplify the query rewrites that we need to use, and to support answer mining from returned snippets. Our system performs quite well given the simplicity of the techniques being utilized. Experimental results show that question answering accuracy can be greatly improved by analyzing more and more matching passages. Simple passage ranking and n-gram extraction techniques work well in our system making it efficient to use with many backend retrieval engines. | INTRODUCTION
Question answering has recently received attention from the
information retrieval, information extraction, machine learning,
and natural language processing communities [1][3][19][20]. The
goal of a question answering system is to retrieve `answers' to
questions rather than full documents or even best-matching
passages as most information retrieval systems currently do. The
TREC Question Answering Track which has motivated much of
the recent work in the field focuses on fact-based, short-answer
questions such as "Who killed Abraham Lincoln?" or "How tall is
Mount Everest?". In this paper we focus on this kind of question
answering task, although the techniques we propose are more
broadly applicable.
The design of our question answering system is motivated by
recent observations in natural language processing that, for many
applications, significant improvements in accuracy can be attained
simply by increasing the amount of data used for learning.
Following the same guiding principle we take advantage of the
tremendous data resource that the Web provides as the backbone
of our question answering system. Many groups working on
question answering have used a variety of linguistic resources:
part-of-speech tagging, syntactic parsing, semantic relations,
named entity extraction, dictionaries, WordNet, etc. (e.g.,
[2][8][11][12][13][15][16]). We chose instead to focus on the
Web as a gigantic data repository with tremendous redundancy that
can be exploited for question answering. The Web, which is
home to billions of pages of electronic text, is orders of magnitude
larger than the TREC QA document collection, which consists of
fewer than 1 million documents. This is a resource that can be
usefully exploited for question answering. We view our
approach as complementary to more linguistic approaches, but
have chosen to see how far we can get initially by focusing on
data per se as a key resource available to drive our system design.
Automatic QA from a single, small information source is
extremely challenging, since there is likely to be only one answer
in the source to any user's question. Given a source, such as the
TREC corpus, that contains only a relatively small number of
formulations of answers to a query, we may be faced with the
difficult task of mapping questions to answers by way of
uncovering complex lexical, syntactic, or semantic relationships
between question string and answer string. The need for anaphor
resolution and synonymy, the presence of alternate syntactic
formulations, and indirect answers all make answer finding a
potentially challenging task. However, the greater the answer
redundancy in the source data collection, the more likely it is that
we can find an answer that occurs in a simple relation to the
question, and therefore the less likely it is that we will need to resort
to solving the aforementioned difficulties facing natural language
processing systems.
EXPLOITING REDUNDANCY FOR QA
We take advantage of the redundancy (multiple, differently
phrased, answer occurrences) available when considering massive
amounts of data in two key ways in our system.
Enables Simple Query Rewrites.
The greater the number of
information sources we can draw from, the easier the task of
rewriting the question becomes, since the answer is more likely to
be expressed in different manners. For example, consider the
difficulty of gleaning an answer to the question "Who killed
Abraham Lincoln?" from a source which contains only the text
"John Wilkes Booth altered history with a bullet. He will forever
be known as the man who ended Abraham Lincoln's life,"
[Figure 1. System Architecture: Question -> Rewrite Query -> <Search Engine> -> Collect Summaries, Mine N-grams -> Filter N-Grams -> Tile N-Grams -> N-Best Answers. Worked example: "Where is the Louvre Museum located?" is rewritten to "+the Louvre Museum +is located", "+the Louvre Museum +is +in", "+the Louvre Museum +is near", "+the Louvre Museum +is" and the backoff Louvre AND Museum AND near, yielding in Paris France 59%, museums 12%, hostels 10%.]
versus a source that also contains the transparent answer string,
"John Wilkes Booth killed Abraham Lincoln."
Facilitates Answer Mining.
Even when no obvious answer
strings can be found in the text, redundancy can improve the
efficacy of question answering. For instance, consider the
question: "How many times did Bjorn Borg win Wimbledon?"
Assume the system is unable to find any obvious answer strings,
but does find the following sentences containing "Bjorn Borg"
and "Wimbledon", as well as a number:
(1) Bjorn Borg blah blah Wimbledon blah blah 5 blah
(2) Wimbledon blah blah blah Bjorn Borg blah 37 blah.
(3) blah Bjorn Borg blah blah 5 blah blah Wimbledon
(4) 5 blah blah Wimbledon blah blah Bjorn Borg.
By virtue of the fact that the most frequent number in these
sentences is 5, we can posit that as the most likely answer.
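A minimal sketch of this frequency-based answer mining is shown below; the function name and the digit-matching regular expression are illustrative assumptions rather than components of the described system.

    import re
    from collections import Counter

    def most_frequent_number(sentences):
        """Count numeric tokens across sentences and return the most common one."""
        counts = Counter()
        for s in sentences:
            counts.update(re.findall(r"\b\d+\b", s))
        return counts.most_common(1)[0][0] if counts else None

    sentences = [
        "Bjorn Borg blah blah Wimbledon blah blah 5 blah",
        "Wimbledon blah blah blah Bjorn Borg blah 37 blah.",
        "blah Bjorn Borg blah blah 5 blah blah Wimbledon",
        "5 blah blah Wimbledon blah blah Bjorn Borg.",
    ]
    print(most_frequent_number(sentences))   # -> "5"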
RELATED WORK
Other researchers have recently looked to the web as a resource
for question answering. The Mulder system described by Kwok
et al. [14] is similar to our approach in several respects. For each
question, Mulder submits multiple queries to a web search engine
and analyzes the results. Mulder does sophisticated parsing of the
query and the full-text of retrieved pages, which is far more
complex and compute-intensive than our analysis. They also
require global idf term weights for answer extraction and
selection, which requires local storage of a database of term
weights. They have done some interesting user studies of the
Mulder interface, but they have not evaluated it with TREC
queries nor have they looked at the importance of various system
components.
Clarke et al. [9][10] investigated the importance of redundancy in
their question answering system. In [9] they found that the best
weighting of passages for question answering involves using both
passage frequency (what they call redundancy) and a global idf
term weight. They also found that analyzing more top-ranked
passages was helpful in some cases and not in others. Their
system builds a full-content index of a document collection, in
this case TREC. In [10] they use web data to reinforce the scores
of promising candidate answers by providing additional
redundancy, with good success. Their implementation requires
an auxiliary web corpus be available for full-text analysis and
global term weighting. In our work, the web is the primary
source of redundancy and we operate without a full-text index of
documents or a database of global term weights.
Buchholz's Shapaqa NLP system [7] has been evaluated on both
TREC and Web collections. Question answering accuracy was
higher with the Web collection (although both runs were poor in
absolute terms), but few details about the nature of the differences
are provided.
These systems typically perform complex parsing and entity
extraction for both queries and best matching web pages ([7][14]),
which limits the number of web pages that they can analyze in
detail. Other systems require term weighting for selecting or
ranking the best-matching passages ([10][14]) and this requires
auxiliary data structures. Our approach is distinguished from
these in its simplicity (simple rewrites and string matching) and
efficiency in the use of web resources (use of only summaries and
simple ranking). We now describe how our system uses
redundancy in detail and evaluate this systematically.
SYSTEM OVERVIEW
A flow diagram of our system is shown in Figure 1. The system
consists of four main components.
Rewrite Query. Given a question, the system generates a number
of rewrite strings, which are likely substrings of declarative
answers to the question. To give a simple example, from the
question "When was Abraham Lincoln born?" we know that a
likely answer formulation takes the form "Abraham Lincoln was
born on <DATE>". Therefore, we can look through the collection
of documents, searching for such a pattern.
We first classify the question into one of seven categories, each of
which is mapped to a particular set of rewrite rules. Rewrite rule
sets range in size from one to five rewrite types. The output of
the rewrite module is a set of 3-tuples of the form [string,
L/R/-, weight], where "string" is the reformulated search query,
"L/R/-" indicates the position in the text where we expect to find
the answer with respect to the query string (to the left, right or
anywhere) and "weight" reflects how much we prefer answers found
with this particular query. The idea behind
using a weight is that answers found using a high precision query
(e.g., "Abraham Lincoln was born on") are more likely to be
correct than those found using a lower precision query (e.g.,
"Abraham" AND "Lincoln" AND "born").
We do not use a parser or part-of-speech tagger for query
reformulation, but do use a lexicon in order to determine the
possible parts-of-speech of a word as well as its morphological
variants. We created the rewrite rules and associated weights
manually for the current system, although it may be possible to
learn query-to-answer reformulations and weights (e.g. see
Agichtein et al. [4]; Radev et al. [17]).
The rewrites generated by our system are simple string-based
manipulations. For instance, some question types involve query
rewrites with possible verb movement; the verb "is" in the
question "Where is the Louvre Museum located?" should be
moved in formulating the desired rewrite to "The Louvre Museum
is
located in". While we might be able to determine where to
move a verb by analyzing the sentence syntactically, we took a
much simpler approach. Given a query such as "Where is w
1
w
2
... w
n
", where each of the w
i
is a word, we generate a rewrite for
each possible position the verb could be moved to (e.g. "w
1
is w
2
... w
n
", "w
1
w
2
is ... w
n
", etc). While such an approach results in
many nonsensical rewrites (e.g. "The Louvre is Museum located
in"), these very rarely result in the retrieval of bad pages, and the
proper movement position is guaranteed to be found via
exhaustive search. If we instead relied on a parser, we would
require fewer query rewrites, but a misparse would result in the
proper rewrite not being found.
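A minimal sketch of this exhaustive verb-movement strategy, restricted for simplicity to questions of the form "Where is w1 w2 ... wn?", is given below; the function name and the restriction to a leading wh-word plus "is" are our own simplifications.

    def verb_movement_rewrites(question):
        """Generate one rewrite per possible landing position of the verb 'is',
        for questions of the form 'Where is w1 w2 ... wn?' (exhaustive, no parser)."""
        words = question.rstrip("?").split()
        rest = words[2:]                      # drop the wh-word and the verb itself
        return [" ".join(rest[:i] + ["is"] + rest[i:]) for i in range(1, len(rest) + 1)]

    print(verb_movement_rewrites("Where is the Louvre Museum located?"))
    # ['the is Louvre Museum located', 'the Louvre is Museum located',
    #  'the Louvre Museum is located', 'the Louvre Museum located is']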
For each query we also generate a final rewrite which is a backoff
to a simple ANDing of the non-stop words in the query. We
could backoff even further to ranking using a best-match retrieval
system which doesn't require the presence of all terms and uses
differential term weights, but we did not find that this was
necessary when using the Web as a source of data. There are an
average of 6.7 rewrites for the 500 TREC-9 queries used in the
experiments described below.
As an example, the rewrites for the query "Who created the
character of Scrooge?" are:
LEFT_5_"created +the character +of Scrooge"
RIGHT_5_"+the character +of Scrooge +was created
+by"
AND_2_"created" AND "+the character" AND "+of
Scrooge"
AND_1_"created" AND "character" AND "Scrooge"
To date we have used only simple string matching techniques.
Soubbotin and Soubbotin [18] have used much richer regular
expression matching to provide hints about likely answers, with
very good success in TREC 2001, and we could certainly
incorporate some of these ideas in our rewrites. Note that many
of our rewrites require the matching of stop words like "in" and
"the", in the above example. In our system stop words are
important indicators of likely answers, and we do not ignore them
as most ranked retrieval systems do, except in the final backoff
AND rewrite.
The query rewrites are then formulated as search engine queries
and sent to a search engine from which page summaries are
collected and analyzed.
Mine N-Grams. From the page summaries returned by the search
engine, n-grams are mined. For reasons of efficiency, we use
only the returned summaries and not the full-text of the
corresponding web page. The returned summaries contain the
query terms, usually with a few words of surrounding context. In
some cases, this surrounding context has truncated the answer
string, which may negatively impact results. The summary text is
then processed to retrieve only strings to the left or right of the
query string, as specified in the rewrite triple.
1-, 2-, and 3-grams are extracted from the summaries. An N-gram
is scored according to the weight of the query rewrite that retrieved
it. These scores are summed across the summaries that contain
the n-gram (which is the opposite of the usual inverse document
frequency component of document/passage ranking schemes).
We do not count frequency of occurrence within a summary (the
usual tf component in ranking schemes). Thus, the final score for
an n-gram is based on the rewrite rules that generated it and the
number of unique summaries in which it occurred. When
searching for candidate answers, we enforce the constraint that at
most one stopword is permitted to appear in any potential n-gram
answers.
The top-ranked n-grams for the Scrooge query are:
Dickens 117
Christmas Carol 78
Charles Dickens 75
Disney 72
Carl Banks 54
A Christmas 41
uncle 31
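The mining and scoring step described above can be sketched as follows, assuming each snippet arrives paired with the weight of the rewrite that retrieved it; the stopword list and the example snippets are illustrative only.

    from collections import defaultdict

    STOPWORDS = {"the", "of", "a", "in", "is", "was", "to", "and"}   # illustrative list

    def score_ngrams(snippets, max_n=3, max_stopwords=1):
        """Sum rewrite weights over the unique snippets containing each 1-/2-/3-gram;
        within-snippet frequency is ignored, and n-grams with too many stopwords are dropped."""
        scores = defaultdict(float)
        for text, weight in snippets:
            words = text.lower().split()
            seen = set()
            for n in range(1, max_n + 1):
                for i in range(len(words) - n + 1):
                    gram = tuple(words[i:i + n])
                    if sum(w in STOPWORDS for w in gram) > max_stopwords:
                        continue
                    seen.add(gram)
            for gram in seen:          # count each snippet at most once per n-gram
                scores[gram] += weight
        return sorted(scores.items(), key=lambda kv: -kv[1])

    snippets = [("Charles Dickens wrote A Christmas Carol", 5),
                ("the character of Scrooge was created by Charles Dickens", 5),
                ("Disney's Scrooge McDuck was created by Carl Barks", 1)]
    print(score_ngrams(snippets)[:3])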
Filter/Reweight N-Grams.
Next, the n-grams are filtered and
reweighted according to how well each candidate matches the
expected answer-type, as specified by a handful of handwritten
filters. The system uses filtering in the following manner. First,
the query is analyzed and assigned one of seven question types,
such as who-question, what-question, or how-many-question.
Based on the query type that has been assigned, the system
determines what collection of filters to apply to the set of potential
answers found during n-gram harvesting. The answers are
analyzed for features relevant to the filters, and then rescored
according to the presence of such information.
A collection of approximately 15 filters were developed based on
human knowledge about question types and the domain from
which their answers can be drawn. These filters used surface
string features, such as capitalization or the presence of digits, and
consisted of handcrafted regular expression patterns.
After the system has determined which filters to apply to a pool of
candidate answers, the selected filters are applied to each
candidate string and used to adjust the score of the string. In most
cases, filters are used to boost the score of a potential answer
when it has been determined to possess the features relevant to the
query type. In other cases, filters are used to remove strings from
the candidate list altogether. This type of exclusion was only
performed when the set of correct answers was determined to be a
293
closed set (e.g. "Which continent....?") or definable by a set of
closed properties (e.g. "How many...?").
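The filtering step can be sketched as follows; the two regular-expression filters shown are illustrative stand-ins for the roughly 15 handcrafted filters, which are not listed in the paper.

    import re

    # Illustrative filters only; the actual filter set is handcrafted and unpublished.
    FILTERS = {
        "who-question":      [(re.compile(r"^[A-Z][a-z]+( [A-Z][a-z]+)+$"), 2.0)],  # capitalized name
        "how-many-question": [(re.compile(r"^\d+$"), 2.0)],                         # bare number
    }

    def reweight(candidates, question_type):
        """Boost candidates whose surface form matches the filters for this question type."""
        out = []
        for text, score in candidates:
            for pattern, boost in FILTERS.get(question_type, []):
                if pattern.match(text):
                    score *= boost
            out.append((text, score))
        return sorted(out, key=lambda c: -c[1])

    print(reweight([("Charles Dickens", 75), ("uncle", 31)], "who-question"))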
Tile N-Grams.
Finally, we applied an answer tiling algorithm,
which both merges similar answers and assembles longer answers
out of answer fragments. Tiling constructs longer n-grams from
sequences of overlapping shorter n-grams. For example, "A B C"
and "B C D" is tiled into "A B C D." The algorithm proceeds
greedily from the top-scoring candidate - all subsequent
candidates (up to a certain cutoff) are checked to see if they can
be tiled with the current candidate answer. If so, the higher
scoring candidate is replaced with the longer tiled n-gram, and the
lower scoring candidate is removed. The algorithm stops only
when no n-grams can be further tiled.
The top-ranked n-grams after tiling for the Scrooge query are:
Charles Dickens 117
A Christmas Carol 78
Walt Disney's uncle 72
Carl Banks 54
uncle 31
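The greedy tiling described above can be sketched as follows; this version checks all remaining candidates rather than applying the cutoff mentioned in the text, and the function names are ours.

    def tile(a, b):
        """Merge a and b if a suffix of a overlaps a prefix of b; return None otherwise."""
        wa, wb = a.split(), b.split()
        for k in range(min(len(wa), len(wb)), 0, -1):
            if wa[-k:] == wb[:k]:
                return " ".join(wa + wb[k:])
        return None

    def tile_answers(candidates):
        """Greedy answer tiling over (text, score) pairs, highest score first."""
        cands = sorted(candidates, key=lambda c: -c[1])
        changed = True
        while changed:
            changed = False
            for i in range(len(cands)):
                for j in range(i + 1, len(cands)):
                    merged = tile(cands[i][0], cands[j][0]) or tile(cands[j][0], cands[i][0])
                    if merged:
                        cands[i] = (merged, cands[i][1])   # keep the higher score
                        del cands[j]
                        changed = True
                        break
                if changed:
                    break
        return cands

    print(tile_answers([("A B C", 10), ("B C D", 7)]))   # [('A B C D', 10)]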
Our system works most efficiently and naturally with a backend
retrieval system that returns best-matching passages or query-relevant
document summaries. We can, of course, post-process
the full text of matching documents to extract summaries for n-gram
mining, but this is inefficient especially in Web applications
where the full text of documents would have to be downloaded
over the network at query time.
EXPERIMENTS
For our experimental evaluations we used the first 500 TREC-9
queries (201-700) [19]. For simplicity we ignored queries which
are syntactic rewrites of earlier queries (701-893), although
including them does not change the results in any substantive
way. We used the patterns provided by NIST for automatic
scoring. A few patterns were slightly modified to accommodate
the fact that some of the answer strings returned using the Web
were not available for judging in TREC-9. We did this in a very
conservative manner allowing for more specific correct answers
(e.g., Edward J. Smith vs. Edward Smith) but not more general
ones (e.g., Smith vs. Edward Smith), and simple substitutions
(e.g., 9 months vs. nine months). These changes influence the
absolute scores somewhat but do not change relative performance,
which is our focus here.
Many of the TREC queries are time sensitive; that is, the correct
answer depends on when the question is asked. The TREC
database covers a period of time more than 10 years ago; the Web
is much more current. Because of this mismatch, many correct
answers returned from the Web will be scored as incorrect using
the TREC answer patterns. 10-20% of the TREC queries have
temporal dependencies. E.g., Who is the president of Bolivia?
What is the exchange rate between England and the U. S.? We
did not modify the answer key to accommodate these time
differences, because this is a subjective job and would make
comparison with earlier TREC results impossible.
For the main Web retrieval experiments we used Google as a
backend because it provides query-relevant summaries that make
our n-gram mining techniques more efficient. Thus we have
access to more than 2 billion web pages. For some experiments in
TREC retrieval we use the standard QA collection consisting of
news documents from Disks 1-5. The TREC collection has just
under 1 million documents [19].
All runs are completely automatic, starting with queries and
generating a ranked list of 5 candidate answers. Candidate
answers are a maximum of 50 bytes long, and typically much
shorter than that. We report the Mean Reciprocal Rank (MRR) of
the first correct answer, the Number of Questions Correctly
Answered (NumCorrect), and the Proportion of Questions
Correctly Answered (PropCorrect). Correct answers at any rank
are included in the number and proportion correct measures.
Most correct answers are at the top of the list -- 70% of the correct
answers occur in the first position and 90% in the first or second
positions.
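The evaluation measures can be computed as in the following sketch, assuming each question has a ranked candidate list and an NIST-style answer pattern; the example questions and patterns are illustrative.

    import re

    def evaluate(ranked_answers, patterns):
        """Compute MRR, NumCorrect and PropCorrect; ranked_answers[q] is the ranked
        candidate list for question q and patterns[q] its answer regex."""
        rr, correct = [], 0
        for q, answers in ranked_answers.items():
            rank = next((i + 1 for i, a in enumerate(answers)
                         if re.search(patterns[q], a, re.IGNORECASE)), None)
            rr.append(1.0 / rank if rank else 0.0)
            correct += rank is not None
        n = len(ranked_answers)
        return sum(rr) / n, correct, correct / n

    mrr, num, prop = evaluate(
        {"q1": ["John Wilkes Booth", "Lincoln"], "q2": ["1900", "29,035 feet"]},
        {"q1": r"John Wilkes Booth", "q2": r"29,0\d\d feet"})
    print(mrr, num, prop)   # 0.75 2 1.0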
Using our system with default settings for query rewrite weights,
number of summaries returned, etc. we obtain a MRR of 0.507
and answer 61% of the queries. The average answer length was
12 bytes, so the system is really returning short answers not
passages. This is very good performance and would place us near
the top of 50-byte runs for TREC-9. However, since we did not
take part in TREC-9 it is impossible to compare our results
precisely with those systems (e.g., we used TREC-9 for tuning our
TREC-10 system increasing our score somewhat, but we return
several correct answers that were not found in TREC-9 thus
decreasing our score somewhat).
Redundancy is used in two key ways in our data-driven approach.
First, the occurrence of multiple linguistic formulations of the
same answers increases the chances of being able to find an
answer that occurs within the context of a simple pattern match
with the query. Second, answer redundancy facilitates the process
of answer extraction by giving higher weight to answers that
occur more often (i.e., in more different document summaries).
We now evaluate the contributions of these experimentally.
5.1 Number of Snippets
We begin by examining the importance of redundancy in answer
extraction. To do this we vary the number of summaries
(snippets) that we get back from the search engine and use as
input to the n-gram mining process. Our standard system uses
100 snippets. We varied the number of snippets from 1 to 1000.
The results are shown in Figure 2.
Performance improves sharply as the number of snippets increases
from 1 to 50 (0.243 MRR for 1 snippet, 0.370 MRR for 5, 0.423
MRR for 10, and 0.501 for 50), somewhat more slowly after that
[Figure 2. MRR as a function of number of snippets returned (x-axis: number of snippets, 1 to 1000; y-axis: MRR, 0 to 0.6). TREC-9, queries 201-700.]
(peaking 0.514 MRR with 200 snippets), and then falling off
somewhat after that as more snippets are included for n-gram
analysis. Thus, over quite a wide range, the more snippets we
consider in selecting and ranking n-grams the better. We believe
that the slight drop at the high end is due to the increasing
influence that the weaker rewrites have when many snippets are
returned. The most restrictive rewrites return only a few matching
documents. Increasing the number of snippets increases the
number of the least restrictive matches (the AND matches), thus
swamping the restrictive matches. In addition, frequent n-grams
begin to dominate our rankings at this point.
An example of failures resulting from too many AND matches is
Query 594: What is the longest word in the English language?
For this query, there are 40 snippets matching the rewrite "is the
longest word in the English language" with weight 5, 40 more
snippets matching the rewrite "the longest word in the English
language is" with the weight 5, and more than 100 snippets
matching the backoff AND query ("longest" AND "word" AND
"English" AND "language") with a weight of 1. When 100
snippets are used, the precise rewrites contribute almost as many
snippets as the AND rewrite. In this case we find the correct
answer, "pneumonoultramicroscopicsilicovolcanokoniosis", in the
second rank. The first answer, "1909 letters long", which is
incorrect, also matches many precise rewrites such as "the longest
word in English is ## letters long", and we pick up on this.
When 1000 snippets are used, the weaker AND rewrites dominate
the matches. In this case, the correct answer falls to seventh on
the list after "letters long", "one syllable", "is screeched", "facts",
"stewardesses" and "dictionary", all of which occur commonly in
results from the least restrictive AND rewrite. A very common
AND match contains the phrase "the longest one-syllable word in
the English language is screeched", and this accounts for two of
our incorrect answers.
Using differential term weighting of answer terms, as many
retrieval systems do, should help overcome this problem to some
extent but we would like to avoid maintaining a database of global
term weights. Alternatively we could refine our weight
accumulation scheme to dampen the effects of many weakly
restrictive matches by sub-linear accumulation, and we are
currently exploring several alternatives for doing this.
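One hypothetical form such sub-linear accumulation could take (the text leaves the actual scheme open, so this is not the authors' method) is to damp each rewrite level's contribution logarithmically:

    import math
    from collections import Counter

    def sublinear_score(snippet_weights):
        """A hypothetical damped accumulation: group snippets by rewrite weight level and
        let each level contribute weight * log2(1 + count) instead of weight * count."""
        by_level = Counter(snippet_weights)          # e.g. {5: 80, 1: 1000}
        return sum(level * math.log2(1 + count) for level, count in by_level.items())

    # Linear accumulation: 80 precise (weight-5) snippets give 400, while 1000 AND
    # (weight-1) snippets give 1000 and dominate. Damped: about 31.7 vs. 10.0,
    # so the precise rewrites stay on top.
    print(sublinear_score([5] * 80 + [1] * 1000))    # ~41.7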
Our main results on snippet redundancy are consistent with those
reported by Clarke et al. [9], although they worked with the much
smaller TREC collection. They examined a subset of the TREC-9
queries requiring a person's name as the answer. They varied the
number of passages retrieved (which they call depth) from 25 to
100, and observed some improvements in MRR. When the
passages they retrieved were small (250 or 500 bytes) they found
improvement, but when the passages were larger (1000 or 2000
bytes) no improvements were observed. The snippets we used
are shorter than 250 bytes, so the results are consistent. Clarke et
al. [9] also explored a different notion of redundancy (which they
refer to as c_i). c_i is the number of different passages that match
an answer. Their best performance is achieved when both c_i and
term weighting are used to rank passages. We too use the number
of snippets that an n-gram occurs in. We do not, however, use
global term weights, but have tried other techniques for weighting
snippets as described below.
5.2 TREC vs. Web Databases
Another way to explore the importance of redundancy is to run
our system directly on the TREC documents. As noted earlier,
there are three orders of magnitude more documents on the Web
than in the TREC QA collection. Consequently, there will be
fewer alternative ways of saying the same thing and fewer
matching documents available for mining the candidate n-grams.
We suspect that this lack of redundancy will limit the success of
our approach when applied directly on TREC documents.
While corpus size is an obvious and important difference between
the TREC and Web collections there are other differences as well.
For example, text analysis, ranking, and snippet extraction
techniques will all vary somewhat in ways that we can not control.
To better isolate the size factor, we also ran our system against
another Web search engine.
For these experiments we used only the AND rewrites and looked
at the first 100 snippets. We had to restrict ourselves to AND
rewrites because some of the search engines we used do not
support the inclusion of stop words in phrases, e.g., "created +the
character +of Scrooge".
5.2.1 TREC Database
The TREC QA collection consists of just under 1 million
documents. We expect much less redundancy here compared to
the Web, and suspect that this will limit the success of our
approach. An analysis of the TREC-9 query set (201-700) shows
that no queries have 100 judged relevant documents. Only 10 of
the 500 questions have 50 or more relevant documents, which the
results in Figure 2 suggest are required for good system
performance. And a very large number of queries, 325, have
fewer than 10 relevant documents.
We used an Okapi backend retrieval engine for the TREC
collection. Since we used only Boolean AND rewrites, we did
not take advantage of Okapi's best match ranking algorithm.
However, most queries return fewer than 100 documents, so we
wind up examining most of the matches anyway.
We developed two snippet extraction techniques to generate
query-relevant summaries for use in n-gram mining. A
Contiguous technique returned the smallest window containing all
the query terms along with 10 words of context on either side.
Windows that were greater than 500 words were ignored. This
approach is similar to passage retrieval techniques albeit without
differential term weighting. A Non-Contiguous technique
returned the union of two word matches along with 10 words of
context on either side. Single words not previously covered are
included as well. The search engine we used for the initial Web
experiments returns both contiguous and non-contiguous snippets.
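A sketch of the Contiguous technique, under the assumption that documents are pre-tokenized into words, might look as follows; the parameter names are ours.

    def contiguous_snippet(doc_words, query_terms, context=10, max_window=500):
        """Return the smallest window containing all query terms, padded with `context`
        words on either side, or None if no window of at most `max_window` words exists."""
        terms = {t.lower() for t in query_terms}
        positions = [i for i, w in enumerate(doc_words) if w.lower() in terms]
        best = None
        for i in positions:
            found, j = {doc_words[i].lower()}, i
            while j + 1 < len(doc_words) and found != terms:
                j += 1
                if doc_words[j].lower() in terms:
                    found.add(doc_words[j].lower())
            if found == terms and j - i + 1 <= max_window:
                if best is None or (j - i) < (best[1] - best[0]):
                    best = (i, j)
        if best is None:
            return None
        lo, hi = max(0, best[0] - context), min(len(doc_words), best[1] + 1 + context)
        return " ".join(doc_words[lo:hi])

    words = "the Louvre Museum is located in Paris France near the Seine".split()
    print(contiguous_snippet(words, ["Louvre", "Paris"], context=2))
    # -> "the Louvre Museum is located in Paris France near"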
Figure 3 shows the results of this experiment.
    System                         MRR     NumCorrect   PropCorrect
    Web1                           0.450   281          0.562
    TREC, Contiguous Snippet       0.186   117          0.234
    TREC, Non-Contiguous Snippet   0.187   128          0.256
    (AND Rewrites Only, Top 100)
Figure 3: Web vs. TREC as data source
Our baseline system using all rewrites and retrieving 100 snippets
achieves 0.507 MRR (Figure 2). Using only the AND query
rewrites results in worse performance for our baseline system with
0.450 MRR (Figure 3). More noticeable than this difference is
the drop in performance of our system using TREC as a data
source compared to using the much larger Web as a data source.
MRR drops from 0.450 to 0.186 for contiguous snippets and
0.187 for non-contiguous snippets, and the proportion of
questions answered correctly drops from 56% to 23% for
contiguous snippets and 26% for non-contiguous snippets. It is
worth noting that the TREC MRR scores would still place this
system in the top half of the systems for the TREC-9 50-byte task,
even though we tuned our system to work on much larger
collections. However, we can do much better simply by using
more data. The lack of redundancy in the TREC collection
accounts for a large part of this drop off in performance. Clarke et
al. [10] have also reported better performance using the Web
directly for TREC 2001 questions.
We expect that the performance difference between TREC and the
Web would increase further if all the query rewrites were used.
This is because there are so few exact phrase matches in TREC
relative to the Web, and the precise matches improve performance
by 13% (0.507 vs. 0.450).
We believe that database size per se (and the associated
redundancy) is the most important difference between the TREC
and Web collections. As noted above, however, there are other
differences between the systems such as text analysis, ranking,
and snippet extraction techniques. While we can not control the
text analysis and ranking components of Web search engines, we
can use the same snippet extraction techniques. We can also use a
different Web search engine to mitigate the effects of a specific
text analysis and ranking algorithms.
5.2.2 Another Web Search Engine
For these experiments we used the MSNSearch search engine. At
the time of our experiments, the summaries returned were
independent of the query. So we retrieved the full text of the top
100 web pages and applied the two snippet extraction techniques
described above to generate query-relevant summaries. As before,
all runs are completely automatic, starting with queries, retrieving
web pages, extracting snippets, and generating a ranked list of 5
candidate answers. The results of these experiments are shown in
Figure 4. The original results are referred to as Web1 and the
new results as Web2.
    System                         MRR     NumCorrect   PropCorrect
    Web1                           0.450   281          0.562
    TREC, Contiguous Snippet       0.186   117          0.234
    TREC, Non-Contiguous Snippet   0.187   128          0.256
    Web2, Contiguous Snippet       0.355   227          0.454
    Web2, Non-Contiguous Snippet   0.383   243          0.486
    (AND Rewrites Only, Top 100)
Figure 4: Web vs. TREC as data source
The Web2 results are somewhat worse than the Web1 results, but
this is expected given that we developed our system using the
Web1 backend, and did not do any tuning of our snippet
extraction algorithms. In addition, we believe that the Web2
collection indexed somewhat less content than Web1 at the time
of our experiments, which should decrease performance in and of
itself. More importantly, the Web2 results are much better than
the TREC results for both snippet extraction techniques, almost
doubling MRR in both cases. Thus, we have shown that QA is
more successful using another large Web collection compared to
the small TREC collection. The consistency of this result across
Web collections points to size and redundancy as the key factors.
5.2.3 Combining TREC and Web
Given that the system benefits from having a large text collection
from which to search for potential answers, then we would expect
that combining both the Web and TREC corpus would result in
even greater accuracy. We ran two experiments to test this.
Because there was no easy way to merge the two corpora, we
instead combined the output of QA system built on each corpus.
For these experiments we used the original Web1 system and our
TREC system. We used only the AND query rewrites, looked at
the Top1000 search results for each rewrite, and used a slightly
different snippet extraction technique. For these parameter
settings, the base TREC-based system had a MRR of .262, the
Web-based system had a MRR of .416.
First, we ran an oracle experiment to assess the potential gain that
could be attained by combining the output of the Web-based and
TREC-based QA systems. We implemented a "switching oracle",
which decides for each question whether to use the output from
the Web-based QA system or the TREC-based QA system, based
upon which system output had a higher ranking correct answer.
The switching oracle had a MRR of .468, a 12.5% improvement
over the Web-based system. Note that this oracle does not
precisely give us an upper bound, as combining algorithms (such
as that described below) could re-order the rankings of outputs.
Next, we implemented a combining algorithm that merged the
outputs from the TREC-based and Web-based systems, by having
both systems vote on answers, where the vote is the score
assigned to a particular answer by the system. For voting, we
defined string equivalence such that if a string X is a substring of
Y, then a vote for X is also a vote for Y. The combined system
achieved a MRR of .433 (a 4.1% improvement over the Web-based
system) and answered 283 questions correctly.
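The combination step can be sketched as follows; the substring-based vote sharing follows the equivalence rule described above, while the normalization and tie-breaking details are our own assumptions.

    def combine(web_answers, trec_answers):
        """Merge two (answer, score) lists; a vote for a string X also counts for any
        candidate Y of which X is a substring."""
        pool = web_answers + trec_answers
        combined = []
        for ans, _ in pool:
            total = sum(score for other, score in pool
                        if other.lower() in ans.lower())   # votes for substrings of ans
            combined.append((ans, total))
        best = {}                                          # keep the best score per string
        for ans, score in combined:
            best[ans] = max(score, best.get(ans, 0))
        return sorted(best.items(), key=lambda kv: -kv[1])

    print(combine([("Charles Dickens", 117), ("Disney", 72)], [("Dickens", 40)]))
    # "Charles Dickens" collects the vote for its substring "Dickens"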
5.3 Snippet Weighting
Until now, we have focused on the quantity of information
available and less on its quality. Snippet weights are used in
ranking n-grams. An n-gram weight is the sum of the weights for
all snippets in which that n-gram appears.
Each of our query rewrites has a weight associated with it
reflecting how much we prefer answers found with this particular
query. The idea behind using a weight is that answers found
using a high precision query (e.g., "Abraham Lincoln was born
on") are more likely to be correct than those found using a lower
precision query (e.g., "Abraham" AND "Lincoln" AND "born").
Our current system has 5 weights.
These rewrite weights are the only source of snippet weighting in
our system. We explored how important these weights are and
considered several other factors that could be used as additional
sources of information for snippet weighting. Although we
specify Boolean queries, the retrieval engine can provide a
ranking, based on factors like link analyses, proximity of terms,
location of terms in the document, etc. So, different weights can
be assigned to matches at different positions in the ranked list.
We also looked at the number of matching terms in the best fixed
width window, and the window size of the smallest matching
passage as indicators of passage quality.
Rewrite Wts uses our heuristically determined rewrite weights as a
measure of the quality of a snippet. This is the current system
default. Equal Wts gives equal weight to all snippets regardless of
the rewrite rule that generated them. To the extent that more
precise rewrites retrieve better answers, we will see a drop in
performance when we make all weights equal. Rank Wts uses the
rank of the snippet as a measure of its quality, SnippetWt =
(100 - rank)/100. NMatch Wts uses the number of matching terms in a
fixed-width window as the measure of snippet quality. Length
Wts uses a measure of the length of the snippet needed to
encompass all query terms as the measure of snippet quality. We
also look at combinations of these factors. For example,
Rewrite+Rank Wts uses both rewrite weight and rank according to
the following formula, SnippetWt = RewriteScore + (100 - rank)/100.
All of these measures are available from query-relevant summaries
returned by the search engine and do not require analyzing the full
text of the document. The results of these experiments are
presented in Figure 5.
    Weighting                MRR     NumCorrect   PropCorrect
    Equal Wts                0.489   298          0.596
    Rewrite Wts (Default)    0.507   307          0.614
    Rank Wts                 0.483   292          0.584
    Rewrite + Rank Wts       0.508   302          0.604
    NMatch Wts               0.506   301          0.602
    Length Wts               0.506   302          0.604
Figure 5: Snippet Weighting
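The weighting variants with explicit formulas in the text (Equal, Rewrite, Rank and Rewrite+Rank) can be summarized in the following sketch; the NMatch and Length variants are described only qualitatively and are omitted, and the field names are ours.

    def snippet_weight(snippet, scheme="rewrite"):
        """Weighting variants compared in Figure 5; `rank` is the snippet's position in
        the ranked result list, `rewrite_score` the 5-level rewrite weight."""
        if scheme == "equal":
            return 1.0
        if scheme == "rewrite":
            return snippet["rewrite_score"]
        if scheme == "rank":
            return (100 - snippet["rank"]) / 100
        if scheme == "rewrite+rank":
            return snippet["rewrite_score"] + (100 - snippet["rank"]) / 100
        raise ValueError(scheme)

    s = {"rewrite_score": 5, "rank": 3}
    print([snippet_weight(s, w) for w in ("equal", "rewrite", "rank", "rewrite+rank")])
    # [1.0, 5, 0.97, 5.97]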
Our current default 5-level weighting scheme which reflects the
specificity of the query rewrites does quite well. Equal weighting
is somewhat worse, as we expected. Interestingly search engine
rank is no better for weighting candidate n-grams than equal
weighting. None of the other techniques we looked at surpasses
the default weights in both MRR and PropCorrect. Our heuristic
rewrite weights provide a simple and effective technique for
snippet weighting that can be used with any backend retrieval
engine.
Most question answering systems use IR-based measures of
passage quality, and do not typically evaluate the best measure of
similarity for purposes of extracting answers. Clarke et al. [9]
noted above is an exception. Soubbotin and Soubbotin [18]
mention different weights for different regular expression
matches, but they did not describe the mechanism in detail nor did
they evaluate how useful it is. Harabagiu et al. [11] have a kind
of backoff strategy for matching which is similar to weighting, but
again we do not know of parametric evaluations of its importance
in their overall system performance. The question of what kinds
of passages can best support answer mining for question
answering as opposed to document retrieval is an interesting one
that we are pursuing.
DISCUSSION AND FUTURE DIRECTIONS
The design of our question answering system was motivated by
the goal of exploiting the large amounts of text data that is
available on the Web and elsewhere as a useful resource. While
many question answering systems take advantage of linguistic
resources, fewer depend primarily on data. Vast amounts of data
provide several sources of redundancy that our system capitalizes
on. Answer redundancy (i.e., multiple, differently phrased,
answer occurrences) enables us to use only simple query rewrites
for matching, and facilitates the extraction of candidate answers.
We evaluated the importance of redundancy in our system
parametrically. First, we explored the relationship between the
number of document snippets examined and question answering
accuracy. Accuracy improves sharply as the number of snippets
included for n-gram analysis increases from 1 to 50, somewhat
more slowly after that peaking at 200 snippets, and then falls off
somewhat after that. More is better up to a limit. We believe that
we can increase this limit by improving our weight accumulation
algorithm so that matches from the least precise rewrites do not
dominate. Second, in smaller collections (like TREC), the
accuracy of our system drops sharply, although it is still quite
reasonable in absolute terms. Finally, snippet quality is less
important to system performance than snippet quantity. We have
a simple 5-level snippet weighting scheme based on the specificity
of the query rewrite, and this appears to be sufficient. More
complex weighting schemes that we explored were no more
useful.
The performance of our system shows promise for approaches to
question answering which make use of very large text databases
even with minimal natural language processing. Our system does
not need to maintain its own index nor does it require global term
weights, so it can work in conjunction with any backend retrieval
engine. Finally, since our system does only simple query
transformations and n-gram analysis, it is efficient and scalable.
One might think that our system has limited applicability, because
it works best with large amounts of data. But, this isn't
necessarily so. First, we actually perform reasonably well in the
smaller TREC collection, and could perhaps tune system
parameters to work even better in that environment. More
interestingly, Brill et al. [6] described a projection technique that
can be used to combine the wealth of data available on the Web
with the reliability of data in smaller sources like TREC or an
encyclopedia. The basic idea is to find candidate answers in a
large and possibly noisy source, and then expand the query to
include likely answers. The expanded queries can then be used
on smaller but perhaps more reliable collections either directly
to find support for the answer in the smaller corpus, or indirectly
as a new query which is issued and mined as we currently do.
This approach appears to be quite promising. Our approach
seems least applicable in applications that involve a small amount
of proprietary data. In these cases, one might need to do much
more sophisticated analyses to map user queries to the exact
lexical form that occurs in the text collection rather than depend
primarily on redundancy as we have done.
Although we have pushed the data-driven perspective, more
sophisticated language analysis might help as well by providing
more effective query rewrites or less noisy data for mining.
Most question answering systems contain aspects of both: we use
some linguistic knowledge in our small query typology and
answer filtering, and more sophisticated systems often use simple
pattern matching for things like dates, zip codes, etc.
There are a number of open questions that we hope to explore. In
the short term, we would like to look systematically at the
contributions of other system components. Brill et al. [5] have
started to explore individual components in more detail, with
interesting results. In addition, it is likely that we have made
several sub-optimal decisions in our initial implementation (e.g.,
omitting most stop words from answers, simple linear
accumulation of scores over matching snippets) that we would
like to improve. Most retrieval engines have been developed
with the goal of finding topically relevant documents. Finding
accurate answers may require somewhat different matching
infrastructure. We are beginning to explore how best to generate
snippets for use in answer mining. Finally, time is an interesting
issue. We noted earlier how the correct answer to some queries
changes over time. Time also has interesting implications for
using redundancy. For example, it would take a while for a news
or Web collection to correctly answer a question like "Who is the
U. S. President?" just after an election.
An important goal of our work is to get system designers to treat
data as a first class resource that is widely available and
exploitable. We have made good initial progress, and there are
several interesting issues remaining to explore.
REFERENCES
[1] AAAI Spring Symposium Series (2002). Mining Answers from Text and Knowledge Bases.
[2] S. Abney, M. Collins and A. Singhal (2000). Answer extraction. In Proceedings of the 6th Applied Natural Language Processing Conference (ANLP 2000), 296-301.
[3] ACL-EACL (2002). Workshop on Open-domain Question Answering.
[4] E. Agichtein, S. Lawrence and L. Gravano (2001). Learning search engine specific query transformations for question answering. In Proceedings of the 10th World Wide Web Conference (WWW10), 169-178.
[5] E. Brill, S. Dumais and M. Banko (2002). An analysis of the AskMSR question-answering system. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002).
[6] E. Brill, J. Lin, M. Banko, S. Dumais and A. Ng (2002). Data-intensive question answering. In Proceedings of the Tenth Text REtrieval Conference (TREC 2001).
[7] S. Buchholz (2002). Using grammatical relations, answer frequencies and the World Wide Web for TREC question answering. In Proceedings of the Tenth Text REtrieval Conference (TREC 2001).
[8] J. Chen, A. R. Diekema, M. D. Taffet, N. McCracken, N. E. Ozgencil, O. Yilmazel and E. D. Liddy (2002). Question answering: CNLP at the TREC-10 question answering track. In Proceedings of the Tenth Text REtrieval Conference (TREC 2001).
[9] C. Clarke, G. Cormack and T. Lynam (2001). Exploiting redundancy in question answering. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'2001), 358-365.
[10] C. Clarke, G. Cormack and T. Lynam (2002). Web reinforced question answering. In Proceedings of the Tenth Text REtrieval Conference (TREC 2001).
[11] S. Harabagiu, D. Moldovan, M. Pasca, R. Mihalcea, M. Surdeanu, R. Bunescu, R. Girju, V. Rus and P. Morarescu (2001). FALCON: Boosting knowledge for question answering. In Proceedings of the Ninth Text REtrieval Conference (TREC-9), 479-488.
[12] E. Hovy, L. Gerber, U. Hermjakob, M. Junk and C. Lin (2001). Question answering in Webclopedia. In Proceedings of the Ninth Text REtrieval Conference (TREC-9), 655-664.
[13] E. Hovy, U. Hermjakob and C. Lin (2002). The use of external knowledge in factoid QA. In Proceedings of the Tenth Text REtrieval Conference (TREC 2001).
[14] C. Kwok, O. Etzioni and D. Weld (2001). Scaling question answering to the Web. In Proceedings of the 10th World Wide Web Conference (WWW10), 150-161.
[15] M. A. Pasca and S. M. Harabagiu (2001). High performance question/answering. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'2001), 366-374.
[16] J. Prager, E. Brown, A. Coden and D. Radev (2000). Question answering by predictive annotation. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'2000), 184-191.
[17] D. R. Radev, H. Qi, Z. Zheng, S. Blair-Goldensohn, Z. Zhang, W. Fan and J. Prager (2001). Mining the web for answers to natural language questions. In Proceedings of the 2001 ACM CIKM: Tenth International Conference on Information and Knowledge Management, 143-150.
[18] M. M. Soubbotin and S. M. Soubbotin (2002). Patterns and potential answer expressions as clues to the right answers. In Proceedings of the Tenth Text REtrieval Conference (TREC 2001).
[19] E. Voorhees and D. Harman, Eds. (2001). Proceedings of the Ninth Text REtrieval Conference (TREC-9). NIST Special Publication 500-249.
[20] E. Voorhees and D. Harman, Eds. (2002). Proceedings of the Tenth Text REtrieval Conference (TREC 2001). NIST Special Publication 500-250.
16 | A Programming Languages Course for Freshmen | Programming languages are a part of the core of computer science. Courses on programming languages are typically offered to junior or senior students, and textbooks are based on this assumption. However, our computer science curriculum offers the programming languages course in the first year. This unusual situation led us to design it from an untypical approach. In this paper, we first analyze and classify proposals for the programming languages course into different pure and hybrid approaches. Then, we describe a course for freshmen based on four pure approaches, and justify the main choices made. Finally, we identify the software used for laboratories and outline our experience after teaching it for seven years. | INTRODUCTION
The topic of programming languages is a part of the core of
computer science. It played a relevant role in all the curricula
recommendations delivered by the ACM or the IEEE-CS since
the first Curriculum'68 [2]. Recent joint curricular
recommendations of the ACM and the IEEE-CS identified several
"areas" which structure the body of knowledge of the discipline.
The list of areas has grown since the first proposal made by the
Denning Report [4] up to 14 in the latest version, Computing
Curricula 2001 [12]. Programming languages has always been
one of these areas.
Internationally reputed curricular recommendations are a valuable
tool for the design of particular curricula. However, each country
has specific features that constrain the way of organizing their
studies. In Spain, the curriculum of a discipline offered by a
university is the result of a trade-off. On the one hand, the
university must at least offer a number of credits of the core
subject matters established by the Government. On the other
hand, the university may offer supplementary credits of the core
as well as mandatory and optional courses defined according to
the profile of the University and the faculty. Any proposal of a
new curriculum follows a well-established process: (1) the
curriculum is designed by a Center after consulting the
departments involved; (2) it must be approved by the University
Council; (3) the Universities Council of the Nation must deliver a
(positive) report; and (4) the curriculum is published in the
Official Bulletin of the Nation. This scheme has a number of
advantages, e.g. a minimum degree of coherence among all the
universities is guaranteed. However, it also has a number of
disadvantages, e.g. the process to change a curriculum is very
rigid.
The Universidad Rey Juan Carlos is a young university, now
seven years old. It has offered studies of computer science since the
very first year. The curriculum was designed by an external
committee, so the teachers of computer science thereafter hired by
the university did not have the opportunity to elaborate on it. The
curriculum had a few weak points that would recommend a light
reform, but the priorities of the new university postponed it.
The curriculum establishes the features of the "Foundations of
programming languages" course. The course is scheduled to last
for fifteen weeks, with three lecture hours per week and two
supervised laboratory hours per week. However, some flexibility
is allowed, so that some weeks may be released from the lab
component.
This course is both a strong and a weak feature of the curriculum.
It is a strong feature because programming languages are
marginal in the official core. Consequently, our curriculum is
closer to international recommendations than most Spanish
universities. However, it is a weak feature, because the course is
offered in the second semester of the first year! Notice that the
programming languages course is more typically offered as an
intermediate or advanced course in the third or fourth year.
Our problem was how to teach the programming languages course
to freshmen. The paper presents our design of the course and our
experience. In the second section we first analyze and classify
proposals for the programming languages course into different
pure and hybrid approaches. In section 3, we describe a course for
freshmen based on four pure approaches, and justify the choices
made with respect to the factors that most influenced its design.
Finally, we identify the software used for laboratories and outline
our experience after teaching it for seven years.
APPROACHES TO TEACHING PROGRAMMING LANGUAGES
Since Curriculum'68, several issues on programming languages
have received attention in the different curriculum
recommendations: particular programming languages, language
implementation, etc. It is formative to study (or to browse, at
least) such recommendations, even though the large number of
topics can be discouraging for the teacher.
In this section, we try to organize different contributions. Firstly,
we identify "pure" approaches to the programming languages
course. Secondly, we describe their implementation, usually as
hybrid approaches. Finally, we briefly discuss the issue of which
programming languages and paradigms to use for the course.
2.1 Pure Approaches
Probably, the best study on approaches to the programming
languages course was given by King [7]. He made a study of 15
textbooks and found 10 different goals. Furthermore, he identified
3 approaches on which these textbooks were based and discussed
a number of issues. We have extended his classification up to 5
approaches. Although most books and courses follow a hybrid
approach, it is clarifying to distinguish the following pure ones:
Catalogue approach. It provides a survey of several
programming languages. This approach has several
advantages: the student acquires a good education in several
languages, it allows studying the interaction among different
constructs of a language, and the languages may be studied in
chronological order. However, it also exhibits disadvantages:
there is much redundancy in studying similar features in
different languages, and there is no guarantee that the student
acquires a solid education.
Descriptive (or anatomic [5]) approach. Programming
languages have many common elements which can be
grouped into categories and studied independently. Typical
examples are: data types, hiding mechanisms, execution
control, etc. The advantages and disadvantages of this
approach are roughly the opposite of the previous one.
Paradigm approach. Although each language has different
characteristics and constructs, it is based on a basic model of
computation called programming paradigm. The paradigm
approach is an evolution of the descriptive approach described
above since it generalizes language constructs and groups
them consistently. Typical examples of paradigms are
functional, logic and imperative programming.
Formal approach. It studies the foundations of programming
languages, mainly their syntax and semantics. The main
advantage of this approach is that the student acquires a solid
conceptual background. However, it has the risk of being too
formal and therefore straying from the study of the
programming languages themselves.
Implementation approach. It comprises language processing
topics. This approach is usually adopted jointly with the
descriptive one, so that the run-time mechanisms that support
each language construct are also described. This allows
estimating the computational cost of each construct. However,
the student may associate each concept with a particular
implementation; this approach may also be in contradiction
with the idea that a high level language should be
understandable independently from its implementation.
2.2 Implementation of Pure Approaches
The formal and implementation approaches are the basis of two
well known and established courses: computation theory and
language processors. They are not studied in this paper as
standalone courses, but we consider their integration into the
programming languages course.
Despite these "pure" approaches, it is more common to adopt a
hybrid one, formed by a combination of several approaches. For
instance, we have explained that a descriptive course may address
the implementation of language constructs. The descriptive and
paradigm approaches also are commonly complemented by a
small catalogue of selected languages. It is also common to find a
descriptive part based on the imperative paradigm, followed by a
second part based on the catalogue or the paradigm approaches.
Finally, the formal approach can complement the descriptive or
implementation ones.
From a historical point of view, the catalogue approach was the
most common in the first years of computing curricula. However,
it has almost universally been abandoned, with some interesting
exceptions, such as the experience by Maurer [9] on a subfamily
of four C-like languages.
The trend has been towards giving more importance to the
foundations of programming languages, mainly elements and
paradigms. After this evolution, it seems that the two most
common organizations are:
Descriptive approach, complemented with some languages or
paradigms. It is the most common organization, according to
currently available textbooks.
Descriptive approach, illustrated by means of interpreters of
some selected languages. Interpreters and paradigms can be
combined in two symmetrical ways: either implementing an
interpreter in each paradigm [14], or implementing an
interpreter for one language of each paradigm; the latter
approach can be adopted with an imperative language [6] or,
more probably, with a functional language [1].
There are few proposals for a holistic approach. One exception is
Wick and Stevenson's proposal [17], which combines the
descriptive, formal, paradigm and implementation approaches into
a "reductionistic" one.
2.3 Choice of Languages and Paradigms
Courses on programming languages do not simply consist in the
study of one or several languages, but their study usually is a part
of the course. The selection of these languages rises some
questions, that we cannot discuss here.
A related issue is the selection of programming paradigms. Not all
the paradigms can be equally useful for a course on programming
languages for freshmen. Firstly, some paradigms are richer to
illustrate language elements than others. Secondly, some
paradigms can be more adequate to freshmen than others. There is
not a catalogue of paradigms classified by suitable academic year,
but it is important to use a more objective criterion than just the
personal opinion of faculty.
We have used the Computing Curricula 2001 [12] as an objective
basis to identify feasible paradigms. They identify the paradigms
that have succeeded in CS1: procedural, object-oriented,
functional, algorithmic notations, and low-level languages. The
last two choices are useful for CS1 but not for a programming
languages course, thus the remaining choices are procedural,
object-oriented, and functional. An additional choice, not cited by
Computing Curricula 2001, consists in the use of tiny languages
[8][10]. These languages can be ad hoc designed for a specific
domain or embedded into operating systems or applications.
OUR PROPOSAL
We discarded the catalogue approach because it would only
contribute to a memorization effort by freshmen. Consequently,
our course is based on the remaining four approaches:
Formal approach. Formal grammars and syntax notations are
given in depth. Language semantics is simply introduced.
Implementation approach. Only basic concepts are given.
Descriptive approach. Basic language elements are reviewed.
Paradigm approach. A programming paradigm is given in
depth (functional programming) but others are just sketched.
Table 1 contains the contents of the course we offer.
There are several major factors to consider for the design of a
programming languages course. Firstly, when the course is offered
to students determines to a large extent their knowledge and
maturity. Secondly, the existence of related courses,
such as courses on automata theory or programming
methodology, may recommend removing some overlapping
topics. Thirdly, the specialization profile of faculty can foster the
choice of a given topic instead of another one. Fourthly, a course
with so many different topics must guarantee coherence among
them. Finally, time constraints typically limit the number of
topics to consider.
In the following subsections we justify the adequacy of the
choices made with respect to these factors.
Table 1. Syllabus of our programming languages course
PART I. INTRODUCTION
Chapter 1. General issues
Computer and programming languages. Elements, properties
and history of programming languages. Classifications.
PART II. SYNTACTIC FOUNDATIONS OF
PROGRAMMING LANGUAGES
Chapter 2. Grammars and formal languages
Alphabets, symbols and chains. Languages. Grammars.
Derivation of sentences. Recursive grammars. Classification
of grammars: Chomsky's hierarchy. Abstract machines.
Chapter 3. Regular grammars
Definition. Uses and limitations. Regular expressions and
regular languages. Finite-state automata.
Chapter 4. Context-free grammars
Definition. Uses and limitations. Parsing trees. The ambiguity
problem and its removal.
PART III. DESCRIPTION AND PROCESSING OF
PROGRAMMING LANGUAGES
Chapter 5. Language processors
Abstract (or virtual) machines. Classes of processors. Stages
in language processing. Concrete vs. abstract syntax.
Chapter 6. Lexical and syntactic notations
Lexical and syntactic elements. Regular definitions. Syntax
notations: BNF, EBNF, syntax charts.
Chapter 7. Semantics
Semantics. Classes of semantics. Static semantics. Binding.
PART IV. THE FUNCTIONAL PARADIGM
Chapter 8. Basic elements
Function definition and application. Programs and
expressions. Overview of functional languages. Built-in types,
values and operations. The conditional expression.
Chapter 9. Advanced elements
Operational semantics: term rewriting. Recursive functions.
Local definitions.
Chapter 10. Functional data types
Constructors. Equations and patterns. Pattern matching.
PART V. ELEMENTS OF PROGRAMMING LANGUAGES
Chapter 11. Lexical and syntactical elements
Identifiers. Numbers. Characters. Comments. Delimiters.
Notations for expressions. Program structure and blocks.
Chapter 12. Data types
Recursive types. Parametric types: polymorphism.
Polymorphic functions. Type systems. Type checking and
inference. Type equivalence. Type conversion. Overloading.
PART VI. PROGRAMMING PARADIGMS
Chapter 13. Other paradigms and languages
Imperative paradigms. Logic paradigm. Motivation of
concurrency. Other computer languages: mark-up languages.
3.1 Maturity of Students
One major concern was the fact that the course is offered to
freshmen. A single approach could not be used because of the
freshmen's lack of knowledge of programming languages. A
variety of contents from the different approaches must be selected
in order to give them a comprehensible and rich view of
programming languages.
The lack of maturity and capability of students to understand
certain topics was a bottleneck for course organization. The
different topics can be given with varying degrees of depth, but
always making sure that freshmen can master them. We found
that some topics are especially difficult to understand, even
formulated in the simplest way. This mainly applies to:
Semantics of programming languages.
Implementation of programming languages.
Some programming paradigms, such as concurrency.
Consequently, these topics were included in a summarized way,
so that students could achieve a global view of them and
understand the main issues involved. The rest of the topics could
potentially be taught more deeply, but without forgetting that they
were offered to freshmen.
In terms of Bloom's taxonomy [2], the three topics above can be
mastered at the knowledge level, or even comprehension level.
However, for the rest of topics, we can expect students to achieve
the application and analysis levels, at least.
3.2 Overlapping with Other Courses
Some topics are also offered in other courses, either in the same
or in a subsequent year. Consequently, these topics can be
removed or dealt with more shallowly. The most probable
conflicts are:
Imperative programming, either procedural or object-oriented.
Grammars and formal languages.
Language processors.
In our case, there is an annual course on programming
methodology based on the imperative paradigm, but the other two
topics are not included in the curriculum. The programming
methodology course is offered in the first academic year.
Consequently, we removed the imperative paradigm, except for
its use in some illustrating examples, mainly in part V. However,
we kept chapters on grammars and formal languages, and on
language processors.
3.3 Preferences and Specialization of Faculty
This factor is important in order to choose among equally eligible
options, or to give broader coverage of some topics. In particular,
faculty can be more familiar with some paradigms than with
others. This was the overriding factor in our choice of the programming
paradigm.
In subsection 2.3 we discussed suitable paradigms for freshmen,
and we concluded that Computing Curricula 2001 fosters the
selection of the procedural, object-oriented, and functional
paradigms. We discarded the procedural paradigm as it is
concurrently taught in the programming methodology courses.
Finally, our specialization gave priority to the functional
paradigm over object-orientation. The reader can find many
experiences in the literature, but we recommend a monograph on
functional programming in education [13].
Functional programming is a paradigm with several advantages,
such as short definition of languages, simple and concise
programs, high level of abstraction, etc. However, its main
advantage for us is richness of elements. This allows us to deal
with many aspects of programming languages (e.g. data types,
recursion, polymorphism, etc.) in a natural and easy way.
The use of tiny languages is another attractive choice in a course
for freshmen. However, we also discarded them in favor of the
functional paradigm because they have fewer language elements.
3.4 Coherence and Unifying Themes
A key issue in a course based on several approaches is to provide
contents coherence. A network of relationships among the
different parts makes possible their coherent integration.
Part III (description and processing of languages) is the pragmatic
continuation of part II (formal grammars). Thus, EBNF and
syntactic charts are introduced in part III as more adequate
notations for language description than pure grammar definitions.
Language processing is given at a conceptual level, but the role of
regular and context-free grammars in the architecture of language
processors is highlighted.
Parts IV and V are both based on a functional language, which is
described with the tools given in part III, mainly EBNF and type
constraints.
Parts IV and V are also related because they are based on the
same language. In order to provide more homogeneity, language
elements studied in part V are introduced in a universal way, but
they are mainly illustrated with the functional paradigm.
Last, but not least, recursion is adopted as a recurring theme
during the course. In effect, it is found in grammars, functions and
data types. The recurrent presentation of this topic fosters deeper
understanding by students.
The "pure" definition of recursion is given early in the course, but
its three instantiations enumerated above are studied later. For
each instantiation, the mechanisms that accompany a recursive
definition are clearly identified, in particular representation of
information and operational semantics [16]. For instance,
recursive grammars represent sentences as strings of terminal
symbols, and their operational semantics is defined in terms of
derivation of sentences. Recursive functions, however, represent
information as expressions, and their operational semantics is defined
in terms of term rewriting.
3.5 Time Constraints
As a final factor, time constraints limit the depth of study of those
topics that could be studied longer. A global view of the course
schedule is given in Table 2.
Table 2. Schedule of the course
Part       Theory #hours   Lab #hours
Part I     5               -
Part II    14              8
Part III   10              -
Part IV    12              6
Part V     8               6
Part VI    4               2
In part II (formal grammars), regular and context-free grammars
are the only ones studied in depth because of their importance for
language description and processing.
Moreover, only one paradigm can be studied in depth. Even so,
the lack of time limits the presentation of functional programming
(parts IV and V) to the core elements of the paradigm. Other
elements, important for the functional programmer, cannot be
addressed (e.g. higher-order functions, lazy evaluation, or currying).
However, this is not a serious drawback since the aim of including
functional programming in the course is teaching the essentials of
a new paradigm as well as illustrating language elements.
LABORATORY COURSEWARE
A course on programming languages must have a laboratory
component. The laboratory schedule includes sessions for those
parts of the course where problem solving can be faced, mainly
formal grammars and functional programming. Laboratory tools
were selected carefully so that they are adequate for freshmen to
exercise non-trivial concepts; simple user interaction and
visualization facilities are of great help here. There are a number
of tools that fulfill these requirements. For formal grammars, we
require simulators that allow at least manipulating regular
expressions, finite automata, context-free grammars and
derivation trees. Our final selection was JFLAP [11]. For
functional programming, we require a programming environment
that shows term rewriting as the operational semantics. Our final
selection was WinHIPE [15].
EXPERIENCE
We have been teaching this course for seven years. Although the
basic structure has roughly been constant, it was refined
according to our experience. In particular, the emphasis on
recursion was introduced after several years as we noticed student
problems with this concept. We consider that we have succeeded,
at least in eliminating the magical connotation of recursion.
A major change was the relative order of chapters on formal
grammars and the functional paradigm. During the first year, they
were given in reverse order. However, students had problems
understanding the syntax of functional declarations, which led us to
teach formal grammars first (and therefore syntax notations such as
EBNF). Thus, a foundation for declaring syntax was laid and then
used to introduce functional programming.
The literature classifies the main difficulties for teaching
functional programming into syntactical, conceptual and
"psychological" problems [13]. In our approach, the two former
kinds of problems are avoided, but the latter remains. As
freshmen learn concurrently the functional and one imperative
language, they get the idea that functional is an exotic, useless
paradigm.
CONCLUSION
We have described a course on programming languages for
freshmen. It comprises elements from four different approaches.
We have described the contents of the course, and we have
explained the factors that led us to its current design. The
experience has been very positive both for teachers and for
students. As the Denning report sought for CS1, we consider that
our course illustrates that it is feasible to offer some traditionally
intermediate or advanced material in introductory courses.
ACKNOWLEDGMENTS
This work is supported by the research project TIN2004-07568 of
the Spanish Ministry of Education and Science.
REFERENCES
[1] Abelson, H., and Sussman, G.J. Structure and Interpretation of
Computer Programs. MIT Press, 2 ed., 1996.
[2] Bloom, B., Furst, E., Hill, W., and Krathwohl, D.R.
Taxonomy of Educational Objectives: Handbook I, The
Cognitive Domain. Addison-Wesley, 1959.
[3] Curriculum Committee on Computer Science. Curriculum
'68: Recommendations for academic programs in computer
science. Comm. ACM, 11, 3 (March 1968), 151-197.
[4] Denning, P. et al. Computing as a Discipline. ACM Press,
New York, 1988.
[5] Fischer, A.E., and Grodzinsky, F.S. The Anatomy of
Programming Languages. Prentice-Hall, 1993.
[6] Kamin, S.N. Programming Languages: An Interpreter-Based
Approach. Addison-Wesley, 1990.
[7] King, K.N. The evolution of the programming languages
course. In 23rd SIGCSE Technical Symposium on Computer
Science Education (SIGCSE'92). ACM Press, New York,
1992, 213-219.
[8] Kolesar, M.V., and Allan, V.H. Teaching computer science
concepts and problem solving with a spreadsheet. In 26th
SIGCSE Technical Symposium on Computer Science
Education (SIGCSE'95). ACM Press, New York, 1995, 10-13.
[9] Maurer, W.D. The comparative programming languages
course: A new chain of development. In 33rd SIGCSE
Technical Symposium on Computer Science Education
(SIGCSE 2002). ACM Press, New York, 2002, 336-340.
[10] Popyack, J.L., and Herrmann, N. Why everyone should
know how to program a computer. In IFIP World
Conference on Computers in Education VI (WCCE'95).
Chapman & Hall, 1995, 603-612.
[11] Hung, T., and Rodger, S.H. Increasing visualization and
interaction in the automata theory course. In 31st SIGCSE
Technical Symposium on Computer Science Education
(SIGCSE 2000). ACM Press, New York, 2000, 6-10.
[12] The Joint Task Force on Computing Curricula IEEE-CS/ACM:
Computing Curricula 2001 Computer Science,
http://www.computer.org/education/cc2001/final, 2001.
[13] Thomson, S., and Wadler, P. (eds.) Functional programming
in education. Journal of Functional Programming, 3, 1
(1993).
[14] Tucker, A.B., and Noonan, R.E. Integrating formal models
into the programming languages course. In 33rd SIGCSE
Technical Symposium on Computer Science Education
(SIGCSE 2002). ACM Press, New York, 2002, 346-350.
[15] Velázquez-Iturbide, J.Á. Improving functional programming
environments for education. In M. D. Brouwer-Hanse and T.
Harrington (eds.), Man-Machine Communication for
Educational Systems Design. Springer-Verlag, NATO ASI
Series F 124, 1994, 325-332.
[16] Velázquez-Iturbide, J.Á. Recursion in gradual steps (is
recursion really that difficult?). In 31st SIGCSE Technical
Symposium on Computer Science Education (SIGCSE 2000).
ACM Press, New York, 2000, 310-314.
[17] Wick, M.R., and Stevenson, D.E. A reductionistic approach
to a course on programming languages. In 32nd SIGCSE
Technical Symposium on Computer Science Education
(SIGCSE 2001). ACM Press, New York, 2001, 253-257.
160 | Query Result Ranking over E-commerce Web Databases | To deal with the problem of too many results returned from an E-commerce Web database in response to a user query, this paper proposes a novel approach to rank the query results. Based on the user query, we speculate how much the user cares about each attribute and assign a corresponding weight to it. Then, for each tuple in the query result, each attribute value is assigned a score according to its "desirableness" to the user. These attribute value scores are combined according to the attribute weights to get a final ranking score for each tuple. Tuples with the top ranking scores are presented to the user first. Our ranking method is domain independent and requires no user feedback. Experimental results demonstrate that this ranking method can effectively capture a user's preferences. | INTRODUCTION
With the rapid expansion of the World Wide Web, more and more
Web databases are available. At the same time, the size of existing
Web databases is growing rapidly. One common problem faced by
Web users is that there are usually too many query results returned
for a submitted query. For example, when a user submits a query to
autos.yahoo.com to search for a used car within 50 miles of New
York with a price between $5,000 and $10,000, 10,483 records are
returned. In order to find "the best deal", the user has to go through
this long list and compare the cars to each other, which is a tedious
and time-consuming task.
Most Web databases rank their query results in ascending or
descending order according to a single attribute (e.g., sorted by date,
sorted by price, etc.). However, many users probably consider
multiple attributes simultaneously when judging the relevance or
desirableness of a result. While some extensions to SQL allow the
user to specify attribute weights according to their importance to
him/her [21], [26], this approach is cumbersome and most likely
hard to do for most users since they have no clear idea how to set
appropriate weights for different attributes. Furthermore, the
user-setting-weight approach is not applicable for categorical attributes.
In this paper, we tackle the many-query-result problem for Web
databases by proposing an automatic ranking method, QRRE
(Query Result Ranking for E-commerce), which can rank the query
results from an E-commerce Web database without any user
feedback. We focus specifically on E-commerce Web databases
because they comprise a large part of today's online databases. In
addition, most E-commerce customers are ordinary users who may
not know how to precisely express their interests by formulating
database queries. The carDB Web database is used in the following
examples to illustrate the intuitions on which QRRE is based.
Example 1: Consider a used Car-selling Web database D with a
single table carDB in which the car instances are stored as tuples
with attributes: Make, Model, Year, Price, Mileage and Location.
Each tuple t_i in D represents a used car for sale.
Given a tuple t_i in the query result T_q for a query q that is submitted
by a buyer, we assign a ranking score to t_i, based on its attribute
values, which indicates its desirableness, from an E-commerce
viewpoint, to the buyer. For instance, it is obvious that a luxury,
new and cheap car is globally popular and desired in the used car
market. However, sometimes the desired attribute values conflict
with each other. For example, a new luxury car with low mileage is
unlikely to be cheap. Hence, we need to decide which attributes are
more important for a buyer. Some buyer may care more about the
model of a car, while some other buyer may be more concerned
about its price. For each attribute, we use a weight to denote its
importance to the user.
In this work, we assume that the attributes about which a user cares
most are present in the query he/she submits, from which the
attribute importance can be inferred. We define specified attributes
to be attributes that are specified in a query and unspecified
attributes to be attributes that are not specified in a query.
Furthermore, we also consider that a subset of the unspecified
attributes, namely, those attributes that are closely correlated to the
query, is also important.
Example 2: Given a query with condition "Year > 2005", the query
condition suggests that the user wants a relatively new car.
Intuitively, besides the Year attribute, the user is more concerned
about the Mileage than he/she is concerned about the Make and
Location, considering that a relatively new car usually has low
mileage.
Given an unspecified attribute A_i, the correlation between A_i and the
user query q is evaluated by the difference between the distribution
of A_i's values over the query results and their distribution over the
whole database D. The bigger the difference, the more A_i correlates
to the specified attribute value(s). Consequently, we assign a bigger
attribute weight to A_i. Example 3 explains our intuition.
Example 3: Suppose a used car database D contains car instances
for which the Year has values 1975 and onwards and D returns a
subset d containing the tuples that satisfy the query with condition
"Year > 2005". Intuitively, Mileage values of the tuples in d
distribute in a small and dense range with a relatively low average,
while the Mileage values of tuples in D distribute in a large range
with a relatively high average. The distribution difference shows a
close correlation between the unspecified attribute, namely,
Mileage, and the query "Year > 2005".
Besides the attribute weight, we also assign a preference score to
each attribute value, including the values of both specified and
unspecified attributes. In the E-commerce context, we first assume
that an expensive product is less preferred than a cheap product if
other attribute values are equal. Hence, we assign a small preference
score for a high Price value and a large preference score for a low
Price value. We further assume that a non-Price attribute value with
high desirableness, such as a luxury car or a new car, correlates
positively with a high Price value. Thus, a luxury car is more
expensive than a standard car and a new car is usually more
expensive than an old car. Based on this assumption, we convert a
value a_i of a non-Price attribute A_i to a Price value p'_i, where p'_i is
the average price of the product for A_i = a_i in the database.
Consequently, the preference score for a_i will be large if p'_i is large,
because a large Price value denotes a high desirableness for the user.
Finally, the attribute weight and the value preference score are
combined to get the final ranking score for each tuple. The tuples
with the largest ranking scores are presented to the user first.
The contributions of this paper include the following:
1. We present a novel approach to rank the tuples in the query
results returned by E-commerce Web databases.
2. We propose a new attribute importance learning approach,
which is domain independent and query adaptive.
3. We also propose a new attribute-value preference score
assignment approach for E-commerce Web databases.
In the entire ranking process, no user feedback is required.
The rest of the paper is organized as follows. Section 2 reviews
some related work. Section 3 gives a formal definition of the many-query
-result problem and presents an overview of QRRE. Section 4
proposes our attribute importance learning approach while Section 5
presents our attribute preference score assignment approach.
Experimental results are reported in Section 6. The paper is
concluded in Section 7.
RELATED WORK
Query result ranking has been investigated in information retrieval
for a long time. Cosine Similarity with TF-IDF weighting of the
vector space model [2] and [26], the probabilistic ranking model
[30] and [31] and the statistical language model [12] have been
successfully used for ranking purposes. In addition, [10], [11], [14]
and [15] explore the integration of database and information
retrieval techniques to rank tuples with text attributes. [1], [5] and
[17] propose some keyword-query based retrieval techniques for
databases. However, most of these techniques focus on text
attributes and it is very difficult to apply these techniques to rank
tuples with categorical or numerical attributes.
Some recent research addresses the problem of relational query
result ranking. In [9], [26], [28] and [33], user relevance feedback is
employed to learn the similarity between a result record and the
query, which is used to rank the query results in relational
multimedia databases. In [21] and [26], the SQL query language is
extended to allow the user to specify the ranking function according
to their preference for the attributes. In [18] and [19], users are
required to build profiles so that the query result is ranked according
to their profile. Compared with the above work, our approach is
fully automatic and does not require user feedback or other human
involvement.
In [1] and [12], two ranking methods have been proposed that take
advantage of the links (i.e., associations) among records, such as the
citation information between papers. Unfortunately, linking
information among records does not exist for most domains.
The work that is most similar to ours is the probabilistic information
retrieval (PIR) model in [8], which addresses the many-query-result
problem in a probabilistic framework. In PIR, the ranking score is
composed of two factors: global score, which captures the global
importance of unspecified values, and conditional score, which
captures the strength of the dependencies between specified and
unspecified attribute values. The two scores are combined using a
probabilistic approach. Our approach differs from that in [8] in the
following aspects:
1. PIR only focuses on point queries, such as "A_i = a_i". Hence, both
a query with condition "Mileage < 5000" and a query with
condition "Mileage < 2500" may have to be converted to a
query with condition "Mileage = small" to be a valid query in
PIR, which is not reasonable for many cases. In contrast, QRRE
can handle both point and range queries.
2. PIR focuses on the unspecified attributes during query result
ranking while QRRE deals with both specified and unspecified
attributes. For example, suppose a car with price less than
$10,000 is labeled as a "cheap" car. For a query "Price < 10000",
PIR will only consider the value difference for non-Price
attributes among tuples and ignore the price difference, which is
usually important for a poor buyer. On the contrary, QRRE will
consider the value difference for all attributes.
3. A workload containing past user queries is required by PIR in
order to learn the dependency between the specified and
unspecified attribute values, which is unavailable for new online
databases, while QRRE does not require such a workload.
The experimental results in Section 6 show that QRRE produces a
better quality ranking than does PIR.
The attribute-importance learning problem was studied in [23] and
[24], in which attribute importance is learned according to the
attribute dependencies. In [23], a Bayesian network is built to
discover the dependencies among attributes. The root attribute is the
most important while the leaf attributes are less important.
In [24], an attribute dependency graph is built to discover the
attribute dependencies. Both of these methods learn the attribute
importance based on some pre-extracted data and their result is
invariant to the user queries. Furthermore, both methods can only
determine the attribute importance sequence. They are incapable of
giving a specific value to show how important each attribute is. In
contrast, the attribute-importance learning method presented in this
paper can be adapted to the user's query and thus can be tailored to
take into account the desire of different users, since each attribute is
assigned a weight that denotes its importance for the user. To our
knowledge, this is the first work that generates attribute weights that
are adaptive to the query the user submitted.
QUERY RESULT RANKING
In this section, we first define the many-query-result problem and
then present an overview of QRRE.
3.1 Problem Formulation
Consider an autonomous Web database D with attributes A = {A_1, A_2,
..., A_m} and a selection query q over D with a conjunctive selection
condition that may include point queries, such as "A_i = a_i", or range
queries, such as "a_i1 < A_i < a_i2". Let T = {t_1, t_2, ..., t_n} be the set of
result tuples returned by D for the query q. In many cases, if q is not
a selective query, it will produce a large number of query results
(i.e., a large T). The goal is to develop a ranking function to rank the
tuples in T that captures the user's preference, is domain-independent
and does not require any user feedback.
3.2 QRRE
Initially, we focus on E-commerce Web databases because E-commerce
Web databases comprise a large proportion of the
databases on the Web. We further assume that each E-commerce
Web database has a Price attribute, which we always assume to be A_1.
The Price attribute A_1 plays an intermediate role for all attributes
during the attribute preference score assignment.
Example 4: Consider the tuples in Table 1 that represent an
example query result set T. It can be seen that most tuples have their
own advantages when compared with other tuples. For example, t_1
is a relatively new car, while t_2 is a luxury car and t_3 is the cheapest
among all cars. Hence, depending on a user's preferences, different
rankings may be needed for different users. Assuming that a user
would prefer to pay the smallest amount for a car and that all other
attribute values are equal, then the only certainty is that t_4 should
always be ranked after t_3, because its mileage is higher than t_3's
while it is more expensive than t_3.
Table 1. Examples of used car tuples.
      Year  Make           Model    Mileage  Price  Location
t_1   2005  Toyota         Corolla  16995    26700  Seattle
t_2   2002  Mercedes-Benz  G500     47900    39825  Seattle
t_3   2002  Nissan         350Z     26850    17448  Seattle
t_4   2002  Nissan         350Z     26985    18128  Seattle
According to Example 4, two problems need to be solved when we
assign a ranking score for a tuple t_i = {t_i1, t_i2, ..., t_im} in the query
result T:
1. How can we surmise how much a user cares about an attribute A_j,
and how should we assign a suitable weight w_j for the attribute(s) A_j
to reflect its (their) importance to the user?
2. How do we assign a preference score v_ij for an attribute value t_ij?
For example, when assigning the score for the attribute value "Year
= 2005" in t_1, should the score be larger than the score assigned for
the attribute value "Year = 2002" in t_2, and how much larger is
reasonable? The first problem is discussed in Section 4. The
second problem is discussed in Section 5.
Having assigned a preference score v_ij (1 ≤ j ≤ m) to each attribute
value of t_i and a weight w_j to the attribute A_j, the value preference
scores v_ij are summed, weighted by the attribute weights, to obtain the
ranking score s_i for t_i, reflecting the attribute importance for the user.
That is:

    s_i = Σ_{j=1}^{m} w_j · v_ij
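To make the combination step concrete, the following is a minimal Python sketch of this scoring step (our own illustration, not the paper's implementation); the attribute weights and per-value preference scores are assumed to have been computed as described in Sections 4 and 5, and the dictionary-based tuple representation is a simplifying assumption.

def ranking_score(tuple_values, weights, preference_score):
    # tuple_values: dict attribute -> value for one result tuple t_i
    # weights: dict attribute -> w_j (normalized to sum to 1, Section 4)
    # preference_score: function (attribute, value) -> v_ij in [0, 1] (Section 5)
    return sum(weights.get(a, 0.0) * preference_score(a, v)
               for a, v in tuple_values.items())

def rank_results(result_tuples, weights, preference_score, top_k=10):
    # Sort the query-result tuples by descending ranking score and keep the top K.
    scored = [(ranking_score(t, weights, preference_score), t) for t in result_tuples]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]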
The overall architecture of a system employing QRRE is shown in
Figure 1. Such a system includes two components: pre-processing
component and online processing component. The pre-processing
component collects statistics about the Web database D using a set
of selected queries. Two kinds of histograms are built in the
preprocessing step: single-attribute histograms and bi-attribute
histograms. A single-attribute histogram is built for each attribute A_j.
A bi-attribute histogram is built for each non-Price attribute A_j
(i.e., j > 1) using the Price attribute A_1.

Figure 1: Architecture of a system employing Query Result Ranking for E-commerce (QRRE).
The online-processing component ranks the query results given the
user query q. After getting the query results T from the Web
database D for q, a weight is assigned for each attribute by
comparing its data distribution in D and in the query results T. At
the same time, the preference score for each attribute value in the
query result is determined using the information from the bi-attribute
histograms. The attribute weights and preference scores are
combined to calculate the ranking score for each tuple in the query
result. The tuples' ranking scores are sorted and the top K tuples
with the largest ranking scores are presented to the user first.
ATTRIBUTE WEIGHT ASSIGNMENT
In the real world, different users have different preferences. Some
people prefer luxury cars while some people care more about price
than anything else. Hence, we need to surmise the user's preference
when we make recommendations to the user as shown by Example
4 in Section 3. The difficulty of this problem lies in trying to
determine what a user's preference is (i.e., which attributes are more
important) when no user feedback is provided. To address this
problem, we start from the query the user submitted. We assume
that the user's preference is reflected in the submitted query and,
hence, we use the query as a hint for assigning weights to attributes.
The following example provides the intuition for our attribute
weight assignment approach.
Example 5: Consider the query q with condition "Year > 2005",
which denotes that the user prefers a relatively new car. It is
obvious that the specified attribute Year is important for the user.
However, all the tuples in the query result T satisfy the query
condition. Hence, we need to look beyond the specified attribute and
speculate further about what the user's preferences may be from the
specified attribute. Since the user is interested in cars that are made
after 2005, we may speculate that the user cares about the Mileage
of the car. Considering the distribution of Mileage values in the
database, cars whose model year is greater than 2005 usually have
a lower mileage when compared to all other cars. In contrast,
attribute Location is less important for the user and its distribution
in cars whose model year is greater than 2005 may be similar to the
distribution in the entire database.
According to this intuition, an attribute A_j that correlates closely
with the query will be assigned a large weight, and vice versa.
Furthermore, as Example 3 in Section 1 shows, the correlation between
A_j and the query can be measured by the difference between the data
distributions of A_j in D and in T.
It should be noted that the specified attribute is not always important,
especially when the condition for the specified attribute is not
selective. For example, for a query with condition "Year > 1995 and
Make = BMW", the specified attribute Year is not important
because almost all tuples in the database satisfy the condition
"Year > 1995" and the Year distribution in D and in T is similar.
A natural measure of the difference between the distributions of A_j
in D and in T is the Kullback-Leibler distance, or Kullback-Leibler (KL)
divergence [13]. Suppose that A_j is a categorical attribute with value
set {a_j1, a_j2, ..., a_jk}. Then the KL-divergence of A_j from D to T is:

    D_KL(D || T) = Σ_{l=1}^{k} prob(A_j = a_jl | D) · log [ prob(A_j = a_jl | D) / prob(A_j = a_jl | T) ]    (1)
in which prob(A_j = a_jl | D) refers to the probability that A_j = a_jl in D
and prob(A_j = a_jl | T) refers to the probability that A_j = a_jl in T. If A_j is
a numerical attribute, its value range is first discretized into a few
value sets, where each set refers to a category, and then the KL-divergence
of A_j is calculated as in (1).
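As an illustration, here is a small Python sketch of Equation (1) (our own, not taken from the paper); the two arguments are assumed to be the bucket-probability dictionaries derived from the histograms of Section 4.1, and the epsilon smoothing is our own safeguard against values that never occur in T.

import math

def kl_divergence(prob_d, prob_t, epsilon=1e-6):
    # prob_d, prob_t: dicts mapping each attribute value (or bucket) to its
    # probability over the whole database D and over the query result T.
    return sum(p_d * math.log(p_d / max(prob_t.get(v, 0.0), epsilon))
               for v, p_d in prob_d.items() if p_d > 0)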
4.1 Histogram Construction
To calculate the KL-divergence in equation (1) we need to obtain
the distribution of attribute values over D. The difficulty here is that
we are dealing with an autonomous database and we do not have
full access to all the data. In [24], the attribute value distribution
over a collection of data crawled from D is used to estimate the
actual attribute value distribution over D. However, it is highly
likely that the distribution of the crawled data can be different from
that of D because the crawled data may be biased to the submitted
queries.
In this paper, we propose a probing-and-count based method to
build a histogram for an attribute over a Web database.^1 We assume
that the number of query results is available in D for a given query.
After submitting a set of selected queries to D, we can extract the
number of query results, instead of the actual query results, to get
the attribute value distribution of A_i. An equi-depth histogram [27] is
used to represent the attribute value distribution, from which we obtain
the probabilities required in Equation (1). The key problem in our
histogram construction for A_i is how to generate a set of suitable
queries to probe D.
Figure 2 shows the algorithm for building a histogram for attribute
A_i. For each attribute A_i, a histogram is built in the preprocessing
stage. We assume that one attribute value of A_i is enough to form a
query for D. If A_i is a categorical attribute, each category of A_i is
used as a query to get its occurrence count (Lines 2-3). If A_i is a
numerical attribute, an equi-depth histogram is built for A_i. We first
decide the occurrence frequency threshold t for each bucket by
dividing |D|, namely, the number of tuples in D, by the minimum
bucket number n that will be created for a numerical attribute A_i. In
our experiments, n is empirically set to 20. Then we probe D
using a query with a condition on A_i such that low ≤ A_i < up, and get c,
the number of instances in that range (Line 8). If c is smaller than t,
a bucket is added for it in H_Di (Line 10) and another query probe is
prepared (Line 11). Otherwise, we update the query probe condition
on A_i by reducing the size of the bucket (Line 13) and a new
iteration begins. The iteration continues until each value in the value
range is in a bucket. There are some obvious improvements that could
accelerate the histogram construction; they are not described here because
histogram construction is not the major focus of this paper.
Considering that only single-attribute histograms are constructed, the
process should complete quickly.
^1 Although both our histogram construction method and the
histogram construction methods in [1] and [5] are probing-based,
they have different goals. The goal in [1] and [5] is to build a
histogram that precisely describes the regions on which the queries
concentrate, while our purpose is to build a histogram that summarizes
the data distribution of D as precisely as possible with a number of
query probes.
A histogram H_Ti also needs to be built for A_i over T (the result set) to
get its probability distribution over T. For each bucket of H_Di, a
bucket with the same boundaries is built in H_Ti, and its frequency is
counted over T.
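The numerical branch of this probing procedure can be sketched in Python as follows (a simplified illustration under our own assumptions); count_matches is a hypothetical placeholder for submitting the range query to the Web database and reading off the reported result count, and the small tolerance check is our own safeguard against a single value exceeding the bucket threshold.

def build_numeric_histogram(count_matches, low, up, total, min_buckets=20):
    # count_matches(lo, hi): placeholder for probing D with "lo <= A_i < hi"
    # and returning the reported result count; total is |D|, min_buckets is n.
    # Returns a list of (bucket_low, bucket_up, count) triples.
    threshold = total / min_buckets
    buckets, lo, hi = [], low, up
    while lo < up:
        c = count_matches(lo, hi)
        if c <= threshold or hi - lo <= 1e-9:   # accept the current bucket
            buckets.append((lo, hi, c))
            lo, hi = hi, up                      # start the next bucket
        else:
            hi = lo + (hi - lo) / 2              # shrink the bucket and retry
    return buckets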
4.2 Attribute Weight Setting
After getting the histograms of A_i over D and T, each histogram is
converted to a probability distribution by dividing the frequency in
each of its buckets by the sum of its bucket frequencies. That is, the
probability distribution of A_i for D, P_Di, is

    p_Di = |c_Dk| / |D|

in which c_Dk is the frequency of the k-th bucket in H_Di. The
probability distribution of A_i for T, P_Ti, is

    p_Ti = |c_Tk| / |T|

in which c_Tk is the frequency of the k-th bucket in H_Ti.
Next, for the i-th attribute A_i, we assign its importance w_i as

    w_i = KL(P_Di, P_Ti) / Σ_{j=1}^{m} KL(P_Dj, P_Tj)
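A minimal sketch of this normalization step, assuming the per-attribute KL-divergences have already been computed from the D- and T-histograms (the dictionary interface and the uniform fallback are our own choices):

def attribute_weights(kl_by_attribute):
    # kl_by_attribute: dict attribute -> KL(P_Di, P_Ti)
    total = sum(kl_by_attribute.values())
    if total == 0:                      # no attribute deviates from D at all
        n = len(kl_by_attribute)
        return {a: 1.0 / n for a in kl_by_attribute}
    return {a: kl / total for a, kl in kl_by_attribute.items()}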
The attribute weight assignment is performed not only on the
unspecified attributes, but also on the specified attributes. If a
specified attribute is a point condition, its attribute weight will be
the same for all tuples in the query result. If a specified attribute is a
range condition, its attribute weight will be different for the tuples in
the query result. Example 6 illustrates this point.
Example 6: Consider a query q with condition "Make = 2004 and
Price < 10000". In q, since the specified attribute Make is a point
attribute, the attribute weight assigned to it is useless because all
the query results have the same value for Make. On the other hand,
since the attribute Price is a range attribute, the price of different
tuples is an important factor to consider during ranking.
4.3 Examples of Attribute Weight Assignment
In our experiments, we found that the attribute weight assignment
was intuitive and reasonable for the given queries. Table 2 shows
the attribute weight assigned to different attributes corresponding to
different queries in our experiments for the carDB. Given a query
with condition "Mileage < 20000", which means that the user
prefers a new car, as expected the attribute "Mileage" is assigned a
large weight because it is a specified attribute and the attribute
"Year" is assigned a large weight too. The attribute "Model" is
assigned a large weight because a new car usually has a model that
appears recently. In contrast, Consider the query with condition
"Make = BMW & Mileage < 100000".
The
sub-condition
"Mileage < 100000" possesses a very weak selective capability
because almost all tuples in the database satisfy it. The buyer is
actually just concerned about the Make and the Model of the car. As
expected, the attribute Make and Model are assigned large weights,
while Year and Mileage are no longer assigned large weights.
Table 2: Attribute weight assignments for two queries.
Attribute   Mileage < 20000   Make = BMW & Mileage < 100000
Year        0.222             0.015
Make        0.017             0.408
Model       0.181             0.408
Price       0.045             0.120
Mileage     0.533             0.04
Location    0.0003            0.002
ATTRIBUTE PREFERENCE SCORE ASSIGNMENT
In addition to the attributes themselves, different values of an
attribute may have different attractions for the buyer. For example, a
car with a low price is obviously more attractive than a more
expensive one if other attribute values are the same. Similarly, a car
with low mileage is also obviously more desirable. Given an
attribute value, the goal of the attribute preference score assignment
module is to assign a preference score to it that reflects its
desirableness for the buyer. To facilitate the combination of scores
of different attribute values, all scores assigned for different attribute
values are in [0, 1].
Instead of requiring human involvement for attribute value
assignment, given a normal E-commerce context, we make the
following two intuitive assumptions:
1. Price assumption: A product with a lower price is always more
desired by buyers than a product with a higher price if the other
attributes of the two products have the same values. For
example, if all other attribute values are the same, a cheaper car
is preferred over a more expensive car.
2. Non-Price assumption: A non-Price attribute value with higher
desirableness for the user corresponds to a higher price. For
example, a new car, which most buyers prefer, is usually more
expensive than an old car. Likewise, a luxury car is usually
more expensive than an ordinary car.

Input: Attribute A_i and its value range
       Web database D with the total number of tuples |D|
       Minimum bucket number n
Output: A single-attribute histogram H_Di for A_i
Method:
1.  If A_i is a categorical attribute
2.      For each category a_ij of A_i, probe D using a query with condition "A_i = a_ij" and get its occurrence count c
3.      Add a bucket (a_ij, c) into H_Di
4.  If A_i is a numerical attribute with value range [a_low, a_up)
5.      t = |D| / n
6.      low = a_low, up = a_up
7.      Do
8.          Probe D with a query with condition "low ≤ A_i < up" and get its occurrence count c
9.          If c ≤ t
10.             Add a bucket (low, up, c) into H_Di
11.             low = up, up = a_up
12.         Else
13.             up = low + (up - low) / 2
14.     While low < a_up
15. Return H_Di

Figure 2: Probing-based histogram construction algorithm.
With the above two assumptions, we divide the attributes into two
sets: Price attribute set, which only includes the attribute Price, and
non-Price attribute set, which includes all attributes except Price.
The two sets of attributes are handled in different ways.
According to the Price assumption, we assign a large score for a low
price and a small score for a high price. To avoid requiring human
involvement to assign a suitable score for a Price value, the Price
distribution in D is used to assign the scores. Given a Price value t, a
score v_t is assigned to it as the percentage of tuples in D whose Price
value is bigger than t:

    v_t = S_t / |D|
in which S_t denotes the number of tuples whose Price value is bigger
than t. In our experiments, the histogram for the Price attribute A_1,
whose construction method is described in Section 4, is used for the
Price preference score assignment.
Figure 3 shows the algorithm used to assign a score v to a Price
value t using the Price histogram. Given the Price histogram H_D1,
the frequency sum is first calculated (Line 1). Then we count the
number S_t of tuples whose Price value is bigger than t. For each
bucket in H_D1, if the lower boundary of the bucket is bigger than t, it
means that all the tuples in this bucket have a Price value bigger
than t, and the frequency of this bucket is added to S_t (Line 4). If t is
within the boundaries of the bucket, we assume that Price is uniformly
distributed within the bucket and a corresponding fraction of the
bucket's frequency is added to S_t (Line 5). If the upper boundary of the
bucket is smaller than t, it means that all the tuples in this bucket
have a Price value lower than t, and the bucket is ignored. Finally, the
score is obtained by dividing S_t by the frequency sum.
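A Python sketch mirroring this per-bucket accounting (our own illustration of the Figure 3 logic); buckets are assumed to be (low, up, count) triples as produced by the probing-based construction, and the in-bucket fraction is taken as the portion of the bucket above t, consistent with the definition of S_t.

def price_score(t, histogram):
    # histogram: list of (low, up, count) buckets for the Price attribute (H_D1).
    total = sum(c for _, _, c in histogram)
    above = 0.0
    for low, up, c in histogram:
        if low > t:                       # whole bucket is more expensive than t
            above += c
        elif low <= t < up:               # t falls inside this bucket
            above += c * (up - t) / (up - low)
    return above / total if total else 0.0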
For a value a_i of a non-Price attribute A_i, the difficulty of assigning
it a score is twofold:
1. How to make the attribute preference score assignment adaptive
for different attributes? Our goal is to have an intuitive
assignment for each attribute without human involvement. The
difficulty is that different attributes can have totally different
attribute values.
2. How to establish the correspondence between different
attributes? For example, how can we know that the
desirableness of "Year = 2005" is the same as the desirableness
of "Mileage = 5000" for most users?
We solve the problem in two steps. First, based on the non-Price
assumption, we convert a non-Price value a_i to a Price attribute
value t_i:
If A_i is a categorical attribute, t_i is the average price over all tuples
in D such that A_i = a_i.
If A_i is a numerical attribute, t_i is the average price over all tuples
in D such that a_i - d < A_i < a_i + d, where d is used to prevent too
few tuples (or none at all) being collected if we simply set A_i = a_i.
In our experiments, a bi-attribute histogram (A_1, A_i) is used when a_i
is converted to a Price value. The bi-attribute histograms are built in
the pre-processing step in a way similar to the histogram
construction described in Section 4.
Second, after converting all non-Price attribute values to Price
values, we use a uniform mechanism to assign them a preference
score. We assign a large score for a large Price value according to
the non-Price assumption. That is, given a converted Price value t_i, a
preference score v_i is assigned to it as the percentage of Price values
in D that are smaller than t_i. The algorithm for the converted-Price
preference score assignment can easily be adapted from the
algorithm in Figure 3.
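The two steps for a non-Price value can be sketched as follows (a simplified illustration; the dictionary of observed prices per attribute value stands in for the paper's bi-attribute histogram, and all names are our own):

def non_price_score(value, prices_by_value, all_prices):
    # prices_by_value: dict mapping each attribute value (or numeric bucket)
    # to the list of Price values observed with it in D.
    # all_prices: list of all Price values in D.
    prices = prices_by_value.get(value, [])
    if not prices:
        return 0.0
    avg_price = sum(prices) / len(prices)      # step 1: value -> average price
    # step 2: fraction of prices in D smaller than that average price
    return sum(1 for p in all_prices if p < avg_price) / len(all_prices)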
5.1 Examples of Attribute Preference Score
Assignment
Table 3 shows the average Price and assigned score for different
Make values for the carDB database used in our experiments. It can
be seen that the prices for different car makes fit our intuition well.
Luxury cars are evaluated to have a higher price than standard cars
and, consequently, are assigned a larger preference score. We found
that the attribute preference assignments for other attributes in
carDB are intuitive too.
Table 3: Make-Price-Score correspondence.
Make         Average Price   Score
Mitsubishi   12899           0.183
Volkswagen   16001           0.372
Honda        16175           0.373
Toyota       16585           0.387
Acura        20875           0.599
BMW          33596           0.893
Benz         37930           0.923
EXPERIMENTS
In this section, we describe our experiments, report the QRRE
experimental results and compare them with some related work. We
first introduce the databases we used and the related work for
comparison. Then we informally give some examples of query
result ranking to provide some intuition for our experiments. Next, a
more formal evaluation of the ranking results is presented. Finally,
the running time statistics are presented.
6.1 Experimental Setup
To evaluate how well different ranking approaches capture a user's
preference, five postgraduate students were invited to participate in
the experiments and behave as buyers from the E-commerce
databases.
Input: Price histogram H_D1 = {(c_1, low_1, up_1), ..., (c_m, low_m, up_m)}
       Price value t
Output: Price score v
Method:
1.  sum = Σ_i c_i
2.  S_t = 0
3.  For i = 1..m
4.      if (low_i > t) S_t = S_t + c_i
5.      if (low_i < t < up_i) S_t = S_t + c_i * (up_i - t) / (up_i - low_i)
6.  v = S_t / sum
7.  return v

Figure 3: Price value score assignment algorithm.
6.1.1 Databases
For our evaluation, we set up two databases from two domains in E-commerce
. The first database is a used car database carDB(Make,
Model, Year, Price, Mileage, Location) containing 100,000 tuples
extracted from Yahoo! Autos. The attributes Make, Model, Year
and Location are categorical attributes and the attributes Price and
Mileage are numerical attributes. The second database is a real
estate database houseDB(City, Location, Bedrooms, Bathrooms, Sq
Ft, Price) containing 20,000 tuples extracted from Yahoo! Real
Estate. The attributes City, Location, Bedrooms and Bathrooms are
categorical attributes and the attributes Sq Ft and Price are
numerical attributes. To simulate the Web databases for our
experiments we used MySQL on a P4 3.2-GHz PC with 1GB of
RAM . We implemented all algorithms in JAVA and connected to
the RDBMS by DAO.
6.1.2 Implemented Algorithms
Besides QRRE described above, we implemented two other ranking
methods, which are described briefly below, to compare with
QRRE.
RANDOM ranking model: In the RANDOM ranking model, the
tuples in the query result are presented to the user in a random order.
The RANDOM model provides a baseline to show how well QRRE
can capture the user behavior over a random method.
Probabilistic Information Retrieval (PIR) ranking model: A
probabilistic information retrieval (PIR) technique, which has been
successfully used in the Information Retrieval field, is used in [8]
for ranking query results. This technique addresses the same
problem as does QRRE. In PIR, given a tuple t, its ranking score is
given by the following equation:
    Score(t) = Π_{y∈Y} [ p(y | W) / p(y | D) ] · Π_{y∈Y} Π_{x∈X} [ p(x | y, W) / p(x | y, D) ]
in which X is the specified attributes, Y is the unspecified attributes,
W is a past query workload and p denotes the probability.
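For comparison purposes only, the formula above can be evaluated as in the following sketch; the probability functions are placeholders for estimates derived from the workload W and the database D as in [8], and none of this is part of QRRE.

def pir_score(unspecified_values, specified_values, p_w, p_d, p_xy_w, p_xy_d):
    # unspecified_values: values of the unspecified attributes Y of tuple t.
    # specified_values: values of the specified attributes X in the query.
    # p_w(y), p_d(y): estimates of p(y | W) and p(y | D).
    # p_xy_w(x, y), p_xy_d(x, y): estimates of p(x | y, W) and p(x | y, D).
    score = 1.0
    for y in unspecified_values:
        score *= p_w(y) / p_d(y)
        for x in specified_values:
            score *= p_xy_w(x, y) / p_xy_d(x, y)
    return score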
As mentioned in Section 2, PIR work focuses on point queries
without considering range queries. Therefore, when applying the
PIR ranking model, the numerical attributes Price and Mileage in
carDB and Sq Ft and Price in houseDB are discretized into
meaningful ranges as categories, which in reality requires a domain
expert.
In PIR, a workload is required to obtain the conditional probability
used to measure the correlation between specified attribute values
present in the query and the unspecified attributes. In our
experiments, we requested 5 subjects to behave as different kinds of
buyers, such as rich people, clerks, students, women, etc. and post
queries against the databases. We collected 200 queries for each
database and these queries are used as the workload W for the PIR
model.
6.2 Examples of Query Result Ranking
When we examine the query result rankings, we find that the
ranking results of both QRRE and PIR are much more reasonable
and intuitive than that of RANDOM. However, there are some
interesting examples that show that the QRRE rankings are superior
to those of PIR. We found that the ranking result of QRRE is more
reasonable than that of PIR in several ways:
QRRE can discover an assumption that is implicitly held by a
buyer. For example, for a query with condition "Mileage < 5000",
QRRE ranks cars with Year = 2006 as the top recommendation.
Intuitively, this is because a 2006 model
year car usually has lower mileage and this is what the user is
looking for. However, PIR is unable to identify the importance
of Year because most users assume that Mileage itself is enough
to represent their preference and, consequently, the relationship
between Year and Mileage is not reflected in the workload.
In PIR, given a numerical attribute, its value range needs to be
discretized into meaningful categories and the values within a
category are assumed to be the same during ranking. For
example, if we assign cars with "Mileage < 10000" to the
category "Mileage = small", then PIR will treat
"Mileage = 2000" the same as "Mileage = 9000", which is
obviously unreasonable. In contrast, QRRE will identify that
"Mileage = 2000" is more desirable than "Mileage = 9000".
QRRE considers the value difference of the specified attributes
of the tuples in the query result, while PIR ignores the difference.
For example, for a query with condition "Make = Mercedes-Benz
and Model = ML500 and Year > 2003", QRRE usually
ranks the cars made in the most recent year first. However, PIR
does not take the Year difference in the query result records into
consideration during ranking.
Likewise, QRRE often produces a ranking better than does PIR for
houseDB. The actual evaluation in the following section confirms
these observations.
6.3 Ranking Evaluation
We now present a more formal evaluation of the query result
ranking quality. A survey is conducted to show how well each
ranking algorithm captures the user's preference. We evaluate the
query results in two ways: average precision and user preference
ranking.
6.3.1 Average Precision
In this experiment, each subject was asked to submit three queries
for carDB and one query for houseDB according to their preference.
Each query had on average 2.2 specified attributes for carDB and
2.4 specified attributes for houseDB. We found that all the attributes
of carDB and houseDB were specified in the collected queries at
least once. On average, for carDB, each query had a query result of
686 tuples, with the maximum being 4,213 tuples and the minimum
116 tuples. It can be seen that the many-query-result problem is a
common problem in reality. Each query for houseDB has a query
result of 166 tuples on average.
Since it is not practical to ask the subjects to rank the whole query
result for a query, we adopt the following strategy to compare the
performance of different ranking approaches. For each implemented
ranking algorithm, we collected the first 10 tuples that it
recommended. Hence, thirty tuples are collected in total. If there is
overlap among the recommended tuples from different algorithms,
we extract more tuples using the RANDOM algorithm so that thirty
unique tuples are collected in total.
Next, for each of the fifteen
queries, each subject was asked to rank the top 10 tuples as the
relevant tuples that they preferred most from the thirty unique tuples
collected for each query. During ranking, they were asked to behave
like real buyers to rank the records according to their preferences.
Table 4: Average precision for different ranking methods for
carDB.
Query     QRRE    PIR     RANDOM
q1        0.72    0.52    0.08
q2        0.62    0.62    0.06
q3        0.72    0.22    0.06
q4        0.52    0.64    0.04
q5        0.84    0.78    0.06
q6        0.68    0.36    0.04
q7        0.92    0.46    0.02
q8        0.88    0.64    0.06
q9        0.78    0.62    0.04
q10       0.74    0.64    0.04
q11       0.56    0.66    0.06
q12       0.86    0.76    0.08
q13       0.84    0.36    0.02
q14       0.58    0.38    0.04
q15       0.76    0.66    0.06
Average   0.735   0.555   0.048
We use the Precision/Recall metrics to evaluate how well the user's
preference is captured by the different ranking algorithms. Precision
is the ratio obtained by dividing the number of retrieved tuples that
are relevant by the total number of retrieved tuples. Recall is the
ratio obtained by dividing the number of relevant tuples by the
number of tuples that are retrieved. In our experiments, both the
relevant tuples and the retrieved tuples are 10, which make the
Precision and Recall to be equal. Table 4 shows the average
precision of the different ranking methods for each query. It can be
seen that both QRRE and PIR consistently have a higher precision
than RANDOM. For 11 queries out of 15, the precision of QRRE is
higher than that of PIR. The precision of QRRE is equal to that of
PIR for two queries and is lower than that of PIR for the remaining
two queries. QRRE's average precision is 0.18 higher than that of
PIR. QRRE has a precision higher than 0.5 for each query while
PIR has a precision as low as 0.22 for q3. It should be noted that
there is some overlap between the top-10 ranked results of QRRE
and top-10 ranked results of PIR for most queries. Figure 4 and
Figure 5 show the average precision of the three ranking methods
graphically for both carDB and houseDB.
Figure 4: Average precision for different ranking methods for carDB.
Figure 5: Average precision for different ranking methods for houseDB.
6.3.2 User Preference Ranking
In this experiment, 10 queries were collected from the 5 subjects for
carDB and 5 queries were collected for houseDB. After getting the
query results, they were ranked using the three ranking methods.
The ranking results were then provided to the subjects in order to let
them select which result they liked best.
Table 5 and Table 6 show the user preference ranking (UPR) of the
different ranking methods for each query for carDB and houseDB,
respectively. It can be seen that again both QRRE and PIR greatly
outperform RANDOM. In most cases, the subjects preferred the
ranking results of QRRE to that of PIR.
Table 5: UPR for different ranking methods for carDB.
Query     QRRE   PIR    RANDOM
q1        0.8    0.2    0
q2        1      0      0
q3        0.6    0.4    0
q4        0.4    0.6    0
q5        0.4    0.6    0
q6        0.8    0.2    0
q7        1      0      0
q8        0.6    0.2    0.2
q9        0.8    0.2    0
q10       0.8    0.2    0
Average   0.72   0.26   0.02
Table 6: UPR for different ranking methods for houseDB.
Query     QRRE   PIR    RANDOM
q1        0.4    0.4    0.2
q2        0.8    0.2    0
q3        1      0      0
q4        0.4    0.6    0
q5        0.6    0.4    0
Average   0.66   0.32   0.02
While these preliminary experiments indicate that QRRE is
promising and better than the existing work, a much larger scale
user study is necessary to conclusively establish this finding.
6.4 Performance Report
Using QRRE, histograms need to be constructed before the query
results can be ranked. The histogram construction time depends on
the number of buckets and the time to query the web to get the
number of occurrences for each bucket. However, in most cases the
histogram usually does not change very much over time and so
needs to be constructed only once in a given time period.
The query result ranking in the online processing part includes four
modules: the attribute weight assignment module, the attribute-value
preference score assignment module, the ranking score calculation
module and the ranking score sorting module. Each of the first three
modules has a time complexity of O(n) after constructing the
histogram, where n is the number of query results, and the ranking
score sorting module has a time complexity of O(nlog(n)). Hence,
the overall time complexity for the online processing stage is
O(nlog(n)).
Figure 6 shows the online execution time of the queries over carDB
as a function of the number of tuples in the query result. It can be
seen that the execution time of QRRE grows almost linearly with
the number of tuples in the query result. This is because ranking
score sorting is fairly quick even for a large amount of data and thus
most of the running time is spent in the first three modules.
Figure 6: Execution times for different numbers of query results for carDB; y-axis: execution time (ms), x-axis: number of tuples in the query result.
CONCLUSION
In this paper, a novel automated ranking approach for the many-query
-result problem in E-commerce is proposed. Starting from the
user query, we assume that the specified attributes are important for
the user. We also assume that the attributes that are highly
correlated with the query also are important to the user. We assign a
weight for each attribute according to its importance to the user.
Then, for each value in each tuple of the query result, a preference
score is assigned according to its desirability in the E-commerce
context, where users are assumed to prefer products with
lower prices. All preference scores are combined according to the
attribute weight assigned to each attribute. No domain knowledge or
user feedback is required in the whole process. Preliminary
experimental results indicate that QRRE captures the user
preference fairly well and better than existing work.
We acknowledge the following shortcoming of our approach, which
will be the focus for our future research. First, we do not deal with
string attributes, such as book titles or the comments for a house,
contained in many Web databases. It would be extremely useful to
find a method to incorporate string attributes into QRRE. Second,
QRRE has only been evaluated on small-scale datasets. We realize
that a large, comprehensive benchmark should be built to
extensively evaluate a query result ranking system, both for QRRE
and for future research. Finally, QRRE has been specifically tailored
for E-commerce Web databases. It would be interesting to extend
QRRE to also deal with non-E-commerce Web databases.
ACKNOWLEDGMENTS
This research was supported by the Research Grants Council of
Hong Kong under grant HKUST6172/04E.
REFERENCES
[1] A. Aboulnaga and S. Chaudhuri. "Self-tuning Histograms:
Building Histograms Without Looking at Data," Proc. of the
ACM SIGMOD Conf., 181-192, 1999.
[2] S. Agrawal, S. Chaudhuri and G. Das. "DBXplorer: A System
for Keyword Based Search over Relational Databases," Proc.
of the 18th Intl. Conf. on Data Engineering, 5-16, 2002.
[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information
Retrieval, Addison-Wesley, 1999.
[4] A. Balmin, V. Hristidis and Y. Papakonstantinou.
"ObjectRank: Authority-Based Keyword Search in
Databases," Proc. of the 30th Intl. Conf. on Very Large
Databases, 564-575, 2004.
[5] G. Bhalotia, C. Nakhe, A. Hulgeri, S. Chakrabarti and S.
Sudarshan. "Keyword Searching and Browsing in Databases
using BANKS," Proc. of the 18th Intl. Conf. on Data Engineering,
431-440, 2002.
[6] N. Bruno, S. Chaudhuri and L. Gravano. "STHoles: A
Multidimensional Workload-aware Histogram," Proc. of the
ACM SIGMOD Conf., 211-222, 2001.
[7] K. Chakrabarti, S. Chaudhuri and S. Hwang. "Automatic
Categorization of Query Results," Proc. of the ACM SIGMOD
Conf., 755-766, 2004.
[8] S. Chaudhuri, G. Das, V. Hristidis and G. Weikum.
"Probabilistic Ranking of Database Query Results," Proc. of
the Intl. Conf. on Very Large Databases, 888-899, 2004.
[9] K. Chakrabarti, K. Porkaew and S. Mehrotra. "Efficient Query
Refinement in Multimedia Databases," Proc. of the 16th Intl. Conf.
on Data Engineering, 196, 2000.
[10] W. Cohen. "Integration of Heterogeneous Databases Without
Common Domains Using Queries Based on Textual
Similarity," Proc. of the ACM SIGMOD Conf., 201-212, 1998.
[11] W. Cohen. "Providing Database-like Access to the Web Using
Queries Based on Textual Similarity," Proc. of the ACM
SIGMOD Conf., 558-560, 1998.
[12] W.B. Croft and J. Lafferty. Language Modeling for
Information Retrieval. Kluwer 2003.
[13] R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification.
John Wiley & Sons, USA, 2001.
[14] N. Fuhr. "A Probabilistic Framework for Vague Queries and
Imprecise Information in Databases," Proc. of the 16th Intl.
Conf. on Very Large Databases, 696-707, 1990.
[15] N. Fuhr. "A Probabilistic Relational Model for the Integration
of IR and Databases," Proc. of the ACM SIGIR Conf., 309-317,
1993.
[16] F. Geerts, H. Mannila and E. Terzi. "Relational Link-based
Ranking," Proc. of the 30th Intl. Conf. on Very Large
Databases, 552-563, 2004.
[17] V. Hristidis and Y. Papakonstantinou. "DISCOVER: Keyword
Search in Relational Databases," Proc. of the 28th Intl. Conf. on
Very Large Databases, 670-681, 2002.
[18] G. Koutrika and Y.E. Ioannidis. "Personalization of Queries in
Database Systems," Proc. of the 20th Intl. Conf. on Data
Engineering, 597-608, 2004.
[19] G. Koutrika and Y.E. Ioannidis. "Constrained Optimalities in
Query Personalization," Proc. of the ACM SIGMOD Conf., 73-84, 2005.
[20] Y.E. Ioannidis. "The History of Histograms (abridged)," Proc.
of the 29th Intl. Conf. on Very Large Databases, 19-30, 2003.
[21] W. Kieling. "Foundations of Preferences in Database
Systems," Proc. of the 28th Intl. Conf. on Very Large
Databases, 311-322, 2002.
[22] R. Kooi. The Optimization of Queries in Relational Databases.
PhD Thesis, Case Western Reserve University, 1980.
[23] I. Muslea and T. Lee. "Online Query Relaxation via Bayesian
Causal Structures Discovery," Proc. of the AAAI Conf., 831-836, 2005.
[24] U. Nambiar and S. Kambhampati. "Answering Imprecise
Queries over Autonomous Web Databases," Proc. of the 22nd Intl.
Conf. on Data Engineering, 45, 2006.
[25] Z. Nazeri, E. Bloedorn and P. Ostwald. "Experiences in
Mining Aviation Safety Data," Proc. of the ACM SIGMOD
Conf., 562-566, 2001.
[26] M. Ortega-Binderberger, K. Chakrabarti and S. Mehrotra. "An
Approach to Integrating Query Refinement in SQL," Proc. Intl.
Conf. on Extending Data Base Technology, 15-33, 2002.
[27] G. Piatetsky-Sharpiro and C. Connell. "Accurate Estimation of
the Number of Tuples Satisfying a Condition," Proc. of the
ACM SIGMOD Conf., 256-276, 1984.
[28] Y. Rui, T.S. Huang and S. Merhotra. "Content-Based Image
Retrieval with Relevance Feedback in MARS," Proc. IEEE
Intl. Conf. on Image Processing, 815-818, 1997.
[29] G. Salton, A. Wong and C.S. Yang. "A Vector Space Model
for Information Retrieval," Communications of the ACM
18(11), 613-620, 1975.
[30] K. Sparck Jones, S. Walker and S.E. Robertson. "A
Probabilistic Model of Information Retrieval: Development
and Comparative Experiments - Part 1," Inf. Process.
Management 36(6), 779-808, 2000.
[31] K. Sparck Jones, S. Walker and S.E. Robertson. "A
Probabilistic Model of Information Retrieval: Development
and Comparative Experiments - Part 2," Inf. Process.
Management 36(6), 809-840, 2000.
[32] E.M. Voorhees. "The TREC-8 Question Answering Track
Report," Proc. of the 8th Text Retrieval Conf., 1999.
[33] L. Wu, C. Faloutsos, K. Sycara and T. Payne. "FALCON:
Feedback Adaptive Loop for Content-Based Retrieval," Proc.
of the 26th Intl. Conf. on Very Large Databases, 297-306, 2000.
161 | Query Type Classification for Web Document Retrieval | The heterogeneous Web exacerbates IR problems and short user queries make them worse. The contents of web documents are not enough to find good answer documents. Link information and URL information compensates for the insufficiencies of content information. However, static combination of multiple evidences may lower the retrieval performance . We need different strategies to find target documents according to a query type. We can classify user queries as three categories, the topic relevance task, the homepage finding task, and the service finding task. In this paper, a user query classification scheme is proposed. This scheme uses the difference of distribution, mutual information , the usage rate as anchor texts, and the POS information for the classification. After we classified a user query, we apply different algorithms and information for the better results. For the topic relevance task, we emphasize the content information, on the other hand, for the homepage finding task, we emphasize the Link information and the URL information. We could get the best performance when our proposed classification method with the OKAPI scoring algorithm was used. | INTRODUCTION
The Web is rich with various sources of information. It
contains the contents of documents, web directories, multi-media
data, user profiles and so on. The massive and heterogeneous
web document collections as well as the unpredictable
querying behaviors of typical web searchers exacerbate
Information Retrieval (IR) problems. Retrieval approaches
based on a single source of evidence suffer from
weaknesses that can hurt the retrieval performance in certain
situations [5]. For example, content-based IR approaches
have a difficulty in dealing with the diversity in vocabulary
and the quality of web documents, while link-based approaches
can suffer from an incomplete or noisy link structure
.
Combining multiple evidences compensates for the
weakness of a single evidence [17]. Fusion IR studies have
repeatedly shown that combining multiple sources of evidence
can improve retrieval performance [5][17].
However, previous studies did not consider a user query
in combining evidences [5][7][10][17]. Not only documents in
the Web but also users' queries are diverse. For example, for
the user query `Mutual Information', if we rely on link information
too heavily, a well-known site that has `mutual funds'
and `information' as index terms gets a higher rank. For
the user query `Britney's Fan Club', if we rely on content information
too heavily, yahoo or lycos's web directory pages get a
higher rank instead of Britney's fan club site. Like these
examples, combining content information and link information
is not always good. We have to use different strategies
to meet the need of a user. User queries can be classified as
three categories according to their intent [4].
topic relevance task (informational)
homepage finding task (navigational)
service finding task (transactional)
The topic relevance task is a traditional ad hoc retrieval task
where web documents are ranked by decreasing likelihood of
meeting the information need provided in a user query [8].
For example, `What is a prime factor?' or `prime factor' is
a query of the topic relevance task. The goal of this query is
finding the meaning of `prime factor'. The homepage finding
task is a known-item task where the goal is to find the
homepage (or site entry page) of the site described in a user
query. Users are interested in finding a certain site. For
example, `Where is the site of John Hopkins Medical Institutions
?' or `John Hopkins Medical Institutions' is a query
of the homepage finding task. The goal of this query is finding
the entry page of `John Hopkins Medical Institutions'.
The service finding task is a task where the goal is to find
web documents that provide the service described in a user
query. For example, `Where can I buy concert tickets?' or
`buy concert tickets' is a query of the service finding task.
The goal of this query is finding documents where they can
buy concert tickets.
Users may want different documents with the same query.
We cannot always tell the class of a query clearly. But we can
tell most people want a certain kind of documents with this
query. In this paper, we calculate the probability that the
class of a user query is the topic relevance task or the homepage
finding task. Based on this probability, we combine
multiple evidences dynamically. In this paper, we consider
the topic relevance task and the homepage finding task only.
Because the proposed method is based on the difference of
databases, we can apply the same method to classify the
service finding task.
In this paper, we present a user query classification method
and a combining method for each query type. In section 2,
we describe various types of information (Content, Link, and
URL information). Section 3 lists the differences of search
tasks and the properties of Content, Link, and URL information
. In section 4, we present the model of a query classification
. In section 5, we experiment with our proposed
model. Conclusion is described in section 6.
MULTIPLE SOURCES OF INFORMATION
In this section, we explain various sources of information
for the web document retrieval. There are three types of information
, Content information, Link information, and URL
information.
2.1
Content Information
There are multiple types of representations for a document
. These representations typically contain titles, anchor
texts, and main body texts [5]. A title provides the main
idea and the brief explanation of a web document. An anchor
text provides the description of linked web documents
and files. An anchor text often provides more accurate description
of a web document than the document itself.
We usually use tf and df to calculate the relevance of a
given web documents [1]. tf is the raw frequency of a given
term inside a document. It provides one measure of how
well that term describes the document contents. df is the
number of documents in which the index term appears. The
motivation for using an inverse document frequency is that
terms that appear in many documents are not very useful
for distinguishing a relevant document from a non-relevant
one. There are various scoring algorithms that use tf and
df . These scoring algorithms include the normalization and
the combination of each factor, tf and df .
2.2
Link Information
A hyperlink in a web document is a kind of citation. The
essential idea is that if page u has a link to page v, then the
author of u is implicitly assigning some importance to page
v. Since we can represent the Web as a graph, we can use
graph theories to help us make a search engine that returns
the most important pages first. The PageRank PR(A) of
a page A is given as follows [13]:
PR(A) = (1 - d) + d * ( PR(T_1)/C(T_1) + ... + PR(T_n)/C(T_n) )    (1)
We assume page A has pages T_1 ... T_n that point to it. The
parameter d is a damping factor that can be set between 0
and 1. Also, C(A) is defined as the number of links going out
of page A. PR(A) can be calculated using a simple iterative
algorithm, and corresponds to the principal eigenvector
of the normalized link matrix of the Web [3].
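A small sketch of the iterative computation behind Eq. (1); the damping factor and iteration count below are illustrative choices, not values taken from the text:

    # Iterative PageRank over a link graph given as {page: [pages it links to]}.
    def pagerank(out_links, d=0.85, iterations=50):
        pages = list(out_links)
        pr = {p: 1.0 for p in pages}          # initial PR values
        for _ in range(iterations):
            new_pr = {}
            for p in pages:
                # sum PR(T)/C(T) over all pages T that link to p
                incoming = sum(pr[t] / len(out_links[t])
                               for t in pages if p in out_links[t])
                new_pr[p] = (1 - d) + d * incoming
            pr = new_pr
        return pr

    print(pagerank({"A": ["B"], "B": ["A", "C"], "C": ["A"]}))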
2.3
URL Information
The URL string of a site entry page often contains the
name or acronym of the corresponding organization. Therefore
, an obvious way of exploiting URL information is trying
to match query terms and URL terms. Additionally, URLs
of site entry pages tend to be higher in a server's directory
tree than other web documents, i.e. the number of slashes
(`/') in an entry page URL tends to be relatively small.
Kraaij et al. suggested 4 types of URLs [16]:
root: a domain name (e.g. http://trec.nist.gov)
subroot: a domain name followed by a single directory (e.g. http://trec.nist.gov/pubs/)
path: a domain name followed by an arbitrarily deep path (e.g. http://trec.nist.gov/pubs/trec9/papers)
file: anything ending in a filename other than `index.html' (e.g. http://trec.nist.gov/pubs/trec9/t9proc.html)
Kraaij et al. estimated a prior probability (URLprior) of
being an entry page on the basis of the URL type, for all
URL types t (root, subroot, path, and file).
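A sketch of how a URL could be mapped to one of these four types; the exact rules used by Kraaij et al. may differ, so this is only an approximation:

    from urllib.parse import urlparse

    def url_type(url):
        """Classify a URL as 'root', 'subroot', 'path', or 'file' (approximate rules)."""
        path = urlparse(url).path
        segments = [s for s in path.split("/") if s]
        if not segments:
            return "root"                      # domain name only
        last = segments[-1]
        if "." in last and last != "index.html":
            return "file"                      # ends in a filename
        if last == "index.html":
            segments = segments[:-1]           # treat index.html as its directory
        if len(segments) <= 1:
            return "subroot" if segments else "root"
        return "path"

    print(url_type("http://trec.nist.gov/pubs/"))   # -> 'subroot'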
2.4
Combination of Information
We can combine results of each search engine or scores of
each measure to get better results. Croft proposed the INQUERY
retrieval system, based on the inference network,
to combine multiple evidences [5]. The inference network
model is a general model for combining information. It is
data-level fusion. The model is based on probabilistic updating
of the values of nodes in the network, and many retrieval
techniques and information can be implemented by configuring
the network properly.
Several researchers have experimented with linearly combining
the normalized relevance scores (s_i) given to each
document [7][10][16]:
score(d) = sum over i of alpha_i * s_i(d)    (2)
It requires training for the weight alpha_i given to each input
system. For example, we can get a better result by combining
content information and URL type information with the
following weights [16]:
score(d) = 0.7 * content + 0.3 * URLprior    (3)
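A sketch of the linear interpolation in Eqs. (2) and (3), assuming the individual scores are already normalized per document:

    def combine_scores(score_lists, weights):
        """Linearly combine normalized per-document scores from several evidences.
        score_lists: list of dicts mapping doc id -> normalized score;
        weights: matching list of floats."""
        combined = {}
        for scores, w in zip(score_lists, weights):
            for doc, s in scores.items():
                combined[doc] = combined.get(doc, 0.0) + w * s
        return combined

    # Eq. (3): 0.7 * content score + 0.3 * URL prior (toy scores)
    final = combine_scores([{"d1": 0.9, "d2": 0.4}, {"d1": 0.1, "d2": 0.8}], [0.7, 0.3])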
TOPIC RELEVANCE TASK AND HOMEPAGE FINDING TASK
In this section, we show properties of Content information,
Link information, and URL information in each search task.
Besides, we will propose the method for linearly combining
information for each task.
We use the TREC data collection to show the differences between
the search tasks. We made a simple search engine that uses
a variation of the OKAPI scoring function [15]. Given a
query Q, the scoring formula is:
score = sum over t in (Q ∩ D_d) of TF_{d,t} * IDF_t    (4)
TF_{d,t} = 0.4 + 0.6 * tf_{d,t} / (tf_{d,t} + 0.5 + 1.5 * doclen_d / avg_doclen)    (5)
IDF_t = log( (N + 0.5) / df_t ) / log(N + 1)    (6)
N is the number of documents in the collection, tf_{d,t} is the
number of occurrences of an index term t in a document d,
and df_t is the number of documents in which t occurs.
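A sketch of the OKAPI variant in Eqs. (4)-(6); the function signature and statistics layout are assumptions made for illustration:

    import math

    def okapi_score(query_terms, doc_tf, doclen, avg_doclen, df, N):
        """Score one document for a query using the TF/IDF variant of Eqs. (4)-(6).
        doc_tf: term -> frequency in this document; df: term -> document frequency."""
        score = 0.0
        for t in query_terms:
            if t not in doc_tf or t not in df:
                continue                 # only terms shared by query and document contribute
            tf = 0.4 + 0.6 * doc_tf[t] / (doc_tf[t] + 0.5 + 1.5 * doclen / avg_doclen)
            idf = math.log((N + 0.5) / df[t]) / math.log(N + 1)
            score += tf * idf
        return score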
We use the data for the web track, the 10-gigabyte WT10g
collection [2], distributed by CSIRO [6]. We use TREC-2001
topic relevance task queries (topics 501-550) for the topic relevance
task, and 145 queries for the homepage finding task
[8]. For the homepage finding task, NIST found a homepage
within WT10g and then composed a query designed to locate
it.
We used the anchor text representation (Anchor) and the
common content text representation (Common) for indexing
. Every document in the anchor text representation has
anchor texts and the title as content, and excludes a body
text. Consequently the anchor text representation has brief
or main explanations of a document. We used two other evidences
for a scoring function besides the OKAPI score. One
is URLprior for URL information and the other is PageRank
for Link information. We linearly interpolated Content
information (OKAPI score), URLprior, and PageRank. We
call this interpolation CMB.
rel(d) = 0.65 * Content Information + 0.25 * URL Information + 0.1 * Link Information    (7)
We used `and' and `sum' operators for matching query
terms [1]. `and' operator means that the result document
has all query terms in it. `sum' operator means that a result
document has at least one query term in it.
Table 1 shows the average precision of the topic relevance
task and the MRR of the homepage finding task [8]. The
first column in the table 1 means the method that we used
for indexing and scoring. For example, `Anchor and CMB'
means that we used the anchor text representation for indexing
, `and' operator for query matching, and the OKAPI
score, PageRank and URLprior for scoring.
The average
precision is defined as the average of the precision obtained
at the rank of each relevant document.
P_avg = (1/|R|) * sum over d in R of |R_{r(d)}| / r(d)    (8)
R is the set of all relevant documents and R_{r(d)} is the
set of relevant documents with rank r(d) or better. MRR
(Mean Reciprocal Rank) is the main evaluation measure for
the homepage finding task. MRR is based on the rank of
the first correct document (answer_i rank) according to the
following formula:
MRR = (1/#queries) * sum from i=1 to #queries of 1 / answer_i_rank    (9)
MAX represents the best score of a search engine that was submitted
in TREC-2001. AVG represents the average score of
all search engines that were submitted in TREC-2001.

Table 1: Topic Relevance Task vs. Homepage Finding Task
model             Topic P_avg   Homepage MRR
Anchor and        0.031         0.297
Anchor and CMB    0.031         0.431
Anchor sum        0.034         0.351
Anchor sum CMB    0.034         0.583
Common and        0.131         0.294
Common and CMB    0.122         0.580
Common sum        0.182         0.355
Common sum CMB    0.169         0.673
MAX               0.226         0.774
AVG               0.145         0.432
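A small sketch of the two evaluation measures in Eqs. (8) and (9); the input formats are assumed for illustration:

    def average_precision(ranked_docs, relevant):
        """Eq. (8): average, over relevant documents, of precision at each relevant rank."""
        precisions = []
        hits = 0
        for rank, doc in enumerate(ranked_docs, start=1):
            if doc in relevant:
                hits += 1
                precisions.append(hits / rank)
        return sum(precisions) / len(relevant) if relevant else 0.0

    def mean_reciprocal_rank(rank_of_first_answer):
        """Eq. (9): rank_of_first_answer maps each query to the rank of its first
        correct document (None when no correct document was retrieved)."""
        if not rank_of_first_answer:
            return 0.0
        total = sum(1.0 / r for r in rank_of_first_answer.values() if r)
        return total / len(rank_of_first_answer)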
We got the better result with the common content text
representation than the anchor text representation in the
topic relevance task. A title and anchor texts do not have
enough information for the topic relevance task.
On the
other hand, we could get the similar performance with the
anchor text representation in the homepage finding task.
URL information and Link information are good for the
homepage finding task but bad for the topic relevance task.
In the topic relevance task, we lost our performance by combining
URL and Link information.
The query of the topic relevance task usually consists of
main keywords that are relevant to some concept or the explanation
of what they want to know. However, we cannot
assume that other people use same expressions and keywords
to explain what a user wants to know. Therefore we could
not get a good result with `and' operator in the topic relevance
task. But on the other hand the query of the homepage
finding task consists of entity names or proper nouns.
Therefore we could have good results with `and' operator
when we can have a result document. However, the MRR
of `Anchor and CMB' is lower than that of `Common sum
CMB' in the homepage finding task. `Anchor and CMB'
method did not retrieve a document for 31 queries. To compensate
for this sparseness problem, we combined the results
of `Anchor and CMB' and `Common sum CMB' . This
combined result showed 0.730 in the homepage finding task.
When we combined the results of `Anchor and ' and `Common
sum' , it showed 0.173 in the topic relevance task. This
implies that the result documents with `and' operator are
good and useful in the homepage finding task.
We can conclude that we need different retrieval strategies
according to the category of a query. We have to use the field
information (title, body, and anchor text) of each term, and
combine evidences dynamically to get good results. In the
topic relevance task, the body text of a document is good for
indexing, `sum' operator is good for query term matching,
and combining URL and Link information are useless. On
the other hand, in the homepage finding task, anchor texts
and titles are useful for indexing, `and' operator is also good
for query term matching, and URL and Link information
is useful. By combining results from main body text and
anchor texts and titles we can have the better performance.
USER QUERY CLASSIFICATION
In this section, we present the method for making a language
model for a user query classification.
4.1
Preparation for Language Model
We may use the question type of a query to classify the
category of a user query. For example, "What is a two electrode
vacuum tube?" is a query of the topic relevance task.
"Where is the site of SONY?" is a query of the homepage
finding task. We can assume the category of a query with an
interrogative pronoun and cue expressions (e.g. `the site of').
However, people do not provide natural language queries to
a search engine. They usually use keywords for their queries.
It is not easy to anticipate natural language queries. In this
paper, we assume that users provide only main keywords for
their queries.
We define a query Q as a set of words:
Q = {w_1, w_2, . . . , w_n}    (10)
To see the characteristics of each query class, we use two
query sets. For the topic relevance task, TREC-2000 topic
relevance task queries (topics 451-500) are used. For the
homepage finding task, queries for randomly selected 100
homepages (1) are used. We call them QUERY_T-TRAIN and QUERY_H-TRAIN.
We divided WT10g into two sets, DB_TOPIC and DB_HOME.
If the URL type of a document is the `root' type, we put the
document into DB_HOME. Others are added to DB_TOPIC.
According to the report of [16], our division method can
get site entry pages with 71.7% precision. Additionally, we
put virtual documents into DB_HOME with anchor texts. If
a linked document is in DB_TOPIC, then we make a virtual
document that consists of anchor texts and put it into
DB_HOME. If a linked document is in DB_HOME, then we
add anchor texts to the original document. Usually a site entry
page does not have many words. It is not an explanatory
document for some topic or concept, but the brief explanation
of a site. We can assume that site entry pages have a
different usage of words. If we find distinctive features for
site entry pages, then we can discriminate the category of a
given query.
#DB_TOPIC and #DB_HOME mean the number of documents
in DB_TOPIC and DB_HOME respectively. However,
since most documents in DB_HOME have a short length,
we normalized the number of documents with the following equations:
#DB_TOPIC = # of documents in DB_TOPIC    (11)
#DB_HOME = # of documents in DB_HOME * (avg_doclength_HOME / avg_doclength_TOPIC)    (12)
(1) Available at http://www.ted.cmis.csiro.au/TRECWeb/Qrels/
4.2
Distribution of Query Terms
`Earthquake' occurs more frequently in DB_TOPIC, but
`Hunt Memorial Library' shows a high relative frequency
in DB_HOME. General terms tend to have the same distribution
regardless of the database. If the difference in distribution
is larger than expected, this tells whether a given query is in
the topic relevance task class or the homepage finding task
class. We can calculate the occurrence ratio of a query with
the following equation [11]:
Dist(w_1, . . . , w_n) = n * C(w_1, . . . , w_n) / (sum from i=1 to n of C(w_i))    (13)
C(w) is the number of documents that have w as an index
term; the df of w is used for C(w). C(w_1, . . . , w_n) is the number
of documents that have all of w_1, . . . , w_n as index terms. To see
the distribution difference of a query, we use the following
ratio equation:
diff_Dist(Q) = Dist_HOME(Q) / Dist_TOPIC(Q)    (14)
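A sketch of Eqs. (13) and (14), computed from document-frequency counts; the function names and the way counts are passed in are hypothetical:

    def dist(query_terms, joint_df, term_df):
        """Eq. (13): n * C(w1..wn) / sum of C(wi), where C is a document-frequency count."""
        n = len(query_terms)
        denom = sum(term_df[w] for w in query_terms)
        return n * joint_df / denom if denom else 0.0

    def diff_dist(query_terms, counts_home, counts_topic):
        """Eq. (14): ratio of the query's Dist value in DB_HOME to its value in DB_TOPIC.
        counts_* are (joint_df, term_df) pairs measured on each database (assumed precomputed)."""
        d_home = dist(query_terms, *counts_home)
        d_topic = dist(query_terms, *counts_topic)
        return d_home / d_topic if d_topic else float("inf")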
If a query has only one term, we use the chi-square [11].
We make a 2-by-2 table for the given word `w':

              word = w    word != w
DB_TOPIC      a           b
DB_HOME       c           d

a + b = #DB_TOPIC and c + d = #DB_HOME. `a' is the
frequency of the word `w' in DB_TOPIC and `c' is the
frequency of the word `w' in DB_HOME. The chi-square
value shows the dependence of the word `w' and the DB. If the
chi-square value of the word `w' is high, then `w' is a special
term of DB_TOPIC or DB_HOME. We classify these words
that have a high chi-square value according to their df. If `w'
has a high df then the word `w' is a topic relevance task
query; otherwise `w' is a homepage finding task query.
For example, `fast' shows a high chi-square value, since
it is used a lot to modify proper names. However, the single word
`fast' is not a proper name. We classify a word that has a
high chi-square and a high df into the topic relevance task.
If the chi-square value of the word `w' is low, then `w' is a
general term.
Fig. 2 shows the results of diff_Dist for queries that have at
least two query terms. The mean values of QUERY_T-TRAIN's
diff_Dist and QUERY_H-TRAIN's diff_Dist are 0.5138 and
1.1 respectively. The higher the value of diff_Dist for a given query,
the more confident we can be that the query contains special
terms. On the other hand, if the diff_Dist score is near
the mean value of QUERY_T-TRAIN, the query contains
general terms, not a special expression. We calculate
the possibility that a given query is in each class with the
mean value and the standard deviation. However, there are
queries that show high diff_Dist in QUERY_T-TRAIN. For
example, `Jenniffer Aniston' and `Chevrolet Trucks' showed
2.04 and 0.76 respectively. Usually proper names showed
high diff_Dist values. If a proper name is frequently used
in DB_HOME, then we can think of it as the name of a
site.
4.3
Mutual Information
There are two or more words that co-occur frequently.
These words may have syntactic or semantic relations to
if length(Q) = 1 then
    calculate the chi-square of Q
    if chi-square > 18 then
        if df of the query > 65 then
            the topic relevance task
        else
            the homepage finding task
    else
        the topic relevance task
else
    calculate the distributions of the query in each database
    calculate diff_Dist(Q)
    if diff_Dist(Q) > threshold then
        the homepage finding task
    else
        unknown

Figure 1: The Algorithm of the Distribution Difference Method
Figure 2: Distribution of Queries (percentage of observations vs. ratio of distribution difference, for QUERY-TOPIC-TRAIN and QUERY-HOMEPAGE-TRAIN).
each other. We say these words have some dependency. For
example, `tornadoes formed' shows similar dependency regardless
of the database. But `Fan Club' has a high dependency
in the DB_HOME set. This means that `tornadoes formed'
is a general usage of words but `Fan Club' is a special usage
in DB_HOME. Therefore, the dependency of `Fan Club' can
be the key clue of guessing the category of a user query. If
the difference of dependency of each term is larger than expected,
this tells whether a given query is the topic relevance
task or the homepage finding task. For two variables A and
B, we can calculate the dependency with mutual information,
I(A; B) [9]. We use the pointwise mutual information
I(x, y) to calculate the dependency of terms in a query [11].
I(A; B) = H(A) + H(B) - H(A, B) = sum over a,b of p(a, b) * log( p(a, b) / (p(a)p(b)) )    (15)
I(x, y) = log( p(x, y) / (p(x)p(y)) )    (16)
We extend pointwise mutual information to three variables.
We use set theory to calculate the value of the intersection
part, as in the two-variable case.
I(A; B; C) = H(A, B, C) - H(A) - H(B) - H(C) + I(A; B) + I(B; C) + I(C; A)
           = sum over a,b,c of p(a, b, c) * log( p(a, b)p(b, c)p(c, a) / (p(a, b, c)p(a)p(b)p(c)) )    (17)
I(x, y, z) = log( p(x, y)p(y, z)p(z, x) / (p(x, y, z)p(x)p(y)p(z)) )    (18)

Figure 3: Mutual Information of Queries (percentage of observations vs. ratio of MI difference, for QUERY-TOPIC-TRAIN and QUERY-HOMEPAGE-TRAIN).
In principle, p(x, y) means the probability that x and y
co-occur within a specific distance [11]; usually x and y are
consecutive words. Since the number of words and documents
is so huge in the IR domain, it is not easy to keep such statistics.
Our measure assumes that x and y co-occur in a
document. We use the df of a given term to calculate the number
of documents that contain the term. Like the distribution
difference measure, we use a ratio equation to
see the difference of MI. If pointwise mutual information is
below zero then we use zero.
diff_MI(Q) = MI_HOME(Q) / MI_TOPIC(Q)    (19)
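A sketch of the two-term case of Eqs. (16) and (19), with probabilities estimated from document frequencies as described above and negative pointwise MI clipped to zero:

    import math

    def pointwise_mi(df_xy, df_x, df_y, n_docs):
        """Eq. (16) with probabilities estimated from document frequencies."""
        if not (df_xy and df_x and df_y):
            return 0.0
        p_xy = df_xy / n_docs
        mi = math.log(p_xy / ((df_x / n_docs) * (df_y / n_docs)))
        return max(mi, 0.0)                 # negative values are treated as zero

    def diff_mi(mi_home, mi_topic):
        """Eq. (19): ratio of the query's MI in DB_HOME to its MI in DB_TOPIC."""
        return mi_home / mi_topic if mi_topic else float("inf")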
Fig. 3 shows the results of diff_MI. The mean values of
QUERY_T-TRAIN's diff_MI and QUERY_H-TRAIN's diff_MI
are 1.9 and 2.7 respectively. For example, the topic relevance
task query `mexican food culture' showed 1.0, but the
homepage finding task query `Newave IFMO' showed 7.5.
QUERY_H-TRAIN gets a slightly higher standard deviation.
It means that the queries of QUERY_H-TRAIN have different
MI in DB_HOME. As the value of diff_MI is higher, we can
have confidence that the query has a special dependency.
We calculate the possibility that a given query is in each
class with the mean value and the standard deviation.
4.4
Usage Rate as an Anchor Text
If query terms appear frequently in titles and anchor texts,
this tells us that the category of a given query is the homepage finding
task. Since titles and anchor texts are usually entity names
or proper nouns, the usage rate shows the probability that
the given terms are special terms.
use_Anchor(w_1, . . . , w_n) = ( C_SITE_ANCHOR(w_1, . . . , w_n) - C_SITE(w_1, . . . , w_n) ) / C_SITE(w_1, . . . , w_n)    (20)
C_SITE(w) means the number of site entry documents that
have w as an index term. C_SITE_ANCHOR(w) means the
number of site entry documents and anchor texts that have
w as an index term.
4.5
POS information
Since the homepage finding task queries are proper names,
they do not usually contain a verb. However, some topic relevance
task queries include a verb to explain what he or she
wants to know. For example, `How are tornadoes formed?'
or briefly `tornadoes formed' contain a verb `formed'. If a
query has a verb except the `be' verb, then we classified it
into the topic relevance task.
4.6
Combination of Measures
The distribution difference method can be applied to more
queries than the MI difference method. The usage rate as anchor
texts and the POS information show small coverage. However,
the four measures cover different queries. Therefore, we
can have more confidence and more coverage by combining
these measures. We use a different combination equation
depending on the number of query terms. If the query has 2 or 3 terms
in it, we also use pointwise mutual information.
S(Q) = alpha * diff_Dist(Q) + beta * diff_MI(Q) + gamma * use_Anchor(Q) + delta * POS_info(Q)    (21)
We choose alpha, beta, gamma, and delta with the training data (QUERY_T-TRAIN
and QUERY_H-TRAIN). If the S(Q) score is not high or low
enough, then we make no decision.
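A sketch of the combination in Eq. (21); the weights and decision thresholds below are placeholders, since the paper tunes them on the training queries:

    def classify_query(diff_dist_q, diff_mi_q, use_anchor_q, pos_info_q,
                       weights=(1.0, 1.0, 1.0, 1.0), low=0.5, high=1.5):
        """Combine the four measures (Eq. (21)) and decide the query class.
        Weights and thresholds are illustrative; the paper learns them from training data."""
        a, b, c, d = weights
        s = a * diff_dist_q + b * diff_mi_q + c * use_anchor_q + d * pos_info_q
        if s >= high:
            return "homepage finding"
        if s <= low:
            return "topic relevance"
        return "unknown"                    # score not decisive enough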
EXPERIMENTS
In this section, we show the efficiency of a user query
classification.
5.1
Query Classification
We used four query sets to experiment with our query classification
method. QUERY_T-TRAIN and QUERY_H-TRAIN
are used for training (TRAIN). TREC-2001 topic relevance
task queries (Topics 501-550) and TREC-2001 homepage finding
task queries (1-145) are used for testing (TEST). We call
the two test sets QUERY_T-TEST and QUERY_H-TEST. We
used WT10g for making a classification model.
We classified queries with our proposed method. If the
score S(Q) is high enough to tell that a given query is
a topic relevance task or a homepage finding task
query, then we assigned that query type to it. For other
cases, we did not classify the query category. Table 2 shows
the classification result of our proposed language model.
Table 2: Query Classification Result
QUERY        TRAIN                   TEST
Measure      Precision   Recall      Precision   Recall
Dist.        77.3%       38.7%       82.1%       28.2%
MI           90.9%       20.0%       78.2%       29.9%
Anchor       73.6%       35.3%       82.4%       35.9%
POS          100%        9.3%        96.4%       13.8%
All          81.1%       57.3%       91.7%       61.5%
Table 3: Average Precision of the Topic Relevance Task
model     OKAPI   TF-IDF   KL DIR   MIXFB KL D
Lemur     0.182   0.170    0.210    0.219
MLemur    0.169   0.159    0.200    0.209

Table 4: MRR of the Homepage Finding Task
model     OKAPI   TF-IDF   KL DIR   MIXFB KL D
Lemur     0.355   0.340    0.181    0.144
MLemur    0.673   0.640    0.447    0.360

By combining each measure, we could apply our method
to more queries and increase precision and recall. Our proposed
method shows the better result in the test set. This
is due to the characteristics of the query set. There are 7
queries that have a verb in QUERY_T-TRAIN and 28 queries
in QUERY_T-TEST. We can assume that the POS information
is good information.
The main reason of misclassification is wrong division of
WT10g. Since our method usually gives the high score to the
proper name, we need correct information to distinguish a
proper name from a site name. We tried to make DB_HOME
automatically. However, some root pages are not site entry
pages. We need a more sophisticated division method.
There is a case that a verb is in the homepage finding
task query. `Protect & Preserve' is the homepage finding
task query but `protect' and `preserve' are verbs. However,
`Protect' and `Preserve' start with a capital letter. We can
correct wrong POS tags.
There are queries in QUERY_T-TEST that look like queries
of QUERY_H-TEST. For example, `Dodge Recalls' is used to
find documents that report on the recall of any dodge automobile
products. But user may want to find the entry page
of `Dodge recall'. This is due to the use of main keywords
instead of a natural language query.
There are 6 queries in QUERY_T-TEST and 6 queries in
QUERY_H-TEST that do not have a result document that
has all query terms in it. We could not use our method to
them. WT10g is not enough to extract probability information
for these two query sets. To make up this sparseness
problem, we need a different indexing terms extraction module
. We have to consider special parsing technique for URL
strings and acronyms in a document. Also we need a query
expansion technique to get a better result.
5.2
The Improvement of IR Performance
We used the Lemur Toolkit [12] to make a general search
engine for the topic relevance task. The Lemur Toolkit is an
information retrieval toolkit designed with language modeling
in mind. The Lemur Toolkit supports several retrieval
algorithms. These algorithms include a dot-product function
using TF-IDF weighting algorithm, the Kullback-Leibler
(KL) divergence algorithm, the OKAPI retrieval algorithm,
the feedback retrieval algorithm and the mixture model of
Dirichlet smoothing, MIXFB KL D [14]. For the homepage
finding task, we add the URLprior probability of a URL
string to the Lemur Toolkit. Besides Link information, we
add the PageRank of a document. We normalized PageRank
values, so the max value is 100 and the min value is 0.
First we extracted top 1,000 results with the Lemur Toolkit.
Table 5: The Retrieval Performance with Classification Method
                         OKAPI            TF-IDF           MIXFB KL D
Measure      DEFAULT     TOPIC   HOME     TOPIC   HOME     TOPIC   HOME
Dist.        TOPIC       0.178   0.469    0.168   0.447    0.216   0.226
Dist.        HOME        0.174   0.666    0.164   0.633    0.212   0.359
MI           TOPIC       0.179   0.465    0.168   0.445    0.218   0.233
MI           HOME        0.169   0.673    0.159   0.640    0.209   0.360
Anchor       TOPIC       0.176   0.513    0.165   0.489    0.215   0.232
Anchor       HOME        0.169   0.666    0.159   0.633    0.209   0.359
POS          TOPIC       0.182   0.355    0.170   0.340    0.219   0.144
POS          HOME        0.173   0.673    0.163   0.640    0.212   0.354
All          TOPIC       0.180   0.552    0.168   0.528    0.217   0.280
All          HOME        0.173   0.666    0.163   0.633    0.212   0.353
Then we combined URL information and Link information
to reorder the results with Eq. 7. We presented the
top 1,000 documents as the answer in the topic relevance
task, and 100 documents in the homepage finding task. We
call this modified toolkit the MLemur Toolkit.
Tables 3 and 4 show the results of the topic relevance task and
the homepage finding task using the Lemur Toolkit and
the MLemur Toolkit. MIXFB KL D showed a good result
in the topic relevance task but a poor result in the
homepage finding task. We can say that a good information
retrieval algorithm for the topic relevance task is not always
good for the homepage finding task. We chose three algorithms,
OKAPI, TF-IDF, and MIXFB KL D, which got the best and
worst scores in each task, for the test
of performance improvement by query type classification.
Table 5 shows the change of performance. `DEFAULT'
means the default category for an unclassified query. Digits
in the TOPIC column and the HOME column are average
precision and MRR respectively. From the result, the
OKAPI algorithm with the homepage finding task as the default
class shows good performance.
5.3
Discussion
To classify a query type, we need the document frequency
of a query term in each database. This lowers the system efficiency
. However, we may create two databases as proposed
in this paper for indexing. We retrieve two result document
sets from each database and classify a query type at the same
time. And then according to the category of a query, merge
two results. From table 1, merging the results of the anchor
text representation and the common content representation
shows good performance. We need more work to unify the
query classification work and the document retrieval.
In this paper, we proposed a user query classification
method for the topic relevance task and the homepage finding
task. The queries of the homepage finding task usually
consist of entity names or proper nouns. However queries of
the service finding task have verbs for the service definition.
For example, "Where can I buy concert tickets?" has `buy'
as the service definition. To find these cue expressions, we
need more sophisticated analysis of anchor texts. Since the
service in the Web is provided as a program, there is a trigger
button. Mostly these trigger buttons are explained by
anchor texts. We have to distinguish an entity name and an
action verb from anchor texts. We have to change measures
for the query classification from a word unit to entity and
action units.
User query classification can be applied to various areas.
MetaSearch is the search algorithm that combines results of
each search engine to get a better result [7]. [10] showed that
CombMNZ (Multiply by NonZeros) is better than another scoring
algorithm, CombSUM (Summed similarity over systems).
But if we consider the homepage finding task, we are in a
different situation.
Table 6 and 7 show the improvement of performance of
MetaSearch algorithms. We had an experiment with random
samplings of 2, 3, 4, and 5 engine results. The score is
the average improvement of 100 tests. CombMNZ was good
for the topic relevance task, but CombSUM was good for
the homepage finding task. This also tells us that we need different
strategies for MetaSearch depending on the class of a query.
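A sketch of the two combination rules compared in Tables 6 and 7, CombSUM and CombMNZ, over normalized per-engine scores:

    def comb_sum(score_lists):
        """CombSUM: sum of a document's normalized scores over all engines."""
        combined = {}
        for scores in score_lists:
            for doc, s in scores.items():
                combined[doc] = combined.get(doc, 0.0) + s
        return combined

    def comb_mnz(score_lists):
        """CombMNZ: CombSUM multiplied by the number of engines giving a nonzero score."""
        summed = comb_sum(score_lists)
        nonzero = {doc: sum(1 for scores in score_lists if scores.get(doc, 0.0) > 0.0)
                   for doc in summed}
        return {doc: summed[doc] * nonzero[doc] for doc in summed}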
Table 6: Performance of MetaSearch in the Topic Relevance Task
engine #    2       3       4       5
CombSUM     -2.4%   4.4%    3.7%    4.8%
CombMNZ     -1.2%   5.7%    5.3%    5.8%

Table 7: Performance of MetaSearch in the Homepage Finding Task
engine #    2       3       4       5
CombSUM     -4.5%   0.7%    -0.9%   0.8%
CombMNZ     -6.0%   -0.4%   -4.5%   -2.4%
CONCLUSIONS
We have various forms of resources in the Web, and consequently
purposes of user queries are diverse. We can classify
user queries as three categories, the topic relevance task,
the homepage finding task, and the service finding task.
Search engines need different strategies to meet the purpose
of a user query. For example, URL information and
Link information are bad for the topic relevance task, but
on the other hand, they are good for the homepage finding
task. We made two representative databases, DB_HOME
and DB_TOPIC, for each task. To make the databases, we divided
the text collection by the URL type of a web document.
If the URL of a document contains a host name only, then
we put it into DB_HOME. Also, we make a virtual document
with the anchor text and put it into DB_HOME. Other
documents are put into DB_TOPIC. If a given query's distributions
in DB_HOME and DB_TOPIC are different, then this
tells a given query is not a general word. Therefore, we
can assume the category of a given query is in the homepage
finding task. Likewise, the difference of dependency,
Mutual Information, and the usage rate as anchor texts tell
whether a given query is in the homepage finding task or
not. We tested the proposed classification method with two
query sets, QUERY_T-TEST and QUERY_H-TEST. The usage
rate as anchor texts and the POS information show small
coverage. On the other hand, distribution difference and
dependency showed good precision and coverage. Also each
classifier applied to different queries. We could get the better
precision and recall by combining each classifier. We got
91.7% precision and 61.5% recall. After we classified the category
of a query, we used different information for a search
engine. For the topic relevance task, Content information
such as TFIDF is used. For the homepage finding task, Link
information and URL information besides content information
are used. We tested our dynamic combining method.
From the result, our classification method showed the best
result with the OKAPI scoring algorithm.
ACKNOWLEDGMENTS
We would like to thank Jamie Callan for providing useful
experiment data and the Lemur toolkit.
REFERENCES
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern
Information Retrieval. ACM PRESS BOOKS, 1999.
[2] P. Bailey, N. Craswell, and D. Hawking. Engineering a
multi-purpose test collection for web retrieval
experiments. Information Processing and
Management, to appear.
[3] S. Brin and L. Page. The anatomy of a large-scale
hypertextual Web search engine. Computer Networks
and ISDN Systems, 30(1-7):107-117, 1998.
[4] A. Broder. A taxonomy of web search. SIGIR Forum,
36(2), 2002.
[5] W. B. Croft. Combining approaches to information
retrieval. In Advances in Information Retrieval:
Recent Research from the Center for Intelligent
Information Retrieval, pages 136. Kluwer Academic
Publishers, 2000.
[6] CSIRO. Web research collections - trec web track.
www.ted.cmis.csiro.au/TRECWeb/, 2001.
[7] E. Fox and J. Shaw. Combination of multiple searches.
In Text REtrieval Conference (TREC-1), pages 243-252, 1993.
[8] D. Hawking and N. Craswell. Overview of the
trec-2001 web track. In Text REtrieval Conference
(TREC-10), pages 61-67, 2001.
[9] E. Jaynes. Information theory and statistical
mechanics. Physics Review, 106(4):620-630, 1957.
[10] J. H. Lee. Analyses of multiple evidence combination.
In Proceedings of the 20th Annual International ACM
SIGIR Conference on Research and Development in
Information Retrieval, pages 267-276, 1997.
[11] C. D. Manning and H. Schutze. Foundations of
Statistical Natural Language Processing. The MIT
Press, 1999.
[12] P. Ogilvie and J. Callan. Experiments using the lemur
toolkit. In Text REtrieval Conference (TREC-10),
http://www-2.cs.cmu.edu/ lemur, pages 103-108, 2001.
[13] L. Page, S. Brin, R. Motwani, and T. Winograd. The
pagerank citation ranking: Bringing order to the web.
Technical report, Stanford Digital Library
Technologies Project, 1998.
[14] J. M. Ponte. Language models for relevance feedback.
In W. B. Croft, editor, Advances in Information
Retrieval: Recent Research from the Center for
Intelligent Information Retrieval, pages 73-95. Kluwer
Academic Publishers, 2000.
[15] S. E. Robertson, S. Walker, S. Jones,
M. Hancock-Beaulieu, and M. Gatford. Okapi at
trec-3. In Text REtrieval Conference (TREC-2), pages
109-126, 1994.
[16] T. Westerveld, W. Kraaij, and D. Hiemstra.
Retrieving web pages using content, links, urls and
anchors. In Text REtrieval Conference (TREC-10),
pages 663-672, 2001.
[17] K. Yang. Combining text and link-based retrieval
methods for web ir. In Text REtrieval Conference
(TREC-10), pages 609-618, 2001.
| URL Information;web document;URL;improvement;frequency;task;information;model;rate;IR;Combination of Multiple Evidences;Link Information;query;Query Classification |
162 | Querying Bi-level Information | In our research on superimposed information management, we have developed applications where information elements in the superimposed layer serve to annotate, comment, restructure, and combine selections from one or more existing documents in the base layer. Base documents tend to be unstructured or semi-structured (HTML pages, Excel spreadsheets, and so on) with marks delimiting selections. Selections in the base layer can be programmatically accessed via marks to retrieve content and context. The applications we have built to date allow creation of new marks and new superimposed elements (that use marks), but they have been browse-oriented and tend to expose the line between superimposed and base layers. Here, we present a new access capability, called bi-level queries, that allows an application or user to query over both layers as a whole. Bi-level queries provide an alternative style of data integration where only relevant portions of a base document are mediated (not the whole document) and the superimposed layer can add information not present in the base layer. We discuss our framework for superimposed information management, an initial implementation of a bi-level query system with an XML Query interface, and suggest mechanisms to improve scalability and performance. | INTRODUCTION
You are conducting background research for a paper you are
writing. You have found relevant information in a variety of
sources: HTML pages on the web, PDF documents on the web
and on your SIGMOD anthology of CDs, Excel spreadsheets and
Word documents from your past work in a related area, and so on.
You identify relevant portions of the documents and add
annotations with clarifications, questions, and conclusions. As
you collect information, you frequently reorganize the
information you have collected thus far (and your added
annotations) to reflect your perspective. You intentionally keep
your information structure loose so you can easily move things
around. When you have collected sufficient information, you
import it, along with your comments, in to a word-processor
document. As you write your paper in your word-processor, you
revisit your sources to see information in its context. Also, as you
write your paper you reorganize its contents, including the imported
information, to suit the flow. Occasionally, you search the
imported annotations, selections, and the context of the selections.
You mix some of the imported information with other information
in the paper and transform the mixture to suit presentation needs.
Most researchers will be familiar with manual approaches to the
scenario we have just described. Providing computer support for
this scenario requires a toolset with the following capabilities:
1. Select portions of documents of many kinds (PDF, HTML,
etc.) in many locations (web, CD, local file system, etc.), and
record the selections.
2. Create and associate annotations (of varying structure) with
document selections.
3. Group and link document selections and annotations,
reorganize them as needed, and possibly even maintain
multiple organizations.
4. See a document selection in its context by opening the
document and navigating to the selected region, or access the
context of a selection without launching its original document.
5. Place document selections and annotations in traditional documents
(such as the word-processor document that contains
your paper).
6. Search and transform a mixture of document selections,
annotations, and other information.
Systems that support some subset of these capabilities exist, but
no one system supports the complete set. It is hard to use a
collection of systems to get the full set of features because the
systems do not interoperate well. Some hypertext systems can
create multiple organizations of the same information, but they
tend to lack in the types of source, granularity of information, or
the location of information consulted. For example, Dexter [6]
requires all information consulted to be stored in its proprietary
database. Compound document systems can address sub-documents
, but they tend to have many display constraints. For
example, OLE 2 [9] relies on original applications to render
information. Neither type of system supports querying a mixture
of document selections and annotations.
Superimposed information management is an alternative solution
for organizing heterogeneous in situ information, at document and
sub-document granularity. Superimposed information (such as
annotations) refers to data placed over existing information
sources (base information) to help organize, access, connect and
reuse information elements in those sources [8]. In our previous
work [12], we have described the Superimposed Pluggable
Architecture for Contexts and Excerpts (SPARCE), a middleware
for superimposed information management, and presented some
superimposed applications built using SPARCE. Together they
support Capabilities 1 through 4. In this paper, we show how
SPARCE can be used to support Capability 6. Details of support
for Capability 5 are outside the scope of this paper.
Before we proceed with the details of how we support Capability
6, we introduce a superimposed application called RIDPad [12].
Figure 1 shows a RIDPad document that contains information
selections and annotations related to the topic of information
integration. The document shown contains eight items: CLIO,
Definition, SchemaSQL, Related Systems, Goal, Model, Query
Optimizer, and Press. These items are associated with six distinct
base documents of three kinds--PDF, Excel, and HTML. An item
has a name, a descriptive text, and a reference (called a mark) to a
selection in a base document. For example, the item labeled
`Goal' contains a mark into a PDF document. The boxes labeled
Schematic Heterogeneity and Garlic are groups. A group is a
named collection of items and other groups. A RIDPad document
is a collection of items and groups.
RIDPad affords many operations for items and groups. A user can
create new items and groups, and move items between groups.
The user can also rename, resize, and change visual
characteristics such as color and font for items and groups. With
the mark associated with an item, the user can navigate to the base
layer if necessary, or examine the mark's properties and browse
context information (such as containing paragraph) from within
RIDPad via a reusable Context Browser we have built.
The operations RIDPad affords are at the level of items and
groups. However, we have seen the need to query and manipulate
a RIDPad document and its base documents as a whole. For
example, possible queries over the RIDPad document in Figure 1
include:
Q1: List base documents used in this RIDPad document.
Q2: Show abstracts of papers related to Garlic.
Q3: Create an HTML table of contents from the groups and items.
Query Q1 examines the paths to base documents of marks associated
with items in the RIDPad document. Q2 examines the
context of marks of items in the group labeled `Schematic
Heterogeneity.' Q3 transforms the contents of the RIDPad document
to another form (table of contents). In general, queries such
as these operate on both superimposed information and base
information. Consequently, we call them bi-level queries.
Figure 1: A RIDPad document.
There are many possible choices on how to present the contents of
superimposed documents (such as the RIDPad document in
Figure 1) and base documents for querying. We could make the
division between the superimposed and base documents obvious
and let the user explicitly follow marks from superimposed
information to base information. Instead, our approach is to
integrate a superimposed document's contents and related base
information to present a uniform representation of the integrated
information for querying.
The rest of this paper is organized as follows: Section 2 provides
an overview of SPARCE. Section 3 provides an overview of bi-level
query systems and describes a naive implementation of a bi-level
query system along with some example bi-level queries.
Section 4 discusses some applications and implementation
alternatives for bi-level query systems. Section 5 briefly reviews
related work. Section 6 summarizes the paper.
We use the RIDPad document in Figure 1 for all examples in this
paper.
SPARCE OVERVIEW
The Superimposed Pluggable Architecture for Contexts and
Excerpts (SPARCE) facilitates management of marks and context
information in the setting of superimposed information
management [12]. A mark is an abstraction of a selection in a
base document. Several mark implementations exist, typically one
per base type (PDF, HTML, Excel, and so on). A mark
implementation chooses an addressing scheme appropriate for the
base type it supports. For example, an MS Word mark
implementation uses the starting and ending character index of a
text selection, whereas an MS Excel mark uses the row and
column names of the first and last cell in the selection. All mark
implementations provide a common interface to address base
information, regardless of base type or access protocol they
support. A superimposed application can work uniformly with
any base type due to this common interface.
Context is information concerning a base-layer element. Presentation
information such as font name, containment information
such as enclosing paragraph and section, and placement
information such as line number are examples of context
information. An Excerpt is the content of a marked base-layer
element. (We treat an excerpt also as a context element.) Figure 2
shows the PDF mark corresponding to the item `Goal' (of the
RIDPad document in Figure 1) activated. The highlighted portion
is the marked region. Table 1 shows some of the context elements
for this mark.
Figure 2: A PDF mark activated.
Figure 3 shows the SPARCE architecture reference model. The
Mark Management module is responsible for operations on marks
(such as creating and storing marks). The Context Management
module retrieves context information. The Superimposed
Information Management module provides storage service to
superimposed applications. The Clipboard is used for inter-process
communication.
Table 1: Some context elements of a PDF mark.
Element name          Value
Excerpt               provide applications and users with ... Garlic system
Font name             Times New Roman
Enclosing paragraph   Loosely speaking, the goal ...
Section Heading       Garlic Overview
SPARCE uses mediators [13] called context agents to interact
with different base types. A context agent is responsible for resolving
a mark and returning the set of context elements
appropriate to that mark. A context agent is different from mediators
used in other systems because it only mediates portions of
base document a mark refers to. For example, if a mark refers to
the first three lines of a PDF document, the mark's context agent
mediates those three lines and other regions immediately around
the lines. A user could retrieve broader context information for
this mark, but the agent will not do so by default.
Figure 3: SPARCE architecture reference model.
A superimposed application allows creation of information elements
(such as annotations) associated with marks. It can use an
information model of its choice (SPARCE does not impose a
model) and the model may vary from one application to another.
For example, RIDPad uses a group-item model (simple nesting),
whereas the Schematics Browser, another application we have
built, uses an ER model [2, 12]. The superimposed model may be
different from any of the base models. A detailed description of
SPARCE is available in our previous work [12].
BI-LEVEL QUERY SYSTEM
A bi-level query system allows a superimposed application and its
user to query the superimposed information and base information
as a whole. User queries are in a language appropriate to the
superimposed model. For example, XQuery may be the query
language if the superimposed model is XML (or a model that can
be mapped to XML), whereas SQL may be the query language if
superimposed information is in the relational model.
Figure 4: Overview of a bi-level query system.
Figure 4 provides an overview of a bi-level query system. An oval
in the figure represents an information source. A rectangle
denotes a process that manipulates information. Arrows indicate
data flow. The query processor accepts three kinds of
information--superimposed, mark, and context. Model transformers
transform information from the three sources into
model(s) appropriate for querying. One of these transformers, the
context transformer, is responsible for transforming context information. We restrict bi-level query systems to use only one
superimposed model at a time, for practical reasons. Choosing a
query language and the model for the result can be hard if
superimposed models are mixed.
(Figure 4 depicts base information sources 1 through n reached via context agents, plus mark information and superimposed information, all feeding the model transformers and the query processor; it also shows the superimposed application issuing queries and receiving results, together with the SPARCE modules of Figure 3: Superimposed Information Management, Mark Management, Context Management, the Clipboard, and the base application.)
3.1 Implementation
We have implemented a naïve bi-level query system for the XML
superimposed model. We have developed a transformer to convert
RIDPad information to XML. We have developed a context
transformer to convert context information to XML. We are able
to use mark information without any transformation since
SPARCE already represents that information in XML. User queries
can be in XPath, XSLT, and XQuery. We use Microsoft's
XML SDK 4.0 [10] and XQuery demo implementation [11] to
process queries.
We use three XML elements to represent RIDPad information in XML: <RIDPadDocument> for the document, <Group> for a group, and <Item> for an item. For each RIDPad item, the system creates four children nodes in the corresponding <Item> element. These children nodes correspond to the mark, container (base document where the mark is made), application, and context. We currently transform the entire context of the mark. The XML data is regenerated if the RIDPad document changes.
Figure 5: Partial XML data from a RIDPad document.
Figure 5 shows partial XML data generated from the RIDPad document in Figure 1. It contains two <Group> elements (corresponding to the two groups in Figure 1). The `Garlic' element contains four <Item> elements (one for each item in that group in Figure 1). There is also an <Item> element for the group-less item CLIO. The <Item> element for `Goal' is partially expanded to reveal the <Mark>, <Container>, <Application>, and <Context> elements it contains. Contents of these elements are not shown.
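For illustration, a minimal Python sketch of the kind of XML produced for a RIDPad item and of how it can be queried with the standard library. The element names follow the description above; the attribute names, the file path, and the sample values are hypothetical.

import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of Figure 5; values are illustrative only.
sample = """
<RIDPadDocument>
  <Group name="Garlic">
    <Item name="Goal">
      <Mark/>
      <Container><Location>C:/papers/garlic.pdf</Location></Container>
      <Application name="Acrobat"/>
      <Context>
        <Element name="Abstract">...</Element>
      </Context>
    </Item>
  </Group>
  <Item name="CLIO"/>
</RIDPadDocument>
"""

root = ET.fromstring(sample)

# Q1-style lookup: paths to the base documents used by items.
paths = [loc.text for loc in root.iter("Location")]

# Q2-style lookup: context elements named 'Abstract' for items in the 'Garlic' group.
abstracts = [el.text for el in root.findall(
    "./Group[@name='Garlic']/Item/Context/Element[@name='Abstract']")]

print(paths, abstracts)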
3.2 Example Bi-level Queries
We now provide bi-level query expressions for the queries Q1 to
Q3 listed in Section 1.
Q1: List base documents used in this RIDPad document.
This query must retrieve the path to the base document of the
mark associated with each item in a RIDPad document. The
following XQuery expression does just that. The Location
element in the Container element contains the path to the
document corresponding to the mark associated with an item.
<Paths> {FOR $l IN
document("source")//Item/Container/Location
RETURN <Path>{$l/text()}</Path>
} </Paths>
Q2: Show abstracts of papers related to Garlic.
This query must examine the context of items in the group labeled
`Garlic.' The following XPath expression suffices. This
expression returns the text of a context element whose name
attribute is `Abstract', but only for items in the required group.
//Group[@name='Garlic']/Item/Context//Element[@name='Abstract']/text()
Q3: Create an HTML table of contents from the groups and items.
We use an XSLT style-sheet to generate a table of contents (TOC)
from a RIDPad document. Figure 6 shows the query in the left
panel and its results in the right panel. The right panel embeds an
instance of MS Internet Explorer. The result contains one list item
(HTML LI tag) for each group in the RIDPad document. There is
also one list sub-item (also an HTML LI tag) for each item in a
group. The group-less item CLIO is in the list titled `Other Items.'
A user can save the HTML results, and open it in any browser
outside our system.
Figure 6: RIDPad document transformed to an HTML TOC.
The HTML TOC in Figure 6 shows that each item has a hyperlink
(HTML A tag) attached to it. A hyperlink is constructed using a
custom URL naming scheme and handled using a custom handler.
Custom URLs are one means of implementing Capability 5 identified
in Section 1.
DISCUSSION
The strength of the current implementation is that it retrieves
context information for only those parts of base documents that
the superimposed document refers to (via marks). Interestingly,
the same is also its weakness: it retrieves context information for
all parts of the base documents the superimposed document refers
to, regardless of whether executing a query requires those elements
. For example, only Query Q2 looks at context information
(Q1 looks only at container information, Q3 looks at superimposed
information and mark information). However, the XML
data generated includes context information for all queries. Generating
data in this manner is both inefficient and unnecessary: information
may be replicated (different items may use the same
mark), and context information can be rather large (the size of the
complete context of a mark could exceed the size of its document),
depending on what context elements a context agent
provides. It is possible to get the same results by separating
RIDPad data from the rest and joining the various information
sources. Doing so preserves the layers, and potentially reduces the
size of data generated. Also, it is possible to execute a query incrementally
and only generate or transform data that qualifies in
each stage of execution.
Figure 7 gives an idea of the proposed change to the schema of
the XML data generated. Comparing with the Goal Item element
of Figure 5, we see that mark, container, application, and context
information are no longer nested inside the Item element. Instead,
an <Item> element has a new attribute called markID. In the
revised schema, the RIDPad data, mark, container, application,
and context information exist independently in separate
documents, with references linking them. With the revised
schema, no context information would be retrieved for Query Q1.
Context information would be retrieved only for items in the
`Schematic Heterogeneity' group when Q2 is executed.
Figure 7: XML data in the revised schema.
Preserving the layers of data has some disadvantages. A major
disadvantage is that a user will need to use joins to connect data
across layers. Such queries tend to be error-prone, and writing
them can take too much time and effort. A solution would be to
allow a user to write bi-level queries as they currently do (against
a schema corresponding to the data in Figure 5), and have the
system rewrite the query to match the underlying XML schema
(as in Figure 7). That is, user queries would actually be expressed
against a view of the actual data. We are currently pursuing this
approach to bi-level querying.
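To illustrate the kind of cross-layer join such a view would hide, here is a small Python sketch. It assumes, hypothetically, that the RIDPad items and the context information live in two separate XML documents linked by a markID attribute, as in the revised schema of Figure 7; the element and attribute names are illustrative.

import xml.etree.ElementTree as ET

# Hypothetical layered documents linked by markID (cf. Figure 7).
items_xml = """<RIDPadDocument>
  <Group name="Garlic"><Item name="Goal" markID="m1"/></Group>
</RIDPadDocument>"""

contexts_xml = """<Contexts>
  <Context markID="m1"><Element name="Abstract">...</Element></Context>
</Contexts>"""

items = ET.fromstring(items_xml)
contexts = ET.fromstring(contexts_xml)

# Index the context layer by markID, then join it with the superimposed layer.
by_mark = {c.get("markID"): c for c in contexts.findall("Context")}

for item in items.findall(".//Item"):
    ctx = by_mark.get(item.get("markID"))
    if ctx is not None:
        abstract = ctx.find("Element[@name='Abstract']")
        print(item.get("name"), "->", abstract.text if abstract is not None else None)

A query rewriter for the view-based approach would generate joins of this shape automatically, so the user could keep writing queries against the nested schema of Figure 5.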
Our current approach of grabbing context information for all
marks could be helpful in some cases. For example, if a query
workload ends up retrieving context of all (or most) marks, the
current approach is similar to materializing views, and could lead
to faster overall query execution.
The current implementation does not exploit relationships between
superimposed information elements. For example, Figure 8
shows the RIDPad document in Figure 1 enhanced with two relationships
`Uses' and `Addresses' from the item CLIO. A user may
exploit these relationships to pose richer queries and possibly
recall more information. For example, with the RIDPad document
in Figure 8, a user could now pose the following queries: What
system does CLIO use? How is CLIO related to SchemaSQL?
Our initially anticipated use for bi-level queries was to query superimposed
and base information as a whole, but we have noticed
that superimposed application developers and users could use the
capability to construct and format (on demand) superimposed
information elements themselves. For example, a RIDPad item's
name may be a section heading. Such a representation of an item
could be expressed as the result of a query or a transformation.
Figure 8: A RIDPad document with relationships.
Bi-level queries could also be used for repurposing information.
For example, Query Q3 could be extended to include the contents
of items (instead of just names) and transform the entire RIDPad
document to HTML (like in Figure 6). The HTML version can
then be published on the web.
We have demonstrated bi-level queries using XML query
languages, but superimposed applications might benefit from
other query languages. The choice of the query language depends
largely on the superimposed information model (which in turn
depends on the task at hand). More than one query language may
be appropriate for some superimposed information models, in
some superimposed applications. For example, both CXPath [3]
and XQuery may be appropriate for some applications that use the
XML superimposed model.
The base applications we have worked with so far do not
themselves have query capabilities. If access to context or a selection
over context elements can be posed as a query in a base
application, we might benefit from applying distributed query-processing
techniques. Finally, the scope of a bi-level query is
currently the superimposed layer and the base information accessible
via the marks used. Some applications might benefit from
including marks generated automatically (for example, using IR
techniques) in the scope of a query.
RELATED WORK
SPARCE differs from mediated systems such as Garlic [4] and
MIX [1]. Sources are registered with SPARCE simply by the act
of mark creation in those sources. Unlike in Garlic there is no
need to register a source and define its schema. Unlike MIX,
SPARCE does not require a DTD for a source.
METAXPath [5] allows a user to attach metadata to XML elements
. It enhances XPath with an `up-shift' operator to navigate
from data to metadata (and metadata to meta-metadata, and so
on). A user can start at any level, but only cross between levels in
an upwards direction. In our system, it is possible to move both
upwards and downwards between levels. METAXPath is
designed to attach only metadata to data. A superimposed
information element can be used to represent metadata about a
base-layer element, but it has many other uses.
CXPath [3] is an XPath-like query language to query concepts,
not elements. The names used in query expressions are concept
names, not element names. In the CXPath model there is no
document root--all concepts are accessible from anywhere. For
example, the CXPath expressions `/Item' and `Item' are equivalent. They both return all Item elements when applied to the XML
data in Figure 5. The `/' used for navigation in XPath follows a
relationship (possibly named) in CXPath. For example, the expression
"/Item/{Uses}Group" returns all groups that are related
to an item by the `Uses' relationship when applied to an XML
representation of the RIDPad in Figure 8. CXPath uses predefined
mappings to translate CXPath expressions to XPath expressions.
There is one mapping for each concept name and for each direction
of every relationship of every XML source. In our system,
we intend to support multiple sources without predefined mappings
, but we would like our query system to operate at a
conceptual level like CXPath does.
As discussed in Section 4, preserving the layers of data, yet allowing
a user to express queries as if all data is in one layer
means queries are expressed against views. Information Manifold
[7] provides useful insight into how heterogeneous sources may be
queried via views. That system associates a capability record with
each source to describe its inputs, outputs, and selection capabilities
. We currently do not have such a notion in our system, but we
expect to consider source descriptions in the context of distributed
query processing mentioned in Section 4.
SUMMARY
Our existing framework for superimposed applications supports
examination and manipulation of individual superimposed and
base information elements. More global ways to search and manipulate
information become necessary as the size and number of
documents gets larger. A bi-level query system is a first step in
that direction. We have an initial implementation of a query
system, but still have a large space of design options to explore.
ACKNOWLEDGMENTS
This work was supported in part by US NSF Grant IIS-0086002.
We thank all reviewers.
REFERENCES
[1] Baru, C., Gupta, A., Ludscher, B., Marciano, R.,
Papakonstantinou, Y., Velikhov, P., and Chu, V. XML-Based
Information Mediation with MIX. In Proceedings of
the SIGMOD conference on Management of Data
(Philadelphia, June, 1999). ACM Press, New York, NY,
1999, 597-599.
[2] Bowers, S., Delcambre, L. and Maier, D. Superimposed
Schematics: Introducing E-R Structure for In-Situ
Information Selections. In Proceedings of ER 2002
(Tampere, Finland, October 7-11, 2002). Springer LNCS
2503, 2002, 90-104.
[3] Camillo, S.D., Heuser, C.A., and Mello, R. Querying
Heterogeneous XML Sources through a Conceptual Schema.
In Proceedings of ER 2003 (Chicago, October 13-16, 2003).
Springer LNCS 2813, 2003, 186-199.
[4] Carey, M.J., Haas, L.M., Schwarz, P.M., Arya, M., Cody,
W.F., Fagin, R., Flickner, M., Luniewski, A.W., Niblack,
W., Petkovic, D., Thomas, J., Williams, J.H., and Wimmers,
E.L. Towards heterogeneous multimedia information
systems: The Garlic approach. IBM Technical Report RJ
9911, 1994.
[5] Dyreson, C.E., Bohlen, M.H., and Jensen, C.S. METAXPath.
In Proceedings of the International Conference on Dublin
Core and Metadata Applications (Tokyo, Japan, October
2001). 2001, 17-23.
[6] Halasz, F.G., and Schwartz, F. The Dexter Hypertext
Reference Model. Communications of the ACM, 37, 2 (Feb. 1994), 30-39.
[7] Levy, A.Y., Rajaraman, A., and Ordille, J.J. Querying
heterogeneous information sources using source descriptions.
In Proceedings of VLDB (Bombay, India 1996). 251-262.
[8] Maier, D., and Delcambre, L. Superimposed Information for
the Internet. In Informal Proceedings of WebDB '99
(Philadelphia, June 3-4, 1999). 1-9.
[9] Microsoft. COM: The Component Object Model
Specification, Microsoft Corporation. 1995.
[10] Microsoft. MS XML 4.0 Software Development Kit.
Microsoft Corporation. Available online at
http://msdn.microsoft.com/
[11] Microsoft. XQuery Demo. Microsoft Corporation. Available
online at http://xqueryservices.com/
[12] Murthy, S., Maier, D., Delcambre, L., and Bowers, S.
Putting Integrated Information in Context: Superimposing
Conceptual Models with SPARCE. In Proceedings of the
First Asia-Pacific Conference of Conceptual Modeling
(Dunedin, New Zealand, Jan. 22, 2004). 71-80.
[13] Wiederhold, G. Mediators in the architecture of future
information systems. IEEE Computer, 25, 3 (March 1992).
38-49.
| Bi-level queries;implementation;system;Superimposed information management;SPARCE;superimposed;document;management;RIDPAD;query;information;Information integration;METAXPath;hyperlink
163 | Ranking Flows from Sampled Traffic | Most of the theoretical work on sampling has addressed the inversion of general traffic properties such as flow size distribution , average flow size, or total number of flows. In this paper, we make a step towards understanding the impact of packet sampling on individual flow properties. We study how to detect and rank the largest flows on a link. To this end, we develop an analytical model that we validate on real traces from two networks. First we study a blind ranking method where only the number of sampled packets from each flow is known. Then, we propose a new method, protocol-aware ranking, where we make use of the packet sequence number (when available in transport header) to infer the number of non-sampled packets from a flow, and hence to improve the ranking. Surprisingly, our analytical and experimental results indicate that a high sampling rate (10% and even more depending on the number of top flows to be ranked) is required for a correct blind ranking of the largest flows. The sampling rate can be reduced by an order of magnitude if one just aims at detecting these flows or by using the protocol-aware method. | INTRODUCTION
The list of the top users or applications is one of the most
useful statistics to be extracted from network traffic.
Network operators use the knowledge of the most popular
destinations to identify emerging markets and applications
or to locate where to setup new Points of Presence. Content
delivery networks use the popularity of sites to define
caching and replication strategies. In traffic engineering, the
identification of heavy hitters in the network can be used to
treat and route them differently across the network [20, 17,
10]. Keeping track of the network prefixes that generate
most traffic is also of great importance for anomaly detection
. A variation in the pattern of the most common applications
may be used as a warning sign and trigger careful
inspection of the packet streams.
However, the ability to identify the top users in a packet
stream is limited by the network monitoring technology.
Capturing and processing all packets on high speed links still
remains a challenge for today's network equipment [16, 9].
In this context, a common solution is to sample the packet
stream to reduce the load on the monitoring system and to
simplify the task of sorting the list of items. The underlying
assumption in this approach is that the sampling process
does not alter the properties of the data distribution.
Sampled traffic data is then used to infer properties of the
original data (this operation is called inversion). The inversion
of sampled traffic is, however, an error-prone procedure
that often requires a deep study of the data distribution to
evaluate how the sampling rate impacts the accuracy of the
metric of interest. Although the inversion may be simple
for aggregate link statistics (e.g., to estimate the number
of packets transmitted on a link, it is usually sufficient to
multiply the number of sampled packets by the inverse of
the sampling rate), it is much harder for the properties of
individual connections or "flows" [9, 11, 8].
For these reasons, in this paper, we address this simple,
and so far unanswered, question: which sampling rate is
needed to correctly detect and rank the flows that carry the
most packets?
We define the problem as follows. Consider a traffic monitor
that samples packets independently of each other with
probability p (random sampling) and classifies them into
sampled flows. At the end of the measurement period, the
monitor processes the list of sampled flows, ranks them
based on their size in packets, and returns an ordered list of
the t largest flows.
We are interested in knowing (i) whether the ordered list
contains all the actual largest flows in the original packet
stream (detection), and (ii) if the items in the list appear in
the correct order (ranking).
We build an analytical model and define a performance
metric that evaluates the accuracy of identification and ranking
of the largest flows. We consider a flow to consist of a
single TCP connection. However, our results are general
and can be applied to alternative definitions of flow, as well.
We evaluate two approaches to sort the list of flows:
(i) Blind, where the sampled flows are ranked just based
on their sampled size. This method can be applied to any
definition of flow.
(ii) Protocol-aware, where we make use of additional information
in the packet header (e.g., the sequence number
in TCP packets) to infer the number of non-sampled packets
between sampled ones. This method can only be applied to
flow definitions that preserve the protocol level details.
The contributions of this work are the following: (1) We
perform an analytical study of the problem of ranking two
sampled flows and compute the probability that they are
misranked. We propose a Gaussian approximation to make
the problem numerically tractable. (2) We introduce the
protocol-aware ranking method that uses protocol level information
to complement the flow statistics and render the
detection and ranking of the largest flows more accurate. (3)
Based on the model for the ranking of two flows, we propose
a general model to study the detection and ranking problem,
given a generic flow size distribution. We define a performance
metric and evaluate the impact of several metric's
parameter on the accuracy of the ranking. (4) We validate
our findings on measurement data using publicly-available
packet-level traces. Our results indicate that a surprisingly
high sampling rate is required to obtain a good accuracy
with the blind approach (10% and even more depending on
the number of flows of interest). As for the protocol-aware
approach, it allows to reduce the required sampling rate by
an order of magnitude compared to the blind approach.
The paper is structured as follows. Next, we discuss the
related literature. In Section 3 and 4, we present our model.
Section 5 analyzes the model numerically and Section 6 validates
it on real packet-level traces. Section 7 concludes the
paper and provides perspectives for our future research.
RELATED WORK
The inversion of sampled traffic has been extensively studied
in the literature. The main focus has been on the inversion
of aggregate flow properties such as flow size distribution
[9, 11], average flow size or total number of flows [8] on
a given network link. Duffield et al. [8] study the problem of
flow splitting and propose estimators for the total number
of flows and for the average flow size in the original traffic
stream. [9, 11] study the inversion of the flow size distribution
with two different methods. They both show that the
major difficulty comes from the number of flows that are not
sampled at all and that need to be estimated with an auxiliary
method. As an auxiliary method, [8, 9] propose the use
of the SYN flag in the TCP header to mark the beginning of
a flow. [9] shows that periodic and random sampling provide
roughly the same result on high speed links, and so random
sampling can be used for mathematical analysis due to its
appealing features. [4] finds the sampling rate that assures
a bounded error on the estimation of the size of flows contributing
to more than some predefined percentage of the
traffic volume. [14] studies whether the number of sampled
packets is a good estimator for the detection of large flows
without considering its impact on the flow ranking.
Given the potential applications of finding the list of top
users, it does not come as a surprise that there has been a
significant effort in the research community to find ways to
track frequent items in a data stream [5, 7, 3, 10]. However,
this problem has usually been addressed from a memory requirement
standpoint. All the works in the literature assume
that if the algorithm and the memory size is well chosen, the
largest flows can be detected and ranked with a high precision
. However, in the presence of packet sampling, even if
the methods rank correctly the set of sampled flows, there
is no guarantee that the sampled rank corresponds to the
original rank. The problem we address in this paper complements
these works as it focuses on the impact of sampling
on the flow ranking.
BASIC MODEL: RANKING TWO FLOWS
In this section, we study the probability to misrank two flows of original sizes S_1 and S_2 in packets. This probability is the basis for the general model for detecting and ranking the largest flows that we will present later. Indeed, the detection and ranking of the largest flows can be transformed into a problem of ranking over a set of flow pairs.
Without loss of generality, we assume S_1 < S_2. We consider a random sampling of rate p. Let s_1 and s_2 denote the sizes in packets of both flows after sampling. The two sampled flows are misranked if (i) s_1 is larger than s_2, or (ii) both flows are not sampled, i.e., their sampled sizes equal zero. By combining (i) and (ii), one can see that the necessary condition for a good ranking is to sample at least one packet from the larger flow (i.e., the smaller of the two flows can disappear after sampling). The probability to misrank the two flows can then be written as P_m(S_1, S_2) = P{s_1 ≥ s_2}. For the case S_1 = S_2, we consider the two flows as misranked if s_1 ≠ s_2, or if both flows are not sampled at all, i.e., s_1 = s_2 = 0.
We compute and study the misranking probability of two flows of given sizes in the rest of this section. First, we consider the blind ranking method where only the number of sampled packets from a flow is known. For this method, we express the misranking probability as a double sum of binomials, then we present a Gaussian approximation to make the problem tractable numerically. Second, we consider the protocol-aware ranking method, for which we calculate a numerically tractable closed-form expression of the misranking probability. Note that the misranking probability is a symmetric function, i.e., P_m(S_1, S_2) = P_m(S_2, S_1).
3.1 Blind ranking
With this method, s_1 and s_2 represent the number of sampled packets from flows S_1 and S_2. Under our assumptions, these two variables are distributed according to a binomial distribution of probability p. Hence, we can write for S_1 < S_2,
P_m(S_1, S_2) = P{s_1 ≥ s_2} = Σ_{i=0}^{S_1} b_p(i, S_1) Σ_{j=0}^{i} b_p(j, S_2).   (1)
b_p(i, S) is the probability density function of a binomial distribution of probability p, i.e., the probability of obtaining i successes out of S trials. We have b_p(i, S) = C(S, i) p^i (1 - p)^{S-i} for i = 0, 1, ..., S, and b_p(i, S) = 0 for i < 0 and i > S. The probability to misrank two flows of equal sizes is given by
P{s_1 ≠ s_2 or s_1 = s_2 = 0} = 1 - P{s_1 = s_2 ≠ 0} = 1 - Σ_{i=1}^{S_1} b_p^2(i, S_1).
Unfortunately, the above expression for the misranking probability is numerically untractable since it involves two sums of binomials. For large flows of order S packets, the number of operations required to compute such a probability is on the order of O(S^3), assuming that the complexity of the binomial computation is on the order of O(S). The problem becomes much more complex if one has to sum over all possible flow sizes (i.e., O(S^5)). For this reason, we propose next a Gaussian approximation to the problem of blind ranking that is accurate and easy to compute. We use this approximation to study the ranking performance as a function of the sampling rate and the flow sizes.
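As a concrete illustration of (1), here is a small Python sketch that evaluates the blind misranking probability exactly from binomial terms; it is practical only for moderate flow sizes, and scipy is assumed to be available.

from scipy.stats import binom

def p_misrank_blind_exact(S1, S2, p):
    """Exact blind misranking probability of two flows of S1 < S2 packets,
    each packet sampled independently with probability p (formula (1))."""
    assert S1 < S2
    # P{s1 >= s2} = sum_i P{s1 = i} * P{s2 <= i}
    return sum(binom.pmf(i, S1, p) * binom.cdf(i, S2, p) for i in range(S1 + 1))

# Example: two flows of 1000 and 1100 packets sampled at 1%.
print(p_misrank_blind_exact(1000, 1100, 0.01))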
3.1.1 Gaussian approximation to blind ranking
Consider a flow made of S packets and sampled at rate
p. The sampled size follows a binomial distribution. However
, it is well known that the binomial distribution can be
approximated by a Normal (or Gaussian) distribution when
p is small and when the product pS is on the order of one
(flows for which, on average, at least a few packets are sampled) [21, pages 108-109]. We assume that this is the case
for the largest flows, and we consider the sampled size of
a flow as distributed according to a Normal distribution of
average pS and of variance p(1 - p)S. Using this approximation
, one can express the misranking probability for the
blind ranking problem in the following simple form.
Proposition 1. For any two flows of sizes S_1 and S_2 packets (S_1 ≠ S_2), the Gaussian approximation gives
P_m(S_1, S_2) ≈ (1/2) erfc( |S_2 - S_1| / sqrt(2(1/p - 1)(S_1 + S_2)) ),   (2)
where erfc(x) = (2/√π) ∫_x^∞ e^{-u^2} du is the complementary error function.
Proof: Consider two flows of sizes S_1 and S_2 in packets such that S_1 < S_2. Their sampled versions s_1 and s_2 both follow Normal distributions of averages pS_1 and pS_2, and of variances p(1 - p)S_1 and p(1 - p)S_2. We know that the sum of two Normal variables is a Normal variable. So the difference s_1 - s_2 follows a Normal distribution of average p(S_1 - S_2) and of variance p(1 - p)(S_1 + S_2). We have then this approximation for the misranking probability:
P_m(S_1, S_2) = P{s_1 - s_2 ≥ 0} ≈ P{ V > p(S_2 - S_1) / sqrt(p(1 - p)(S_1 + S_2)) } = (1/2) erfc( (S_2 - S_1) / sqrt(2(1/p - 1)(S_1 + S_2)) ).   (3)
V is a standard Normal random variable. Given the symmetry of the misranking probability, one can take the absolute value of S_2 - S_1 in (3) and get the expression stated in the proposition, which is valid for all S_1 and S_2.
For S_1 = S_2, one can safely approximate the misranking
probability to be equal to 1. This approximation is however
of little importance given the very low probability of having
two flows of equal sizes, especially when they are large.
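A minimal sketch of the Gaussian approximation of Proposition 1, using only the Python standard library:

import math

def p_misrank_blind_gauss(S1, S2, p):
    """Gaussian approximation (2) of the blind misranking probability."""
    if S1 == S2:
        return 1.0  # equal sizes: approximated as always misranked
    return 0.5 * math.erfc(abs(S2 - S1) / math.sqrt(2 * (1 / p - 1) * (S1 + S2)))

# For large flows this is close to the exact value computed above, e.g.:
print(p_misrank_blind_gauss(1000, 1100, 0.01))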
3.2 Protocol-aware ranking
Packets can carry in their transport header an increasing
sequence number. A typical example is the byte sequence
number in the TCP header. Another example could be the
sequence number in the header of the Real Time Protocol
(RTP) [19]. One can use this sequence number, when available
, to infer the number of non-sampled packets (or bytes
in the case of TCP) between sampled ones, and hence to improve
the accuracy of ranking. The size of the sampled flow
in this case is no longer the number of packets collected, but
rather the number of packets that exist between the first and
last sampled packets from the flow. Although this solution
is limited to flows whose packets carry a sequence number, we
believe that the study of this ranking method is important
given the widespread use of the TCP protocol. Our objective
is to understand how the use of protocol-level information
can supplement the simple, and more general, blind method
and if it is worth the additional overhead it introduces (i.e.,
storing two sequence numbers per flow record).
In the following, we calculate the misranking probability
of two flows of given sizes when using the protocol-aware
method. This probability will be used later in the general
ranking problem. The main contribution of this section is a
closed-form expression for the misranking probability that
is numerically tractable, without the need for any approximation
.
Let S be the size of a flow in packets. Let s_b, s_b = 1, 2, ..., S, denote the (packet) sequence number carried by the first sampled packet, and let s_e, s_e = S, S - 1, ..., s_b, denote the sequence number carried by the last sampled packet. Given s_b and s_e, one can estimate the size of the sampled flow in packets as s = s_e - s_b + 1. The error in this estimation comes from the non-sampled packets that are transmitted before s_b and after s_e. We give next the distribution of s, which is needed for the computation of the misranking probability, then we state our main result.
Before presenting the analysis, note that this new flow size estimator only counts the packets that are transmitted with distinct sequence numbers. In the case of TCP, this corresponds to the number of bytes received at the application layer, rather than the number of bytes carried over the network. It is equivalent to assuming that the probability of sampling a retransmitted (or duplicated) packet is negligible. This is a reasonable assumption if the loss rate is low. We will address this aspect in more detail in Section 6.
Consider a flow of size S ≥ 2 in packets. Using the above definition for s, the sampled flow has a size of i packets, i ≥ 2, with probability:
P{s = i} = Σ_{k=1}^{S-i+1} P{s_b = k} P{s_e = k + i - 1}.
We have P{s_b = k} = (1 - p)^{k-1} p, and P{s_e = k + i - 1} = (1 - p)^{S-k-i+1} p. This gives
P{s = i} = Σ_{k=1}^{S-i+1} (1 - p)^{k-1} p (1 - p)^{S-k-i+1} p = p^2 (1 - p)^{S-i} (S - i + 1).   (4)
As for i = 0, we have P{s = 0} = (1 - p)^S for S ≥ 1. And for i = 1, we have P{s = 1} = p(1 - p)^{S-1} S for S ≥ 1. It is easy to prove that the cumulative distribution of s is the following for all values of S:
P{s ≤ i ≠ 0} = p(1 - p)^{S-i} (S - i + 1) + (1 - p)^{S-i+1}.   (5)
We come now to the misranking probability, which we recall is a symmetric function. For S_1 < S_2, we have
P_m(S_1, S_2) = P{s_2 ≤ s_1} = Σ_{i=0}^{S_1} P{s_1 = i} Σ_{j=0}^{i} P{s_2 = j}.   (6)
And for S_1 = S_2, we have
P_m(S_1, S_2) = 1 - Σ_{i=1}^{S_1} P{s_1 = i}^2.   (7)
Our main result is the following.
Proposition 2. For S_1 < S_2, the misranking probability is equal to
P_m(S_1, S_2) = (1 - p)^{S_1} (1 - p)^{S_2} + p(1 - p)^{S_1 - 1} S_1 [ p(1 - p)^{S_2 - 1} S_2 + (1 - p)^{S_2} ] + p^3 ∂²F(x, y)/∂x∂y |_{x=y=1-p} + p^2 ∂F(x, y)/∂x |_{x=y=1-p},
where
F(x, y) = x y^{S_2 - S_1 + 1} + ... + x^{S_1 - 1} y^{S_2 - 1} = x y^{S_2 - S_1 + 1} (1 - (xy)^{S_1 - 1})/(1 - xy).
For S_1 = S_2 = S, the misranking probability is equal to
P_m(S, S) = 1 - p^2 (1 - p)^{2(S-1)} S^2 - p^4 ∂²G(x, y)/∂x∂y |_{x=y=1-p},
where
G(x, y) = xy + x^2 y^2 + ... + x^{S-1} y^{S-1} = (xy - (xy)^S)/(1 - xy).
Proof: One can validate the results by plugging (4) and (5) into (6) and (7).
Note that the main gain of writing the misranking probability in such a condensed form is a complexity that drops from O(S^3) in (6) to O(S) in our final result. This gain comes from the closed-form expression for the cumulative distribution in (5), and from introducing the two functions F(x, y) and G(x, y). These two latter functions transform two series whose complexity is O(S^2) into closed-form expressions whose complexity is O(S).
We solve the derivatives in the above equations using the symbolic toolbox of Matlab, which gives explicit expressions for the misranking probability. These expressions are simple to compute, but span multiple lines, so we omit them for lack of space.
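The closed form above can be cross-checked against a direct evaluation of (6) and (7) from the distribution (4) and its cumulative form (5); a small Python sketch (practical for moderate flow sizes):

def p_size_protocol(i, S, p):
    """P{s = i} for the protocol-aware size estimate of a flow of S packets (4)."""
    if i == 0:
        return (1 - p) ** S
    if i == 1:
        return p * (1 - p) ** (S - 1) * S
    return p * p * (1 - p) ** (S - i) * (S - i + 1)

def cdf_size_protocol(i, S, p):
    """P{s <= i}: closed form (5) for i >= 1; P{s <= 0} = (1-p)^S."""
    if i == 0:
        return (1 - p) ** S
    return p * (1 - p) ** (S - i) * (S - i + 1) + (1 - p) ** (S - i + 1)

def p_misrank_protocol(S1, S2, p):
    """Protocol-aware misranking probability, formulas (6) and (7)."""
    if S1 == S2:
        return 1 - sum(p_size_protocol(i, S1, p) ** 2 for i in range(1, S1 + 1))
    lo, hi = min(S1, S2), max(S1, S2)
    return sum(p_size_protocol(i, lo, p) * cdf_size_protocol(i, hi, p)
               for i in range(lo + 1))

print(p_misrank_protocol(1000, 1100, 0.01))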
3.3 Analysis of the misranking probability
3.3.1 The blind case
We use the Gaussian approximation to study how the misranking probability varies with the sampling rate and with the sizes of both flows, in particular their difference. The study of the impact of the flow sizes is important to understand the relation between the flow size distribution and the ranking of the largest flows.
The misranking probability is a decreasing function of the sampling rate. It moves to zero when p moves to 1 and to 0.5 when p approaches zero (the Gaussian approximation does not account for the case p = 0, where the misranking probability should be equal to 1 based on our definition). Therefore, there exists one sampling rate that leads to some desired misranking probability, and any lower sampling rate results in a larger error.
We study now how the misranking probability varies with the sizes of both flows. Take S_1 = S_2 - k, k a positive integer. From (2) and for fixed k, the misranking probability increases with S_1 and S_2 (the argument of the erfc in (2) decreases, and erfc(x) is a decreasing function of x). This indicates that it is more difficult to rank correctly two flows that differ by k packets as their sizes increase in absolute terms. The result is different if we take the size of one flow equal to α < 1 times the size of the second, i.e., S_1 = αS_2. Here, (S_2 - S_1)/sqrt(S_1 + S_2) is equal to sqrt(S_1) (1 - α)/sqrt(α(1 + α)), which increases with S_1. Hence, the misranking probability given in (2) decreases when S_1 increases. We conclude that, when the two flow sizes maintain the same proportion, it is easier to obtain a correct ranking when they are large in absolute terms.
We can now generalize the result above. One may think that the larger the flows, the better the ranking of their sampled versions. Our last two examples indicate that this is not always the case. The ranking accuracy depends on the relative difference of the flow sizes. In general, to have a better ranking, the difference between the two flow sizes must increase with the flow sizes and the increase must be larger than a certain threshold. This threshold is given by (2): the difference must increase at least as the square root of the flow sizes. This is an interesting finding. In the context of the general ranking problem, it can be interpreted as follows. Suppose that the flow size has a cumulative distribution function y = F(x). As we move to the tail of the distribution (because we focus more and more on large flows, or because the number of available flows for ranking increases), the size of the flows to be ranked increases. The ranking performance improves if the difference between flow sizes increases faster than sqrt(x). This is equivalent to saying that dx/dy should increase with x faster than sqrt(x). All common distributions satisfy this condition, at least at their tails. For example, with the exponential distribution we have dx/dy ∝ e^{λx} (1/λ is the average), while for the Pareto distribution we have dx/dy ∝ x^{β+1} (β is the shape).
3.3.2 The protocol-aware case
The first difference with the blind case is in the estimation error (S - s = s_b - 1 + S - s_e), which can be safely assumed to be independent of the flow size for large flows (only dependent on p). This means that if two large flows keep the same distance between them while their sizes increase, their ranking maintains the same accuracy. Their ranking improves if the difference between their sizes increases as well, and it deteriorates if the difference between their sizes decreases. So in contrast to the blind case, the threshold for the ranking here to improve is that the larger flow should have its size increasing a little faster than the smaller one. In the context of the general ranking problem where flow sizes are distributed according to a cumulative distribution function y = F(x), and when the top flows become larger, the protocol-aware ranking improves if the derivative dx/dy increases with x. This is equivalent to saying that the function F(x) should be concave, which is satisfied by most common distributions at their tail. For blind ranking, concavity was not enough to obtain a better ranking; the derivative dx/dy had to increase faster than sqrt(x). So in conclusion, the condition to have a better ranking when we move to the tail of the flow size distribution is less strict with the protocol-aware method, which is an indication of its good performance.
The second difference with the blind case is in the relation between the ranking accuracy and the sampling rate. Consider two large flows of sizes S_1 and S_2 in packets, and let s_1 and s_2 denote their sampled sizes. The coefficient of variation of the difference s_2 - s_1 is an indication of how well the ranking performs; a small coefficient of variation results in better ranking (for S_1 < S_2 we are interested in P{s_1 ≥ s_2}, and by the Tchebychev inequality this probability can be supposed to behave like VAR[s_1 - s_2]/E[s_1 - s_2]^2, which is the square of the coefficient of variation). It is easy to prove that this coefficient of variation scales as 1/p for protocol-aware ranking and as 1/sqrt(p) for blind ranking. This is again an important finding. It tells that when the sampling rate is very small, blind ranking could (asymptotically) perform better than protocol-aware ranking. Our numerical and experimental results will confirm this finding.
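The comparison between the two estimators can also be checked empirically. Here is a small Monte Carlo sketch (our own illustration, not the paper's experimental setup) that samples two flows at rate p and measures how often each method inverts their order:

import random

def misrank_freq(S1, S2, p, runs=5000, seed=1):
    """Empirical misranking frequency (S1 < S2) of the blind and the
    protocol-aware estimators under random packet sampling at rate p."""
    rng = random.Random(seed)
    blind = aware = 0
    for _ in range(runs):
        k1 = [n for n in range(1, S1 + 1) if rng.random() < p]   # sampled sequence numbers
        k2 = [n for n in range(1, S2 + 1) if rng.random() < p]
        b1, b2 = len(k1), len(k2)                                # blind sizes
        a1 = k1[-1] - k1[0] + 1 if k1 else 0                     # protocol-aware sizes
        a2 = k2[-1] - k2[0] + 1 if k2 else 0
        blind += b1 >= b2
        aware += a1 >= a2
    return blind / runs, aware / runs

# Two flows of 1000 and 1100 packets sampled at 1%.
print(misrank_freq(1000, 1100, 0.01))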
GENERAL MODEL: DETECTING AND RANKING THE LARGEST FLOWS
We generalize the previous model from the ranking of two flows to the detection and ranking of the top t flows, t = 1, 2, ..., N. The misranking probability P_m(S_1, S_2) previously calculated is the basis for this generalization. Let N ≥ t denote the total number of flows available in the measurement period before sampling. We want the sampled list of top t flows to match the list of top t flows in the original traffic. Two criteria are considered to decide whether this match is accurate. First, we require the two lists to be identical. This corresponds to the ranking problem. The second, less constrained, criterion requires the two lists to contain the same flows regardless of their relative order within the list. This corresponds to the detection problem. For both problems, the quality of the result is expressed as a function of the sampling rate p, the flow size distribution, the number of flows to rank t, and the total number of flows N.
4.1 Performance metric
In order to evaluate the accuracy of detection and ranking, we need to define a performance metric that is easy to compute and that focuses on the largest flows. A flow at the top of the list can be misranked with a neighboring large flow or a distant small flow. We want our metric to differentiate between these two cases and to penalize more the latter one; a top-10 flow replaced by the 100-th flow in the sampled top list is worse than the top-10 flow being replaced by the 11-th flow. We also want our metric to be zero when the detection and ranking of the top flows are correct.
We introduce our performance metric using the ranking problem. The performance metric for the detection problem is a straightforward extension. Let's form all flow pairs where the first element of a pair is a flow in the top t and the second element is anywhere in the sorted list of the N original flows. The number of these pairs is equal to N - 1 + N - 2 + ... + N - t = (2N - t - 1)t/2. We then count the pairs in this set that are misranked after sampling and we take the sum as our metric for ranking accuracy. This sum indicates how good the ranking is at the top of the list. It is equal to zero when the ranking is correct. When the ranking is not correct, it takes a value proportional to the original rank of the flows that have taken a slot in the top-t list. For example, if the top flow is replaced by its immediate successor in the list, the metric will return a ranking error of 1. Instead, if the same flow is replaced by a distant flow, say the 100-th, the metric will return an error of 99. Also, note that our metric does not account for any misranking of flows outside the list of top t flows. For any two flows n and m, such that n > m > t, the fact that n takes the position of m does not add anything to our performance metric since our metric requires at least one element of a flow pair to be in the original list of top t flows.
In the detection problem, we are no longer interested in comparing flow pairs both of whose elements are in the top t list. We are only interested in the ranking between flows in the top t list and those outside the list. Therefore, our detection metric is defined as the number of misranked flow pairs, where the first element of a pair is in the list of top t flows and the second element is outside this list (non top t).
The above metrics return one value for each realization of flow sizes and of sampled packets. Given that we want to account for all realizations, we define the performance metrics as the number of misranked flow pairs averaged over all possible values of flow sizes in the original list of N flows and over all sampling runs. We deem the ranking/detection as acceptable when our metric takes a value below one (i.e., on average less than one flow pair is misranked).
In addition to the above, our metrics have the advantage of being easily and exactly calculable. Performance metrics based on probabilities (e.g., [12]) require a lot of assumptions that make them only suitable for computing bounds, but not exact values.
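For concreteness, here is a sketch of how both pair counts can be computed for one realization (one list of true sizes and one list of sampled sizes); averaging over realizations gives the metrics used below. The function name and the toy data are ours.

def ranking_and_detection_errors(true_sizes, sampled_sizes, t):
    """Misranked-pair counts for one realization: the ranking metric counts all
    pairs with at least one flow in the true top t, the detection metric only
    the pairs that straddle the top-t boundary. Ties after sampling count as
    misranked, following the definition of Section 3."""
    order = sorted(range(len(true_sizes)), key=lambda f: true_sizes[f], reverse=True)
    rank_err = det_err = 0
    for i in range(t):
        for j in range(i + 1, len(order)):
            a, b = order[i], order[j]            # true_sizes[a] >= true_sizes[b]
            if sampled_sizes[a] <= sampled_sizes[b]:
                rank_err += 1
                if j >= t:                       # the smaller flow is outside the top t
                    det_err += 1
    return rank_err, det_err

# Toy example: 6 flows, top-2 list; sampling swapped the 2nd and 3rd largest flows.
print(ranking_and_detection_errors([90, 80, 40, 30, 20, 10],
                                    [9, 3, 4, 2, 0, 0], t=2))   # -> (1, 1)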
4.2 Computation of the performance metric for the ranking problem
Consider a flow of i packets belonging to the list of top t flows in the original traffic (before sampling). First, we compute the probability that this flow is misranked with another flow of general size and general position. Denote this probability by P_mt(i), where m stands for misranking and t for top. Then, we average over all values of i to get P̄_mt (note that the distribution of the size of a flow at the top of the list is different from that of a generic flow). This latter quantity gives us the probability that, on average, the top t-th flow is misranked with another flow. Thus, our performance metric, which is defined as the average number of misranked flow pairs where at least one element of a pair is in the top t, is equal to (2N - t - 1) t P̄_mt / 2. Next, we compute the value of P̄_mt.
Let p_i denote the probability that the size of a general flow is equal to i packets, and P_i denote the flow size complementary cumulative distribution, i.e., P_i = Σ_{j≥i} p_j. For a large number of flows N and a high degree of multiplexing, we consider it safe to assume that flow sizes are independent of each other (see [2] for a study of the flow size correlation on an OC-12 IP backbone link). A flow of size i belongs to the list of top t flows if the number of flows in the original total list with a size larger than i is less than or equal to t - 1. Since each flow can be larger than i with probability P_i independently of the other flows, we can write the probability that a flow of size i belongs to the list of the top t flows as P_t(i, t, N) = Σ_{k=0}^{t-1} b_{P_i}(k, N - 1), where b_{P_i}(k, N - 1) is the probability to obtain k successes out of N - 1 trials, P_i being the probability of a success. The probability that the t-th largest flow has a size of i packets is equal to P_t(i) = p_i P_t(i, t, N)/P̄_t(t, N). P̄_t(t, N) is the probability that a flow of general size is among the top t in the original total list, which is simply equal to t/N.
Using the above notation, one can write the misranking probability between a top t flow of original size i packets and any other flow as follows:
P_mt(i) = (1/P_t(i, t, N)) [ Σ_{j=1}^{i-1} p_j P_t(i, t, N - 1) P_m(j, i) + Σ_{j≥i} p_j P_t(i, t - 1, N - 1) P_m(i, j) ].   (8)
In this expression, we sum over all possible original sizes of the other flow (the variable j) and we separate the case when this other flow is smaller than i from the case when it is larger than i (in the latter case, j ≥ i, at most t - 2 flows can be larger than i packets if we want the flow of size i to be in the top t). P_m(i, j) is the misranking probability of two flows of sizes i and j packets, which we calculated in the previous section for the two ranking methods. P̄_mt is then equal to Σ_{i≥1} P_t(i) P_mt(i).
For protocol-aware ranking, P_m(i, j) is given explicitly in Proposition 2 and can be easily computed. For blind ranking, we use the Gaussian approximation summarized in Proposition 1, which we recall holds when at least one of the two flows to be compared is large.
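Putting the pieces together, here is a sketch of how the ranking metric can be evaluated numerically for a given flow size distribution. The truncated, discretized Pareto used below and the default parameters are illustrative choices of ours, not the paper's exact setup; numpy and scipy are assumed to be available.

import math
import numpy as np
from scipy.stats import binom

def ranking_metric_blind(t, N, p, a=3.7, beta=2.0, s_max=300):
    """Average number of misranked flow pairs with at least one flow in the
    true top t (Section 4.2), using the Gaussian approximation (2) for P_m
    and a Pareto flow size distribution discretized and truncated at s_max."""
    sizes = np.arange(1, s_max + 1)
    tail = np.where(sizes <= a, 1.0, (sizes / a) ** (-beta))   # P_i = P{S >= i}
    p_i = tail - np.append(tail[1:], 0.0)                      # point masses, sum to 1

    def P_top(i, tt, n):
        # P_t(i, tt, n): at most tt-1 of the other n-1 flows exceed size i.
        return binom.cdf(tt - 1, n - 1, tail[i - 1])

    def P_m(i, j):
        # Gaussian approximation (2); equal sizes treated as always misranked.
        if i == j:
            return 1.0
        return 0.5 * math.erfc(abs(i - j) / math.sqrt(2 * (1 / p - 1) * (i + j)))

    avg = 0.0                                                  # accumulates the average P_mt
    for i in sizes:
        below = sum(p_i[j - 1] * P_m(j, i) for j in range(1, i))
        above = sum(p_i[j - 1] * P_m(i, j) for j in range(i, s_max + 1))
        inner = P_top(i, t, N - 1) * below + P_top(i, t - 1, N - 1) * above
        # P_t(i) * P_mt(i): the 1/P_t(i, t, N) factor of (8) cancels against P_t(i).
        avg += p_i[i - 1] / (t / N) * inner
    return (2 * N - t - 1) * t / 2 * avg

print(ranking_metric_blind(t=1, N=10_000, p=0.1))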
4.3 Computation of the performance metric for the detection problem
Consider the probability that a flow among the top t is swapped with a flow that does not belong to the top t. Let P̄'_mt denote this probability. Following the same approach described in Section 4.2, we can write
P̄'_mt = (1/P̄_t) Σ_{i≥1} Σ_{j=1}^{i-1} p_i p_j P_t(j, i, t, N) P_m(j, i).
To get this expression for P̄'_mt, we sum over all possible values for the size of the flow in the top t (index i) and all possible values for the size of the other flow not among the top t (index j). In this expression, p_i and p_j represent the probability that the size of a flow is equal to i or j packets, respectively. P_m(j, i) is the probability that two flows of sizes i and j are misranked; it is given by the Gaussian approximation described in Proposition 1 for the blind method and the result stated in Proposition 2 for the protocol-aware method. P_t(j, i, t, N) is the joint probability that a flow of size i belongs to the list of the top t flows while another flow of size j does not belong to it (i.e., it is in the bottom N - t flows). P̄_t is the joint probability that a flow of any size belongs to the list of the top t flows while another flow of any size does not belong to this list. It is equal to t(N - t)/(N(N - 1)).
We now compute P_t(j, i, t, N) for j < i, i.e., the probability that flow i belongs to the top list while flow j does not. The number of flows larger than i should be smaller than t, while the number of flows larger than j should be larger than t. The probability that a flow size is larger than i is P_i = Σ_{k≥i} p_k. The probability that it is larger than j is P_j = Σ_{k≥j} p_k. The probability that a flow size is between j and i given that it is smaller than i is (P_j - P_i)/(1 - P_i). We call it P_{j,i}. It follows that:
P_t(j, i, t, N) = Σ_{k=0}^{t-1} b_{P_i}(k, N - 2) Σ_{l=t-k-1}^{N-k-2} b_{P_{j,i}}(l, N - k - 2).
The first sum accounts for the probability to see fewer than t flows above i packets. The second sum accounts for the probability to see more than t flows above j given that k flows (k < t) were already seen above i. For t = 1, P_t(j, i, t, N) is no other than P_t(i, t, N - 1), and P̄_mt and P̄'_mt are equal (i.e., the ranking and the detection problems are the same).
Once P̄'_mt is computed, we multiply it by the total number of flow pairs whose one element is in the top t and the other one is not. This total number is equal to t(N - t). Our metric for the detection problem is the result of this multiplication. As for the ranking problem, we want this metric to be less than one for the detection of the top t flows to be accurate.
Table 1: Summary of the traces.
Trace: Jussieu | Abilene
Link speed: GigE (1 Gbps) | OC-48 (2.5 Gbps)
Duration: 2 hours | 30 minutes
TCP connections: 11M | 15M
Packets: 112M | 125M
NUMERICAL RESULTS
We analyze now the accuracy of identifying and ranking
the largest flows in a packet stream for both the blind and
protocol-aware methods. Our metrics require the following
input: p_i, the flow size distribution, and N, the total number
of flows observed on the link during the measurement period.
To derive realistic values for these two quantities, we consider
two publicly available packet-level traces. The first
trace is Abilene-I collected by NLANR [15] on an OC-48
(2.5 Gbps) link on the Abilene Network [1]. The second
trace has been collected by the Metropolis project [13] on
a Gigabit Ethernet access link from the Jussieu University
campus in Paris to the Renater Network [18]. Table 1 summarizes
the characteristics of the two traces.
We model the flow size distribution in the traces with
Pareto. We opted for Pareto since it is known to be appropriate
to model flow sizes in the Internet due to its heavy
tailed feature [6]. Note that it is not our goal to find an accurate
approximation of the distribution of flow sizes in our
traces, but rather to find a general, well-known, distribution
that approaches the actual flow size. In this section we analyze
a wide range of parameters while Section 6 focuses on
the performance we observe in the two packet-level traces.
The Pareto distribution is continuous with a complementary
cumulative distribution function given by P{S > x} = (x/a)^{-β}, where β > 0 is a parameter describing the shape of the distribution and a > 0 is a parameter describing its scale. The Pareto random variable takes values larger than a, and has an average value equal to aβ/(β - 1). The tail of the Pareto distribution becomes heavier as β decreases.
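For reference, flow sizes with this distribution can be drawn by inverse-transform sampling; a short Python sketch with illustrative parameter values (a and β would in practice be fitted to a trace as described below):

import random

def pareto_flow_size(a, beta, rng=random):
    """Draw one flow size from a Pareto with scale a and shape beta:
    P{S > x} = (x/a)^(-beta) for x >= a (inverse-transform sampling)."""
    u = 1.0 - rng.random()          # uniform in (0, 1]
    return a * u ** (-1.0 / beta)

print([round(pareto_flow_size(a=3.7, beta=2.0), 1) for _ in range(5)])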
We use our traces to derive an indicative value of the
shape parameter β. To this end, we compute the empirical
complementary cumulative distribution of flow sizes and we
Figure 1: Empirical flow size distribution (complementary CDF of flow size in Kbytes on a log-log scale for the Abilene and Jussieu traces, with reference slopes x^{-2} and x^{-1.5}, respectively).
plot it on a log-log scale. A heavy-tailed distribution of
shape parameter β decays linearly on a log-log scale at rate -β. The empirical distributions are shown in Figure 1. The plots show that β equal to 2 suits the Abilene trace and β equal to 1.5 suits the Jussieu one. This means that the flow
size distribution has a heavier tail in the Jussieu trace.
Then, we compute the average flow size in packets to get
the starting point a for the Pareto distribution. As an average
flow size we measure 5.76 Kbytes and 7.35 packets on
the Abilene trace, and 9.22 Kbytes and 9.9 packets on the
Jussieu trace. The total number of flows N is set by taking a
measurement interval equal to one minute, then multiplying
this interval by the average arrival rate of flows per second
on each trace. This gives N = 487 Kflows for the Abilene
trace and N = 103 Kflows for the Jussieu one.
In the rest of this section, all figures plot the ranking metric
versus the packet sampling rate p on a log-log scale. We
vary p from 0.1% to 50%. Each figure shows different lines
that correspond to different combinations of t, β, and N.
We are interested in the regions where the value of the metric
is below one, indicating that the ranking is accurate on
average. To ease the interpretation of results in the figures,
we plot the horizontal line of ordinate 1.
5.1 Blind ranking
5.1.1 Impact of the number of flows of interest
The first parameter we study is t, the number of largest
flows to rank. The purpose is to show how many flows can
be detected and ranked correctly for a given sampling rate.
We set β, N, and the average flow size to the values described
before. The performance of blind ranking the top t
flows is shown in Figure 2 for both traces. We observe that
the larger the number of top flows of interest, the more difficult
it is to detect and rank them correctly. In particular,
with a sampling rate on the order of 1%, it is possible to
rank at most the top one or two flows. As we focus at larger
values of t, the required sampling rate to get a correct ranking
increases well above 10%. Note that with a sampling
rate on the order of 0.1%, it is almost impossible to detect
even the largest flow. We also observe that the ranking on
the Jussieu trace behaves slightly better than that on the
Abilene trace. The Jussieu trace has a heavier tail for its
flow size distribution, and so the probability to get larger
flows at the top of the list is higher, which makes the ranking
more accurate. This will be made clear next as we will
study the impact of the shape parameter .
5.1.2 Impact of the flow size distribution
Figure 2: Performance of blind ranking varying the number t of top flows of interest (average number of misranked flow pairs vs. packet sampling rate for t = 1, 2, 5, 10, and 25; Abilene trace with N = 487K and β = 2, Jussieu trace with N = 103K and β = 1.5).
We consider the blind ranking of the top 10 flows, varying
the shape parameter β for the Pareto distribution among five distinct values: 3, 2.5, 2, 1.5, and 1.2. Note that for β ≤ 2
the Pareto distribution is known to be heavy tailed (infinite
variance). The other parameters of the model (N and the
average flow size) are set as before. The values taken by our
metric are shown in Figure 3 for both traces. We can make
the following observations from the figure:
Given a sampling rate, the ranking accuracy improves
as β becomes smaller, i.e., the tail of the flow size
distribution becomes heavier. Indeed, when the distribution
tail becomes heavier, the probability to obtain
larger flows at the top of the list increases, and since it
is simpler to blindly rank larger flows (for distributions
satisfying the square root condition, see Section 3.1.1),
the ranking becomes more accurate.
The ranking is never correct unless the sampling rate is
very high. In our setting, one needs to sample at more
than 50% to obtain an average number of misranked
flow pairs below one for a value of β equal to 1.5 (i.e., a heavy-tailed distribution), and at more than 10% for a value of β equal to 1.2 (i.e., a pronounced heavy tail). For larger values of β (i.e., a lighter tail),
the sampling rate needs to be as high as 100%.
5.1.3 Impact of the total number of flows
Another important parameter in the ranking problem is
N , the total number of flows available during the measurement
period. When N increases, the flows at the top of
the list should become larger, and therefore as we saw in
Section 3.1.1, the blind ranking accuracy should improve
for flow size distributions satisfying the square root condition (in particular the Pareto distribution we are considering here). N varies with the utilization of the monitored link: the higher the utilization, the larger the number of flows. N can also vary with the duration of the measurement period: the longer we wait before ranking and reporting results, the larger the number of flows.
Figure 3: Performance of blind ranking varying the shape parameter β of the flow size distribution (average number of misranked flow pairs vs. packet sampling rate for β = 3, 2.5, 2, 1.5, and 1.2; Abilene trace with N = 487K and Jussieu trace with N = 103K, t = 10 flows).
We study the impact of N on the blind ranking accuracy. We take the same value of N used in the previous sections and computed over a one-minute measurement period (487 Kflows for the Abilene trace and 103 Kflows for the Jussieu trace), and we multiply it by a constant factor ranging from 0.5 (2 times fewer flows) to 5 (5 times more flows). Results are shown in Figure 4. The lines in the figures correspond to factor values of 0.5, 1, 2.5, and 5. In these figures, we consider the ranking of the top 10 flows with the values of β and the average flow size set from the traces. Clearly, the ranking accuracy improves as N increases. However, in our setting, this improvement is still not enough to allow a perfect ranking. One can always imagine increasing N (e.g., by increasing the measurement period) until the top t flows are extremely large and hence perfectly detected and ranked.
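To make the numerical model used in this section concrete, the following sketch (our illustration, not the authors' code) draws N flow sizes from a Pareto distribution with shape parameter beta and a chosen average size, simulates independent packet sampling at rate p as a binomial thinning of each flow, and approximates the performance metric as the number of swapped pairs involving the true top-t flows; the exact metric is defined in Section 4.1, so this pair-counting function and all names are illustrative assumptions.

    import random

    def pareto_sizes(n, beta, avg_size, rng):
        # Pareto with shape beta; the scale xm is chosen so that the mean
        # equals avg_size (mean = beta * xm / (beta - 1) for beta > 1).
        xm = avg_size * (beta - 1.0) / beta
        return [xm / ((1.0 - rng.random()) ** (1.0 / beta)) for _ in range(n)]

    def sampled_count(size_pkts, p, rng):
        # Independent packet sampling at rate p (binomial thinning of the flow).
        return sum(1 for _ in range(size_pkts) if rng.random() < p)

    def swapped_pairs_top_t(true_sizes, sampled_sizes, t):
        # Pairs (i, j) with i among the true top-t flows, i truly larger than j,
        # but ranked at or below j after sampling.
        order = sorted(range(len(true_sizes)), key=lambda i: -true_sizes[i])
        swapped = 0
        for i in order[:t]:
            for j in range(len(true_sizes)):
                if true_sizes[i] > true_sizes[j] and sampled_sizes[i] <= sampled_sizes[j]:
                    swapped += 1
        return swapped

    rng = random.Random(0)
    N, beta, avg, p, t = 10000, 2.0, 50, 0.01, 5   # toy values, far smaller than the traces
    sizes = [max(1, int(round(s))) for s in pareto_sizes(N, beta, avg, rng)]
    sampled = [sampled_count(s, p, rng) for s in sizes]
    print(swapped_pairs_top_t(sizes, sampled, t))

Averaging this count over several independent runs should reproduce the qualitative behaviour of Figures 2 to 4: accuracy improves with the sampling rate, with smaller t, with heavier tails and with larger N.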
5.2 Protocol-aware ranking
Protocol-aware ranking takes advantage of the information
carried in the transport header of the sampled packets
to infer the number of non-sampled packets of a flow. We
use our model to check whether this improvement exists and
to evaluate it.
Figure 4: Performance of blind ranking varying the total number of flows (top: Abilene trace, beta = 2, t = 10, N = 244K, 487K, 1.2M, 2.4M; bottom: Jussieu trace, beta = 2, t = 10, N = 52K, 103K, 258K, 515K; x-axis: packet sampling rate (%), y-axis: average number of misranked flow pairs).
Remember that we are always in the context of low retransmission and duplication rates, which is neces-
sary to remove the discrepancy between carried data volume
(throughput) and application data volume (goodput).
Using the previous values for N, β, and the average flow size,
we reproduce Figure 2, but this time for the protocol-aware
case. This leads to Figure 5, which illustrates the impact of
the number of largest flows to rank. For lack of space, we
omit the other figures.
We compare this new figure to its counterpart in the blind
case. We make the following two observations:
(i) The protocol-aware method improves the accuracy of
the largest flows ranking by an order of magnitude for high
sampling rates (above 1%). For example, for the Abilene
trace, a sampling rate on the order of 50% was necessary to
detect and rank the largest 5 flows with the blind method.
Now, with the protocol-aware method, a sampling rate on
the order of 5% is sufficient. The same conclusion applies to the Jussieu trace: with the blind method, a sampling rate on the order of 10% is needed, while with the protocol-aware method, it becomes on the order of 1%.
(ii) The protocol-aware method does not improve the performance
when applied at low sampling rates (below 1%). This can be clearly seen if we compare the plots between both figures for sampling rates below 1%. This result confirms
our observations in Section 3.3.2.
5.3 Largest flows detection
To illustrate the difference between ranking and detection,
we consider the same scenario as in Section 5.1.1. We plot
the detection metric as a function of the sampling rate for
different values of t (the number of top flows of interest) and
for both Abilene and Jussieu traces.
Figure 5: Performance of protocol-aware ranking varying the number t of top flows of interest (top: Abilene trace, N=487K, beta=2; bottom: Jussieu trace, N=103K, beta=1.5; x-axis: packet sampling rate (%), y-axis: average number of misranked flow pairs; curves for t = 1, 2, 5, 10, 25).
This gives Figure 6 for
blind ranking and Figure 7 for protocol-aware ranking. A
comparison between these results and their counterparts in
Figures 2 and 5, respectively, shows a significant improvement
in the detection case for both ranking methods. All
plots are shifted down by an order of magnitude. For example
, in the case of blind ranking, the required sampling
rate to correctly rank the top 5 flows was around 50% for
the Abilene trace and 10% for the Jussieu trace. Now, with
blind detection, it is around 10% and 3%, respectively. Another
example is with the protocol-aware method where a
sampling rate around 10% was required to rank the largest
10 flows (Figure 5), whereas now, a sampling rate around
1% is sufficient to only detect them. The same gain can be
observed if we reconsider the other scenarios in Section 5.1
(not presented here for lack of space). Also, note how in
the detection case the protocol-aware method allows a better
accuracy for high sampling rates when compared to the
blind method. For low sampling rates (e.g., below 1%), the
accuracy does not improve.
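The detection metric itself is defined in Section 4.1, which is not reproduced here; the small function below is only one plausible formalization (ours) consistent with the description above: it ignores the relative order inside the top t and counts a pair only when a flow outside the true top t overtakes a flow inside it after sampling.

    def detection_errors(true_sizes, sampled_sizes, t):
        # Count pairs (i, j) with i in the true top-t, j outside it,
        # where the sampled counts rank j at or above i.
        order = sorted(range(len(true_sizes)), key=lambda i: -true_sizes[i])
        top, rest = order[:t], order[t:]
        return sum(1 for i in top for j in rest
                   if sampled_sizes[i] <= sampled_sizes[j])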
EXPERIMENTAL RESULTS
In this section we present the results of running random
sampling experiments directly on the packet traces. We use
the traces described in Section 5 and compute the performance
metrics defined in Section 4.1.
In our traces we consider only TCP packets. Since TCP
sequence numbers count bytes, we express the flow sizes in
bytes instead of packets throughout this section.
Figure 6: Only detecting the largest flows: Performance of blind ranking varying the number t of top flows of interest (top: Abilene trace, N=487K, beta=2, blind detection; bottom: Jussieu trace, N=103K, beta=1.5, blind detection; x-axis: packet sampling rate (%), y-axis: average number of misranked flow pairs; curves for t = 1, 2, 5, 10, 25).
Our experiments are meant to address four major issues that arise when we move from the analytical study to a real network setting: (i) how to deal with invalid TCP sequence
numbers in the packet stream; (ii) the importance of flow
size distributions and duration of the measurement interval;
(iii) the impact of packet loss rates on individual flows (lost packets trigger retransmissions by the TCP senders); (iv)
the variability of the detection/ranking performance across
multiple bins and packet sampling patterns.
6.1 Implementation of protocol-aware ranking
The protocol-aware method depends on TCP sequence
numbers to perform the ranking. For a given flow, it keeps
track of the lowest and highest sequence number observed
(taking care of packets that wrap around the sequence number
space), s_b and s_e respectively.
Note that an actual implementation of this method would
just require two 32 bit fields per flow to store the two sequence
numbers.
At the end of the measurement period, we compute the
difference between the highest and lowest sequence numbers
for each sampled flow, and we use the obtained values to
rank flows. We then compare this ranking with the one
obtained by counting all the bytes each flow transmits in
the original non sampled traffic.
Figure 7: Only detecting the largest flows: Performance of protocol-aware ranking varying the number t of top flows of interest (top: Abilene trace, N=487K, beta=2, protocol-aware detection; bottom: Jussieu trace, N=103K, beta=1.5, protocol-aware detection; x-axis: packet sampling rate (%), y-axis: average number of misranked flow pairs; curves for t = 1, 2, 5, 10, 25).
In order to discard invalid packets carrying incorrect sequence numbers that would corrupt the ranking, we implement a simple heuristic to update s_e and s_b. A sampled packet with sequence number S > s_e causes an update s_e <- S only if S - s_e is below a threshold proportional to MTU/p. The same rule applies to the updates of s_b. This way we set a limit on the maximum distance in the sequence space between two sampled pack-
ets. This distance is inversely proportional to the sampling
rate and depends on the Maximum Transmission Unit.
Furthermore, the proportionality factor in this threshold allows us to make it more or less "permissive" in order to account for the randomness of the sampling process and for other transport-layer events (e.g., packet retransmissions when the TCP window is large). We have run several experiments with different values of this factor and the results have shown little sensitivity to values above 10. All the results in this section are derived with the factor set to 100.
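The following sketch (ours, not the authors' implementation) illustrates the bookkeeping described in this section: it keeps s_b and s_e per flow, applies a permissive update threshold proportional to MTU/p, and ranks flows by s_e - s_b. The class and parameter names, the default factor of 100 and the flow key are illustrative assumptions; sequence-number wrap-around handling is omitted for brevity.

    MTU = 1500  # bytes, assuming an Ethernet MTU

    class FlowState:
        """Lowest (s_b) and highest (s_e) TCP sequence numbers seen among
        the sampled packets of one flow."""
        def __init__(self, seq):
            self.s_b = seq
            self.s_e = seq

    class ProtocolAwareRanker:
        def __init__(self, sampling_rate, factor=100):
            # Maximum plausible jump between two sampled sequence numbers.
            self.max_gap = factor * MTU / sampling_rate
            self.flows = {}   # flow key (e.g. a 5-tuple) -> FlowState

        def add_sampled_packet(self, key, seq):
            st = self.flows.get(key)
            if st is None:
                self.flows[key] = FlowState(seq)
            elif seq > st.s_e and seq - st.s_e < self.max_gap:
                st.s_e = seq          # accept the new highest sequence number
            elif seq < st.s_b and st.s_b - seq < self.max_gap:
                st.s_b = seq          # same rule for the lowest one

        def top_flows(self, t):
            # Estimated transport-level size of each flow: s_e - s_b.
            est = {k: st.s_e - st.s_b for k, st in self.flows.items()}
            return sorted(est.items(), key=lambda kv: -kv[1])[:t]

    ranker = ProtocolAwareRanker(sampling_rate=0.01)
    ranker.add_sampled_packet(("10.0.0.1", "10.0.0.2", 80), 1000)
    ranker.add_sampled_packet(("10.0.0.1", "10.0.0.2", 80), 900000)
    print(ranker.top_flows(1))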
6.2 Flow size distribution and measurement interval
As shown in Figure 1, flow size distributions do not follow
a perfect Pareto. Furthermore, the measurement interval
itself plays a major role in shaping the distribution: it caps
the size of the largest flows, which is no longer unbounded but now
depends on the link speed. Indeed, network operators often
run measurements using a "binning" method, where packets
are sampled for a time interval, classified into flows, ranked,
and then reported. At the end of the interval, the memory is
cleared and the operation is repeated for the next measurement
interval. With this binning method, all flows active at
the end of the measurement interval are truncated, so that
not all sampled packets of the truncated flow are considered
at the same time for the ranking. The truncation may,
therefore, penalize large flows and alter the tail of the flow
size distribution (where flows are of large size and probably
last longer than the measurement interval).
Figure 8: Performance of blind and protocol-aware ranking on Jussieu trace (60s measurement interval; top: blind ranking, bottom: protocol-aware ranking; x-axis: packet sampling rate (%), y-axis: average number of misranked flow pairs; curves for top 1, 2, 5, 10, 25).
Each experiment consists of the following. We run ran-
dom sampling on the packet traces and classify the sampled
packets into flows. At the end of each measurement interval
(set to 1 or 5 minutes), we collect the flows and rank them
by the number of bytes sampled for each flow. We compare
the ranking before and after sampling using our performance
metric (Section 4.1). For each sampling rate we conduct 15
runs and we calculate averages.
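As a sketch of this experimental procedure (ours, with illustrative names and the same pair-counting approximation of the metric as in the earlier sketch; the precise metric is defined in Section 4.1), the function below samples the packets of one measurement bin at rate p, aggregates the sampled bytes per flow, and averages the number of swapped top-t pairs over several runs.

    import random

    def run_experiment(packets, p, t, runs=15, seed=0):
        # packets: list of (flow_id, size_bytes) records from one measurement bin.
        packets = list(packets)
        true_bytes = {}
        for fid, size in packets:
            true_bytes[fid] = true_bytes.get(fid, 0) + size
        true_top = sorted(true_bytes, key=true_bytes.get, reverse=True)[:t]

        total = 0
        for r in range(runs):
            rng = random.Random(seed + r)
            sampled = {}
            for fid, size in packets:
                if rng.random() < p:            # random packet sampling
                    sampled[fid] = sampled.get(fid, 0) + size
            for i in true_top:
                for j, tb in true_bytes.items():
                    if true_bytes[i] > tb and sampled.get(i, 0) <= sampled.get(j, 0):
                        total += 1
        return total / runs

    # toy bin: one elephant flow and 500 mice
    bin_packets = [("elephant", 1500)] * 2000 + [("mouse%d" % k, 1500) for k in range(500)]
    print(run_experiment(bin_packets, p=0.05, t=1))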
The results of the experiments confirm the numerical results
of the previous section. In the interest of space, we
plot the results of two representative experiments on which
we make several observations. The difference between numerical
and experimental results, especially at low sampling
rates, is caused by the imperfect match of the empirical
flow size distribution with Pareto (Figure 1).
Figure 8 shows the performance of ranking flows on the
Jussieu trace when the measurement bin is 60s. We consider
a wide range of sampling rates from 0.1% to 50% and study
the performance when ranking the top 1, 2, 5, 10 and 25
flows in the packet stream. The top graph in Figure 8 is derived
using the blind method while the bottom graph shows
the performance of the protocol-aware methods. These results
are very similar to the numerical results. For sampling
rates above 1%, protocol-aware ranking gives approximately
an order of magnitude gain on the performance when compared
to blind ranking. When the sampling rate is lower
than 1%, however, the performance of the two methods is
similar. Overall, the blind method requires a sampling rate
of 10% to correctly identify the largest flow in the packet
stream. The same sampling rate allows us to correctly rank
the largest 5 flows when using the protocol-aware method.
6.3 Impact of loss rate
In the analysis of the protocol-aware method in Section 3.2,
we made the assumption of a negligible number of retransmissions
for all the flows in the packet stream.
A retransmitted packet may cause inconsistency between
the blind and protocol-aware method depending on the location
of the monitoring point. Indeed, the blind method
counts the total number of bytes sent by the flow while the
protocol-aware method considers only the data sent by the
transport layer. Therefore, if the packet is lost before the
monitoring point, the blind and protocol-aware method will
have a consistent view of the number of bytes sent. Instead,
if the packet is lost after the monitoring point, the blind
method may count this packet twice.
The impact of packet losses on the detection and ranking
of the largest flows depends on the metric used to estimate
the size of the flows. If flow sizes are estimated according to
the total number of bytes sent (i.e., the throughput), then
the protocol-aware method may incur an underestimation
error that is independent of the sampling rate (it will occur
even if all packets are sampled!). On the other hand, if the
flow sizes are estimated according to the transport data sent
(i.e., the goodput), then the blind method may incur an
overestimation error independently of the sampling rate.
To illustrate the effect of packet loss rates, we plot in
Figure 9 the performance of detecting the largest flows in the
Abilene trace when the measurement bin is 5 minutes and
the flow sizes are measured using the total number of bytes
sent over the link. The top graph shows the performance
of the blind method, while the bottom graph presents the
results for the protocol-aware method.
We can make the following observations:
- The protocol-aware method keeps performing better than the blind method when the sampling rate is above 1%. At lower sampling rates, the blind method performs better although it presents very large errors.
- For sampling rates above 2%, the curve relative to the detection of the top-25 flows in the protocol-aware method flattens to a value around 70. This is due to the presence of a few flows that experience a high loss rate when compared to other flows. Increasing the sampling rate does not help the protocol-aware method in detecting the largest flows when the volume of bytes sent is used to define the flow size. However, the protocol-aware method can correctly detect the top-25 flows when their size is defined in terms of transport data (see Figure 10).
In summary, the network operator has to choose the metric of interest depending on the application. For example, for anomaly detection or traffic engineering, a metric that counts the number of bytes sent may be more appropriate. Instead, for dimensioning caches and proxies, a metric that considers the size of the objects transferred may be preferred. This latter metric is better suited to the protocol-aware method.
6.4 Variability of the results
A last important aspect that we need to address is the
variability of the results across multiple measurement intervals
and different realizations of the sampling process.
Figure 9: Performance of blind (top) and protocol-aware (bottom) detection on Abilene trace (300s measurement interval; x-axis: packet sampling rate (%), y-axis: average number of misranked flow pairs; curves for top 1, 2, 5, 10, 25).
Indeed, moving from one measurement interval to another,
the composition of flows varies and with it the flow size distribution
. Moreover, the sampling process may "get lucky"
in certain cases and provide good results. The opposite is
also possible.
Figure 11 shows the average performance over 15 sampling
experiments of the detection of the top-10 flows in
the Abilene trace over the 5-minute measurement intervals.
The error bars indicate the standard deviation across the
15 experiments. As usual, the top graph refers to the blind
method, while the bottom graph presents the protocol-aware
method results.
As we can see, the average performance shows limited
variability. A sampling rate of 0.1% gives poor results for
all bins, while increasing the sampling rates consistently
helps. With a sampling rate of 10% the performance metric
(i.e., average number of misranked flow pairs) for the
blind method is always below 100 while the protocol-aware
method is always below 1.
Looking at the standard deviation, we observe large values
for the blind method and much smaller values for the
protocol-aware method. This indicates that the blind method
is more sensitive to the sampling process than the protocol-aware
method. The explanation is given in Section 3.3.2
where we showed that the blind method presents a larger error for large flow sizes (except when the sampling
rate is very low).
CONCLUSIONS
We study the problem of detecting and ranking the largest flows from traffic sampled at the packet level.
Figure 10: Performance of protocol-aware detection on Abilene trace (300s measurement interval) when using the actual amount of data sent by the transport layer application (x-axis: packet sampling rate (%), y-axis: average number of misranked flow pairs; curves for top 1, 2, 5, 10, 25).
The study is
done with stochastic tools and real packet-level traces. We
find that the ranking accuracy is strongly dependent on the
sampling rate, the flow size distribution, the total number
of flows and the number of largest flows to be detected and
ranked. By changing all these parameters, we conclude that
ranking the largest flows requires a high sampling rate (10%
and even more). One can reduce the required sampling rate
by only detecting the largest flows without considering their
relative order.
We also introduce a new method for flow ranking that
exploits the information carried in the transport header. By analysis and experimentation, we demonstrate that this new technique allows us to reduce the required sampling rate by an
order of magnitude.
We are currently exploring two possible future directions
for this work. First, we want to study the accuracy of the
ranking when the sampled traffic is fed into one of the mechanisms
proposed in [10, 12] for sorting flows with reduced
memory requirements. Second, we are exploring the use of
adaptive schemes that set the sampling rate based on the
characteristics of the observed traffic.
Acknowledgements
We wish to thank NLANR [15], Abilene/Internet2 [1] and
the Metropolis project [13] for making available the packet
traces used in this work.
REFERENCES
[1] Abilene: Advanced networking for leading-edge research and
education. http://abilene.internet2.edu.
[2] C. Barakat, P. Thiran, G. Iannaccone, C. Diot, and
P. Owezarski. Modeling Internet backbone traffic at the flow
level. IEEE Transactions on Signal Processing (Special Issue
on Signal Processing in Networking), 51(8):2111-2124, Aug.
2003.
[3] M. Charikar, K. Chen, and M. Farach-Colton. Finding frequent
items in data streams. In Proceedings of ICALP, 2002.
[4] B. Y. Choi, J. Park, and Z. Zhang. Adaptive packet sampling
for flow volume measurement. Technical Report TR-02-040,
University of Minnesota, 2002.
[5] G. Cormode and S. Muthukrishnan. What's hot and what's
not: Tracking most frequent items dynamically. In Proceedings
of ACM PODS, June 2003.
[6] M. Crovella and A. Bestravos. Self-similarity in the World
Wide Web traffic: Evidence and possible causes. IEEE/ACM
Transactions on Networking, 5(6):835-846, Dec. 1997.
Figure 11: Performance of blind (top) and protocol-aware (bottom) detection over multiple 300s intervals (Abilene trace); x-axis: time (sec), y-axis: average number of swapped flow pairs; curves for sampling rates 0.1%, 1% and 10%. Vertical bars show the standard deviation over multiple experiments.
[7] E. Demaine, A. Lopez-Ortiz, and I. Munro. Frequency
estimation of internet packet streams with limited space. In
Proceedings of 10th Annual European Symposium on
Algorithms, 2002.
[8] N. G. Duffield, C. Lund, and M. Thorup. Properties and
prediction of flow statistics from sampled packet streams. In
Proceedings of ACM Sigcomm Internet Measurement
Workshop, Nov. 2002.
[9] N. G. Duffield, C. Lund, and M. Thorup. Estimating flow
distributions from sampled flow statistics. In Proceedings of
ACM Sigcomm, Aug. 2003.
[10] C. Estan and G. Varghese. New directions in traffic
measurement and accounting. In Proceedings of ACM
Sigcomm, Aug. 2002.
[11] N. Hohn and D. Veitch. Inverting sampled traffic. In
Proceedings of ACM Sigcomm Internet Measurement
Conference, Oct. 2003.
[12] J. Jedwab, P. Phaal, and B. Pinna. Traffic estimation for the
largest sources on a network, using packet sampling with
limited storage. Technical Report HPL-92-35, HP
Laboratories, Mar. 1992.
[13] Metropolis: METROlogie Pour l'Internet et ses services.
http://www.laas.fr/ owe/METROPOLIS/metropolis eng.html.
[14] T. Mori, M. Uchida, R. Kawahara, J. Pan, and S. Goto.
Identifying elephant flows through periodically sampled
packets. In Proceedings of ACM Sigcomm Internet
Measurement Conference, Oct. 2004.
[15] NLANR: National Laboratory for Applied Network Research.
http://www.nlanr.net.
[16] Packet Sampling Working Group. Internet Engineering Task
Force. http://www.ietf.org/html.charters/psamp-charter.html.
[17] K. Papagiannaki, N. Taft, and C. Diot. Impact of flow
dynamics on traffic engineering design principles. In
Proceedings of IEEE Infocom, Hong Kong, China, Mar. 2004.
[18] Renater. http://www.renater.fr.
[19] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson.
RTP: A transport protocol for real-time applications. RFC
1889, Jan. 1996.
[20] A. Shaikh, J. Rexford, and K. G. Shin. Load-sensitive routing
of long-lived IP flows. In Proceedings of ACM Sigcomm, Sept.
1999.
[21] M. Spiegel. Theory and Problems of Probability and
Statistics. McGraw-Hill, 1992.
199 | largest flow detection and ranking;validation with real traces;Packet sampling;performance evaluation |
164 | Ranking Target Objects of Navigational Queries | Web navigation plays an important role in exploring public interconnected data sources such as life science data. A navigational query in the life science graph produces a result graph which is a layered directed acyclic graph (DAG). Traversing the result paths in this graph reaches a target object set (TOS). The challenge for ranking the target objects is to provide recommendations that reflect the relative importance of the retrieved object, as well as its relevance to the specific query posed by the scientist. We present a metric layered graph PageRank (lgPR) to rank target objects based on the link structure of the result graph. LgPR is a modification of PageRank; it avoids random jumps to respect the path structure of the result graph. We also outline a metric layered graph ObjectRank (lgOR) which extends the metric ObjectRank to layered graphs. We then present an initial evaluation of lgPR. We perform experiments on a real-world graph of life sciences objects from NCBI and report on the ranking distribution produced by lgPR. We compare lgPR with PageRank. In order to understand the characteristics of lgPR, an expert compared the Top K target objects (publications in the PubMed source) produced by lgPR and a word-based ranking method that uses text features extracted from an external source (such as Entrez Gene) to rank publications. | INTRODUCTION
The last few years have seen an explosion in the number
of public Web accessible data sources, Web services and semantic
Web applications. While this has occurred in many
domains, biologists have taken the lead in making life science
data public, and biologists spend a considerable amount of
time navigating through the contents of these sources, to
obtain information that is critical to their research.
Providing meaningful answers to queries on life science
data sources poses some unique challenges. First, information
about a scientific entity, e.g., genes, proteins, sequences
and publications, may be available in a large number of autonomous
sources and several sources may provide different
descriptions of some entity such as a protein. Second,
the links between scientific objects (links between data entries
in the different sources) are important in this domain
since they capture significant knowledge about the relationship
and interactions between these objects. Third, interconnected
data entries can be modeled as a large complex
graph. Queries could be expressed as regular expression navigational
queries and can more richly express a user's needs,
compared to simpler keyword based queries.
Consider the following navigational query: Retrieve publications
related to the gene 'tnf ' that are reached by traversing
one intermediate (protein or sequence) entry. This query
expresses the scientist's need to expand a search for gene related
publications beyond those publications whose text directly
addresses the 'tnf' gene, while still limiting the search
to publications that are closely linked to gene entries.
Consider gene sources OMIM Gene and Entrez Gene, protein
sources NCBI Protein and SwissProt, sequences in NCBI
Nucleotide and biomedical publications in PubMed. Figure
1 represents the results of evaluating this navigational query
against these sources. The result is a layered DAG; we refer
to it as a result graph (RG). All paths in this directed result
graph (RG) start with data entries in the sources OMIM
Gene or Entrez Gene; this is the first layer. They visit one
intermediate data entry in sources NCBI Protein, Swiss Prot
or NCBI Nucleotide (second layer) and they terminate in a
publication data entry in PubMed (final layer).
Figure 1: An example of a result graph (RG); the layers contain entries from OMIM Gene and Entrez Gene (filtered by the keyword "tnf"), then NCBI Protein, Swiss Prot and NCBI Nucleotide, then PubMed.
The query returns all objects in PubMed that are reached
by traversing result paths; these PubMed entries are referred to as the target object set (TOS) reached by traversing
the result paths of the RG. In contrast, a keyword based
query would not have been able to specify the set of target
publications. Navigational queries, the RG and the target
object set (TOS) that answers the query are defined in the
paper.
It is difficult for a user to explore all target objects in a
reasonable amount of time and it is important to provide
a ranking of the TOS. As is well known, word based ranking
methods are very good at identifying the most relevant
results, typically using features extracted from the contents
of the target objects. For example [13] produces a ranking
of documents in PubMed that are most relevant to a gene.
In contrast, PageRank [11] focuses on the importance of the
target object and importance is transferred from other important
objects via the link structure. A recent technique
ObjectRank [1] addresses both relevance and importance; it
exploits schema knowledge to determine the correct authority
transfer between important pages. We note that there
is also research on ranking paths [2]. For term-based query
dependent ranking, we refer to [3, 12].
The focus of this paper is to produce a ranking method
to select the best target objects in the RG that answer the
navigational query. Our ideal ranking must identify target
objects that are both relevant and important. The ranking
must also be query dependent since we must guarantee that
the target objects that are ranked indeed occur in the RG
and answer the navigational query. Further, both relevance
and importance must be determined with respect to the objects
in TOS, rather than with respect to all the data entries
(as is the case with PageRank).
We propose two ranking metrics for the layered graph
RG; they are layered graph PageRank (lgPR) and layered
graph ObjectRank (lgOR). lgPR extends PageRank by distinguishing
different roles (intermediate node, answer node)
which can be played by the same node in the result graph.
It does not perform random jumps so as to respect the RG.
Our second metric lgOR is an extension to ObjectRank; due
to space limitations we only discuss it briefly.
We report on our preliminary evaluation of lgPR on a real
dataset from NCBI/NIH. For some navigational queries, we
apply lgPR to the corresponding RG and use the ranking
distribution for lgPR to illustrate that lgPR indeed discriminates
among the TOS objects. We also apply the original
PR metric to the object graph of life science data (against
which we evaluate the query). We compare with applying
lgPR to the actual RG to illustrate that lgPR and PR produce
dissimilar rankings.
Finally, we report on an initial user experiment. We consider
a set of complex queries typical of a scientist searching
for gene related PubMed publications, and the Top K
results of a word based ranking technique (Iowa) that has
been shown to be accurate in answering gene queries [13].
We compare the Iowa Top K publications with the lgPR Top
K publications, for some sample gene related queries, using
criteria that reflect both relevance and importance. We use
these criteria to understand the characteristics of lgPR.
The paper is organized as follows: Section 2 describes the
data model, navigational query language and layered DAG
result graph. Section 3 presents PageRank, lgPR, ObjectRank
, and lgOR. 4 reports on preliminary results of an
experimental study with NCBI data and concludes.
DATA MODEL
We briefly describe a data model and navigational query
language for the life science graph. Details are given in [6, 9, 14].
2.1 Data Model for the Life Science Graph
The data model comprises three levels: ontology, source
and data (Figure 2).
Figure 2: A Data Model for the Life Science Graph (three levels connected by mappings: ontology classes such as Gene, Marker, Publication, Nucleotide, Protein and Disease; sources such as OMIM Gene, NCBI Gene, PubMed, OMIM Disease, Swiss Prot, NCBI Protein, NCBI Nucleotide and UniSTS; and the data level).
At the ontology level, a domain ontology describes the universe of discourse, e.g., a gene, a pro-
tein, etc., and the relationships among them. An ontology
graph OG = (C, L
C
) models the domain ontology, where
nodes in C represent classes, and edges in L
C
correspond to
relationships among classes. For example, genes and publications
are classes in OG and the association discuss relates
publications with genes.
In this paper, we only consider
one type of link, isRelatedTo, to capture the semantics of a
relationship; therefore, we omit all link labels.
At the source level, a source graph SG = (S, L_S) describes data sources and links that implement logical classes (C) and associations (L_C) in OG, respectively. For example, PubMed and Entrez Gene are sources that implement the logical classes publications and genes, respectively. A mapping defines logical classes in C in terms of the sources in S that implement them. A link between sources represents a hyper-link, a service or an application that connects these two sources. At the data level, a Data Graph is a graph (D, L_D), where D is a set of data entries and L_D is a set of references between entries. A mapping m_S establishes which data entries in D are published by source S.
2.2 Navigational Query Language
We define a query as a path expression over the alphabet C in OG, where each class occurrence can optionally be annotated with a Boolean expression. The simplest Boolean expression is the comparison of a Field to a particular value. In this paper, a field can be either source or Object content, and the relational operators can be "=" for source and "contains" for Object content. A condition over source and the relational operator "=", (source = "name-of-source"), restricts the query to some specific sources that implement the class. A condition on Object content and the relational operator "contains" specifies the set of keywords that must occur within objects in the Data Graph. The wild-card symbol matches any class and the "." represents any relationship.
The query: Retrieve publications that are related to the gene "tnf or aliases in human" in OMIM or Entrez Gene, and are reached by traversing one intermediate resource, is expressed in the navigational query language as follows:
Q = Gene[Object content contains {"tnf" and aliases in human} and source = OMIM or Entrez Gene] Publication
The answer to a query Q is defined at the three levels of the data model. It comprises three sets of paths: OG(Q), SG(Q) and DG(Q). The meaning of query Q with respect to the ontology graph OG, OG(Q), is the set of simple paths in OG that correspond to words in the language induced by the regular expression Q. The meaning of the query with respect to the source graph SG, SG(Q), is the set of all simple paths in SG that correspond to mappings of the paths in OG(Q). Finally, the answer for query Q with respect to the data graph DG, DG(Q), is the set of simple paths in DG that are the result of mapping the paths in SG(Q) using the mapping function m_S. A simple path does not repeat (revisit) the same class, data source or data entry (in the same path).
The queries that are presented in this section are typical
queries posed by researchers. At present, there are no
query evaluation engines to answer navigational queries and
researchers must rely on manual navigation via browsers or
they must write scripts; the latter involves labor to keep
writing the scripts and the scripts may be inefficient in answering
these queries.
2.3 Result Graph
The union of paths in DG(Q) is the result graph RG. We note that for our query language, all the paths that satisfy a query are of the same length, i.e., all the paths in the sets OG(Q), SG(Q) and DG(Q) are of the same length. We model a result graph RG_Q = (D_RG, L_RG) for a query Q as a layered directed acyclic graph comprising k layers, L_1, ..., L_k, where k is determined by the query. The set of nodes D_RG corresponds to the union of the data entries that appear in the paths in DG(Q). L_RG represents the links among these data entries. A layer L_i is composed of the union of the data entries in the paths of DG(Q) that appear in the i-th position of the paths. The data entries in the k-th layer are called the target objects and they form the target object set (TOS) of the RG.
Note that since the result graph has multiple paths, and
since a source may occur in different layers of these paths,
the same data entry may appear multiple times in the different
layers, depending on its connectivity to other data
entries. In this case, each occurrence of the data entry is
represented independently within each layer/path in which
it occurs. The result graph framework distinguishes the different
roles (intermediate node, answer node) which can be
played by the same node in the result graph.
Figure 1 is a layered RG for the following query: Retrieve
publications related to the gene "tnf " traversing one intermediate
source; it has three layers. The first layer corresponds
to the genes in the sources OMIM Gene and Entrez Gene
that are related to the keyword "tnf". The second layer are
the entries in the sources NCBI Protein, Swiss Prot or NCBI
Nucleotide that are reached by objects in the first layer. Finally
, the target objects in the third layer (TOS) are the
publications in PubMed that are linked to the objects in
the second layer.
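To make the construction of the layered RG concrete, here is a small sketch (ours; the paper notes that no query evaluation engine for such queries exists, so nothing here is implied to be the authors' system) that materializes the RG for one source path. It assumes the data graph is given as an adjacency list over object identifiers together with a map from each object to the source that publishes it; the function name and representation are illustrative.

    def build_result_graph(data_graph, source_of, source_path, start_filter):
        # data_graph: object id -> list of linked object ids (data-level links).
        # source_of: object id -> name of the source publishing that entry.
        # source_path: e.g. ["Entrez Gene", "NCBI Protein", "PubMed"].
        # start_filter: predicate selecting entries admitted to the first layer.
        layers = [[o for o in data_graph
                   if source_of.get(o) == source_path[0] and start_filter(o)]]
        edges = {}
        # forward pass: follow links source by source
        for src in source_path[1:]:
            nxt, seen = [], set()
            for u in layers[-1]:
                targets = [v for v in data_graph.get(u, []) if source_of.get(v) == src]
                edges[(len(layers) - 1, u)] = targets
                for v in targets:
                    if v not in seen:
                        seen.add(v)
                        nxt.append(v)
            layers.append(nxt)
        # backward pass: keep only occurrences lying on a complete result path
        for i in range(len(layers) - 2, -1, -1):
            kept = set(layers[i + 1])
            for u in layers[i]:
                edges[(i, u)] = [v for v in edges[(i, u)] if v in kept]
            layers[i] = [u for u in layers[i] if edges[(i, u)]]
        return layers, edges

The last layer returned is the TOS; edges are keyed by (layer index, object id), so an object reached in two different layers is treated as two distinct occurrences, matching the result graph definition above.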
RANKING METRICS
We briefly describe the PageRank metric [11] and then
discuss our metric lgPR for layered DAGs. We briefly discuss
the ObjectRank metric [1] and our extension lgOR.
3.1 PageRank
PageRank assumes that links between pages confer authority
. A link from page i to page j is evidence that i is
suggesting that j is important. The importance from page
i that is contributed to page j is inversely proportional to
the outdegree of i. Let N
i
be the outdegree of page i. The
corresponding random walk on the directed web graph can
be expressed by a transition matrix A as follows:
A[i, j] =
1
N
i
if there is an edge from i to j
0
otherwise
Let E be an arbitrary vector over the webpages, representing
the initial probability of visiting a page. Let d be
the probability of following a link from a page and let (1
-d)
be the probability of a random jump to a page. The PageRank
ranking vector R = dA
R + (1 - d)E. R converges for
the web graph with any E, since generally the web graph is
aperiodic and irreducible[5, 10].
PageRank cannot be directly applied to a layered graph.
A Markov Chain is irreducible if and only if the graph contains
only one strongly connected component. RG is not irreducible since the last layer in RG contains nodes with no outgoing links with respect to the query.
There are several potential ways to extend PageRank for
RG. First, one can ignore links that point to pages without
outgoing edges since these pages do not affect the ranking
of other pages [11]. However we are specifically interested
in obtaining a ranking for the TOS or the objects in the
last layer of the layered result graph RG with no outgoing
links, we cannot ignore these pages.
Another possibility
is modifying the transition matrix probability so that one
takes a random jump from a node in the TOS [5]. This
will ensure that the graph will be irreducible and aperiodic.
However, this would arbitrarily modify RG whose structure
is determined by the query; modifying RG will not assure
that it answers the query. To summarize, the extensions to
PageRank in the literature cannot be applied to the problem
of ranking the target object set TOS of RG.
3.2 Layered Graph PageRank (lgPR)
We describe layered graph PageRank to rank the TOS.
3.2.1 The Metric
Table 1 lists the symbols used to compute lgPR.
Symbol | Meaning
RG(V_RG, E_RG) | Result Graph, a layered DAG, with objects V_RG and edges E_RG
e ∈ E_RG | an edge in E_RG
R | ranking vector for objects in RG
R_ini | initial ranking vector
A_lg | the transition matrix for objects in RG
k | the number of layers in the result graph
OutDeg_RG(u_p) | outdegree from object u at layer p (across multiple link types) to objects in layer p + 1
Table 1: Symbols used by lgPR
The layered DAG result graph RG is represented by a transition matrix A_lg to be defined next. Note that an object in the object graph may occur in multiple paths of the result graph, in different layers; it will be replicated in the transition matrix for each occurrence. Each object u at layer p will have an entry in the transition matrix to some object v at layer q. We denote these occurrences as u_p and v_q respectively.
The ranking vector R, defined by the transition matrix A_lg and the initial ranking vector R_ini, is as follows:
R = A_lg^(k-1) R_ini = (∏_{l=1}^{k-1} A_lg) R_ini
We pick R_ini as follows: the entry for an object in R_ini is 1 if the object is in the start layer and 0 otherwise. The transition matrix A_lg is computed as follows:
A_lg[u_p, v_q] = 1/OutDeg_RG(u_p) if OutDeg_RG(u_p) > 0 and e(u_p, v_q) ∈ E_RG, and 0 otherwise.
Note that we define the outdegree of each object in RG
to only consider those edges that actually occur in RG and
link to objects in the next layer. This reflects the probability
that a user follows an object path in the RG. In contrast,
PageRank considers all outgoing edges from a page.
Unlike PageRank, lgPR differentiates the occurrence of a
data entry in different layers, as well as the links to entries
in subsequent layers; lgPR is thus able to reflect the role of
objects and links (from the entire graph of data entries) in
answering a navigational query. Suppose an object a occurs
in an intermediate layer as well as in the TOS of the RG. It
is possible that a is able to convey authority to other objects
in the TOS. However, a may not rank very high in the TOS
for this query. This characteristic is unique to lgPR. Thus,
the score associated with the object is query dependent to
reflect the role played by the object in the result graph.
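As an illustration (ours, not the authors' implementation) of the metric just defined, the sketch below propagates R_ini forward through the k-1 layer transitions of a toy result graph: each occurrence splits its score evenly over its outgoing RG edges, which is the effect of applying A_lg as defined above. Occurrence identifiers and the toy graph are made up.

    def lgpr(layers, edges):
        # layers: list of lists of node occurrences; layers[0] is the start layer.
        # edges: occurrence -> list of occurrences it links to in the next layer
        #        (RG edges only). Occurrences must be unique per layer.
        score = {u: 1.0 for u in layers[0]}          # R_ini
        for layer in layers[:-1]:                    # k-1 applications of A_lg
            nxt = {}
            for u in layer:
                out = edges.get(u, [])
                if not out or u not in score:
                    continue
                share = score[u] / len(out)          # 1 / OutDeg_RG(u_p)
                for v in out:
                    nxt[v] = nxt.get(v, 0.0) + share
            score = nxt
        return {v: score.get(v, 0.0) for v in layers[-1]}   # scores of the TOS

    # toy 3-layer RG: two genes -> two proteins -> three publications
    layers = [["g1", "g2"], ["p1", "p2"], ["pub1", "pub2", "pub3"]]
    edges = {"g1": ["p1", "p2"], "g2": ["p2"],
             "p1": ["pub1", "pub2"], "p2": ["pub2", "pub3"]}
    print(lgpr(layers, edges))    # pub2 accumulates the most authority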
3.2.2 Convergence Property
This transition matrix A_lg is neither irreducible nor aperiodic, as all rows for target objects contain only 0's. The matrix A_lg is a nilpotent matrix and the number of layers is its index. We provide two definitions (details in [8]).
Definition 3.1. A square matrix A is a nilpotent matrix if there exists some positive integer k such that A^k = 0 but A^(k-1) ≠ 0. The integer k is known as the index of A.
Definition 3.2. Let k be the index of A. {A^(k-1)x, A^(k-2)x, ..., Ax, x} form a Jordan Chain, where x is any vector such that A^(k-1)x ≠ 0.
A characteristic of a nilpotent matrix is that its only eigenvalue is 0. The consequence is that any vector x is an eigenvector of A as long as Ax = 0. From the previous definition, {A_lg^(k-1) R_ini, A_lg^(k-2) R_ini, ..., A_lg R_ini, R_ini} forms a Jordan Chain, since A_lg^(k-1) R_ini ≠ 0.
We state the following two lemmas without proof in this paper.
Lemma 3.3. The Jordan chain {A_lg^(k-1) R_ini, A_lg^(k-2) R_ini, ..., A_lg R_ini, R_ini} is a linearly independent set.
Lemma 3.4. {A_lg^(k-1) R_ini, A_lg^(k-2) R_ini, ..., A_lg R_ini, R_ini} consists of a sequence of ranking vectors. In R_ini, only objects in layer 0 have non-zero scores; in the ranking vector A_lg^m R_ini, only objects in layer m receive non-zero scores.
The final ranking vector by lgPR is the first eigenvector in the Jordan Chain, given the above initial ranking vector R_ini and the transition matrix A_lg. While the traditional PageRank algorithm converges on a ranking over multiple iterations, lgPR can be computed in exactly k - 1 iterations. Note that because RG is a layered DAG, we can use link matrices, each of which represents links between neighboring layers, instead of the single transition matrix A_lg for the entire graph. We also use keywords to filter query answers at each iteration.
3.3 Layered Graph ObjectRank (lgOR)
PR is computed a priori on the complete data graph and is
independent of the RG. A recent technique ObjectRank [1]
extends PageRank to consider relevance of query keywords.
It exploits schema knowledge to determine the correct authority
transfer in a schema graph.
In ObjectRank, the
authority flows between objects according to semantic connections
. It does so by determining an authority weight for
each edge in their schema graph. The ranking is (keyword)
query dependent.
Due to space limitations, we do not provide the details
of the ObjectRank metric. Instead, we briefly describe how
the transition matrix for lgPR can be extended to consider
the authority weights associated with edges that occur in
RG.
Consider a metric layered graph ObjectRank (lgOR). The difference from lgPR is the transition matrix A_OG. It is as follows:
A_OG[u_p, v_q] = α(e ∈ E_RG) if e(u_p, v_q) ∈ E_RG, and 0 otherwise,
where α(e ∈ E_RG) = α(E_SG)/OutDeg(u_p, E_SG) if OutDeg(u_p, E_SG) > 0, and 0 if OutDeg(u_p, E_SG) = 0.
Let the edge between u_p and v_q map to an edge E_SG in the SG. α(E_SG) represents the authority transfer weight associated with E_SG, and OutDeg(u_p, E_SG) is the outdegree of u_p in RG counting only edges of type E_SG.
As discussed in [1], the success of ObjectRank depends on
correctly determining the authority weight to be associated
with each link. Figure 3 (next section) illustrates the source
graph that we use in our evaluation of navigational queries.
For lgOR to be successful, an authority weight may have
to be associated with each link in each result path (type)
in the RG. Experiments with users to determine the correct
authority weights for lgOR is planned for future work.
Currently the importance is computed after query evaluation
. We compute result graph first, then ranking, for
the reason that the transition matrix is defined in terms of
outdegree in the RG. This motivates further research of combination
of two problems, whose ideal solution is to ranking
objects during query evaluation.
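For comparison with the lgPR sketch above, the following variant (ours, with made-up authority weights) propagates scores using the lgOR transition: each RG edge receives the authority transfer weight of its schema-level edge type divided by the typed outdegree of its source occurrence.

    def lgor(layers, edges, edge_type, alpha):
        # edges: occurrence -> list of next-layer occurrences (RG edges).
        # edge_type: (u, v) -> schema edge type E_SG of that RG edge.
        # alpha: schema edge type -> authority transfer weight (assumed given).
        score = {u: 1.0 for u in layers[0]}
        for layer in layers[:-1]:
            nxt = {}
            for u in layer:
                if u not in score:
                    continue
                out = edges.get(u, [])
                typed_outdeg = {}
                for v in out:
                    t = edge_type[(u, v)]
                    typed_outdeg[t] = typed_outdeg.get(t, 0) + 1
                for v in out:
                    t = edge_type[(u, v)]
                    w = alpha[t] / typed_outdeg[t]   # alpha(E_SG) / OutDeg(u_p, E_SG)
                    nxt[v] = nxt.get(v, 0.0) + score[u] * w
            score = nxt
        return {v: score.get(v, 0.0) for v in layers[-1]}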
EXPERIMENTS ON LGPR
We report on experiments on real world data. We show
that the lgPR ranking distribution has the ability to differentiate
among the target objects of the RG and it is different
from PageRank. A user compared the Top K results
of lgPR and a word based ranking (Iowa) [13], using criteria
that reflect both importance and relevance, to determine
their characteristics.
4.1 Experiment Setting
NCBI/NIH is the gatekeeper for biological data produced
using federal funds in the US (www.ncbi.nlm.nih.gov). We consider a source graph SG of 10 data sources and 46 links. Figure 3 presents the source graph used in this task. We used several hundred keywords
to sample data from these sources (the EFetch utility)
and followed links to the other sources (the ELink utility).
We created a data graph of approximately 28.5 million objects
and 19.4 million links. We note that several objects are
machine predicted objects so it is not uncommon that they
have no links. The object identifiers for the data entries
(nodes of the data graph) and the pair of object identifiers
(links) were stored in a DB2 relational database.
Table 2 identifies the queries and keywords that were used
in this experiment. The symbols g, p, n, s refer to classes
gene, publication, nucleotide and SNP, respectively. Note
that the wild card can match all the classes and
sources (in the source graph).
Figure 3: Source Graph for User Evaluation (classes: PubMed, Author, Journal, Year, Lash Terms, Entrez Gene, Geo, OMIM, UniSTS, UniGene, Entrez Protein, dbSNP, CDD and Entrez Nucleotide).
Queries | g.n.p, g.s.p, g.n.s.p, g.s.n.p, g.s.g.n.p, g.s.n.g.p, g. .p, g. . .p, g. . . .p
Keywords | "parkinson disease", "aging", "cancer", "diabetes", "flutamide", "stress", "degenerative joint", "tnf", "insulin", "fluorouracil", "osteoarthritis", "sarcoma"
Table 2: Experiment setting
For each navigational query, the source paths that answer the query were determined using an algorithm described in
[14]. Evaluating the paths in the data graph for each source
path was implemented by SQL queries. Since a result graph
RG could involve multiple source paths whose computation
may overlap, we applied several multiple query optimization
techniques. The SQL queries were executed on DB2 Enterprise
Server V8.2 installed on a 3.2 GHz Intel Xeon processor
with 1GB RAM. The execution time for these queries varied
considerably, depending on the size and shape of RG. If we
consider the query g.n.p with keyword "degenerative joint"
used to filter 'g', one source path was ranked in approximately
1 second. However, the query (g. . . .p) with the
keyword aging used to filter 'g' created a very large result
graph and the execution time for this was approximately
2000 seconds. We note that computing the
high scoring TOS objects of the RG efficiently is a related
but distinct optimization problem.
4.2 lgPR Distribution
We report on the query (g. .p), i.e., paths from genes to
publications via one intermediate source.
Figures 4 and 5 report on the distribution of scores produced
by the lgPR metric for the target objects in TOS
for some representative queries. The first 10 bars represent
scores in the range (0.00-0.01) to (0.09-0.1) and the last bar
represents the range (0.1-1.0).
Figure 4 shows that a small number of objects have a very high score and the majority
have a low score. As expected, many queries and keywords
produced distributions that were similar to Figure 4. Most
of the objects in TOS, in this case approx. 12,000 objects,
had a very low score, and less than 200 objects had a score
in the range (0.1-1.0).
However, we made an interesting observation that some
queries produced distributions that were similar to Figure
5. In this case, while many of the results (approx. 120) had
low scores in the range (0.00-0.02), 46 objects had scores in
the range (0.1-1.0) and 120 objects had scores in between.
Figure 4: Histogram for query g[Object content contains "aging"] p (number of TOS objects per lgPR score bin, bins .00-.01 through .10-1.00: 12183, 677, 404, 172, 105, 62, 52, 43, 34, 19, 197).
Figure 5: Histogram for query g[Object content contains "degenerative joint"] p (number of TOS objects per lgPR score bin, bins .00-.01 through .10-1.00: 81, 41, 12, 4, 32, 11, 8, 0, 2, 1, 46).
Finally, we compared the ranking produced by lgPR and
PageRank. We apply PageRank to the entire data graph
of 28.5 million objects and 19.4 million links described in
section 4.1. For the three sample queries (described in the
next section), there are no PubMed IDs in common among the Top 25, 50 or 100 for each of the queries, except that the top 50 of the query with Lash term "allele" have 1 PubMed publication in common, and the top 100 of the same query have 3
in common. We speculate that the link structure of the RG
is distinct compared to the link structure of the data graph;
hence applying lgPR to the RG results in dissimilar ranking
compared to a priori applying PageRank to the entire data
graph.
We summarize that the lgPR score can identify those objects with a very low ranking that may not be of interest to the user, and it can also be used to discriminate amongst objects in the TOS whose rankings have a much lower
variation of scores. Finally, lgPR ranking is not the same
as that produced by PageRank applied to the entire data
graph.
4.3 User Evaluation
In our user evaluation of lgPR, we consider a set of complex
queries typical of a scientist searching for gene related
PubMed publications, and the Top K results of a word based
ranking technique (Iowa) that has been shown to be accurate
in answering gene queries [13]. We compare the Iowa Top
K publications with the lgPR Top K publications, for some
sample gene related queries.
We use criteria that reflect
both relevance and importance to identify characteristics of
lgPR.
Researchers are particularly interested in genetic and phenotypic variations associated with genes; these phenomena are often studied in the context of diseases, in a chromosomal region identified by a genomic marker (a unique known sequence) associated with the disease. Genetic and phenotypic knowledge are described using terms of the Lash controlled
vocabulary [7]. We focus on a branch of the Lash
vocabulary that relates to phenotypes and population genetics
. Terms of interest include linkage disequilibrium,
quantitative trait locus and allele. Figure 6 presents
a portion of the Lash controlled vocabulary (term hierarchy
). LD is not listed as the synonym to the term linkage
disequilibrium, because LD may often refer to another concept
. In the following experiment, we did not consider the
plural form of some terms, such as alleles to allele, but this
can be extended in the future studies.
1. EPIGENETIC ALTERATION
2. GENOMIC SEGMENT LOSS
3. GENOMIC SEGMENT GAIN
4. GENOMIC SEQUENCE ALTERATION
5. PHENOTYPIC ASSOCIATION
(synonym: phenotype, trait)
(a) locus association (synonym: locus, loci)
i. linkage
ii. quantitative trait locus (synonym: QTL)
(b) allelic association (synonym: allele)
i. linkage disequilibrium
Figure 6: Branch 5 in Hierarchical controlled vocabulary
of genetics terms (Lash Controlled Vocabulary
)
The navigational query used in our evaluation experiment
can be described in English as follows: "Return all publications
in PubMed that are linked to an Entrez Gene entry
that is related to the human gene TNF (or its aliases). The
entry in PubMed must contain an STS marker and a term
from the Lash controlled vocabulary."
We used the query term "TNF AND 9606[TAXID]" (the Taxonomy ID for human is 9606 [4], and this term was used to select human genes) to
sample data from Entrez Gene. We then followed 8 paths to
PubMed. Table 3 reports on the number of entries in Entrez
Gene as well as the cardinality of the TOS for some sample
queries (we use g["tnf" and aliases in human] to denote g[Object content contains {"tnf" and aliases in human}]; the entries in the first column of Table 3 are similar).
Query | Cardinality of TOS
g["tnf" and aliases in human] | 649
g["tnf" and aliases in human] p[STS marker and "allele"] | 2777
g["tnf" and aliases in human] p[STS marker and "linkage disequilibrium"] | 257
g["tnf" and aliases in human] p[STS marker and "quantitative trait locus"] | 22
Table 3: Cardinality of TOS
We briefly describe the word-based ranking method (Iowa) that focuses on ranking documents retrieved by PubMed
for human gene queries [13], so that relevant documents are
ranked higher than non-relevant documents. This method
relies on using post-retrieval queries (ranking queries), automatically
generated from an external source, viz., Entrez
Gene (Locus Link), to rank retrieved documents. The research
shows that ranking queries generated from a combination
of the Official Gene Symbol, Official Gene Name,
Alias Symbols, Entrez Summary, and Protein Products (optional
) were very effective in ranking relevant documents
higher in the retrieved list. Documents and ranking queries
are represented using the traditional vector-space representation
, commonly used in information retrieval.
Given a
gene, the cosine similarity score between the ranking query
vector for the gene and each document vector is computed.
Cosine scores are in the [0, 1] range and documents assigned
a higher score are ranked higher than documents with a
lower score. In the absence of summary and protein product
information, ranking queries generated from the gene
symbol, name and aliases are used to rank retrieved documents
. In this experimental study we work with the Bio Web documents alone.
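For concreteness, here is a minimal sketch (ours, not the Iowa system) of this style of word-based ranking: the ranking query assembled from the gene symbol, name, aliases and summary and each retrieved abstract are represented as term-frequency vectors, and documents are sorted by cosine similarity. Real implementations add TF-IDF weighting, stop-word removal and stemming, which are omitted here; the sample query and documents are made up.

    import math
    from collections import Counter

    def vec(text):
        # crude bag-of-words term-frequency vector
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(c * b.get(t, 0) for t, c in a.items())
        na = math.sqrt(sum(c * c for c in a.values()))
        nb = math.sqrt(sum(c * c for c in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def rank_documents(ranking_query, docs):
        # docs: PMID -> abstract text; returns PMIDs by decreasing similarity.
        q = vec(ranking_query)
        scores = {pmid: cosine(q, vec(text)) for pmid, text in docs.items()}
        return sorted(scores, key=scores.get, reverse=True)

    query = "TNF tumor necrosis factor cachectin proinflammatory cytokine"
    docs = {"111": "tumor necrosis factor polymorphisms and linkage disequilibrium",
            "222": "cell cycle regulation in yeast"}
    print(rank_documents(query, docs))   # "111" ranks first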
We use the following criteria to compare the Top K results
from Iowa and lgPR, to understand basic characteristics of
the two methods. Criteria labeled R appear to judge the
relevance of the paper and those labeled I appear to judge
importance. Some criteria appear to judge both and are
labeled R,I.
1. R: Does the title or abstract of the article contain the
term TNF or its aliases in human? Does the article
discuss immune response?
2. R,I: Does the article contain any disease related terms?
Does the article contain any genomic components (genes,
markers, snps, sequences, etc.)?
3. R,I: Does the article discuss biological processes related
to the Lash terms?
4. R,I: What is the connectivity of the article to gene
entries in Entrez Gene that are related to TNF? Note
that as shown in Table 3, there are 649 Entrez Gene
entries that are related to human gene TNF. Each
PubMed publication was reached by following a result
path through the result graph RG that started
with one of these Entrez Gene entries. However, some
PubMed publications may have been reached along
multiple paths in the RG reflecting much greater connectivity
.
5. I: What is the category of the article (review, survey,
etc.). Does the article address some specific topics or
is it a broad brush article?
6. I: Where did the article appear? What is the journal
impact factor? Has the article been highly cited?
PMID | Rel. (0-5) | Imp. (0-5) | Criterion 1 | 2 | 3 | 4 | 5 | 6
16271851 | 4 | 2 | H | M | H | L | L | L
1946393 | 4 | 4 | H | L | H | M | H | H
12217957 | 4 | 4 | H | H | H | L | H | H
12545017 | 4 | 4 | H | M | H | L | H | H
9757913 | 3 | 3 | H | L | H | L | H | H
8882412 | 4 | 4 | H | M | H | L | H | H
2674559 | 4 | 3 | H | M | H | L | H | L
7495783 | 4 | 3 | H | H | H | L | H | L
15976383 | 5 | 4 | H | H | H | H | H | L
10698305 | 3 | 3 | H | L | H | L | H | H
Table 4: Relevance and Importance of Top 10 Publications Reported by Iowa Ranking Method
PMID | Rel. (0-5) | Imp. (0-5) | Criterion 1 | 2 | 3 | 4 | 5 | 6
7560085 | 5 | 5 | H | H | H | H | H | H
12938093 | 5 | 5 | H | H | H | H | H | H
10998471 | 3 | 3 | M | H | H | L | H | L
11290834 | 5 | 4 | H | H | H | H | H | L
11501950 | 4 | 3 | H | H | H | L | H | L
11587067 | 5 | 4 | H | H | H | H | H | L
11845411 | 2 | 4 | L | H | H | L | H | H
12133494 | 5 | 4 | H | H | H | H | H | L
12594308 | 4 | 4 | H | H | H | L | H | H
12619925 | 5 | 5 | H | H | H | H | H | H
Table 5: Relevance and Importance of Top 10 Publications Reported by lgPR Ranking Method
Tables 4 and 5 report the Top 10 publications in PubMed that are linked to an Entrez Gene entry that is related to human gene TNF and contain the term linkage disequilibrium. The first column reports the PubMed identifiers (PMIDs) of the Top 10 publications returned by the Iowa and the lgPR ranking methods. The human evaluation results are reported in the fourth to the ninth columns using the six criteria listed above. An H indicates that the publication is highly matched to the corresponding criterion (M and L represent medium and low, respectively). Specifically, an H indicates:
1. The PubMed entry is linked to the human gene TNF
with Entrez Gene identifier GeneID:7124.
2. The publication contains both disease-related terms and genomic components.
3. The publication contains multiple Lash terms.
4. The connectivity is high, if there are more than five
related gene entries linked to the publication.
5. A research article is considered more important than a review or a survey, and a more specific topic is better.
6. The article is published in a journal with the impact
factor higher than 10.0, or the article is cited by ten
or more publications.
We then score the relevance (rel.) and the importance (imp.) in the second and the third columns by combining the number of H and M judgments reported for the six criteria. Criterion 1 weighs twice as much as the other five criteria. We use a number between 0 and 5, in which 5 indicates that the corresponding PubMed entry is highly relevant or highly important to the given query. While both rankings appear to identify "good" documents, Iowa appears to favor relevant documents based on their word content. lgPR appears to exploit the link structure of the RG and to favor publications with higher interconnectivity to TNF-related entries in Entrez Gene. The publications retrieved by lgPR are more likely to contain disease-related terms or genomic components. The Iowa ranking has a primary focus on the relevance of documents (based on document contents); it is not able to differentiate the importance of these relevant documents. In contrast, lgPR has a primary focus on importance (based on the link structure of the result graph); it is not able to differentiate the relevance of important documents. We conclude that further study is needed to determine how we can exploit the characteristics of both methods.
There is no intersection between the two sets of Top 10 publications returned by these two ranking methods. The first common PMID is 7935762, which is ranked 24 by the Iowa method and 21 by the lgPR method.
CONCLUSIONS
We have defined a model for life science sources. The answer to a navigational query is the set of target objects (TOS) of a layered graph, the Result Graph (RG). We define two ranking metrics, layered graph PageRank (lgPR) and layered graph ObjectRank (lgOR). We also report on the results of experiments on real-world data from NCBI/NIH. We show that the ranking distribution of lgPR indeed discriminates among the TOS objects of the RG. The lgPR distribution is not the same as applying PageRank a priori to the data graph. We perform a user experiment on complex queries typical of a scientist searching for gene-related PubMed publications, comparing against the Top K results of a word-based ranking technique (Iowa) that has been shown to be accurate in answering gene queries. Using criteria that judge both relevance and importance, we explore the characteristics of these two rankings. Our preliminary evaluation indicates there may be a benefit to a meta-ranking.
We briefly presented layered graph ObjectRank (lgOR), which is an extension of ObjectRank. The challenge of ObjectRank is determining the correct authority weight for each edge. For lgOR, we need to find the weights for the edges that occur in the RG. Experiments with users to determine the correct authority weights for lgOR are planned for future work. We expect that IR techniques can be used to determine authority weights.
REFERENCES
[1] Andrey Balmin, Vagelis Hristidis, and Yannis Papakonstantinou. ObjectRank: Authority-based keyword search in databases. In VLDB, pages 564-575, 2004.
[2] Magdalini Eirinaki, Michalis Vazirgiannis, and Dimitris Kapogiannis. Web path recommendations based on page ranking and markov models. In WIDM '05: Proceedings of the 7th annual ACM international workshop on Web information and data management, pages 2-9, New York, NY, USA, 2005. ACM Press.
[3] Taher H. Haveliwala. Topic-sensitive pagerank. In WWW '02: Proceedings of the 11th international conference on World Wide Web, pages 517-526, New York, NY, USA, 2002. ACM Press.
[4] Homo sapiens in NCBI Taxonomy Browser. www.ncbi.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=Info&id=9606.
[5] Sepandar D. Kamvar, Taher H. Haveliwala, Christopher D. Manning, and Gene H. Golub. Extrapolation methods for accelerating pagerank computations. In WWW, pages 261-270, 2003.
[6] Z. Lacroix, L. Raschid, and M.-E. Vidal. Semantic model to integrate biological resources. In International Workshop on Semantic Web and Databases (SWDB 2006), Atlanta, Georgia, USA, 3-7 April 2006.
[7] Alex Lash, Woei-Jyh Lee, and Louiqa Raschid. A methodology to enhance the semantics of links between PubMed publications and markers in the human genome. In Fifth IEEE Symposium on Bioinformatics and Bioengineering (BIBE 2005), pages 185-192, Minneapolis, Minnesota, USA, 19-21 October 2005.
[8] Carl D. Meyer. Matrix Analysis and Applied Linear Algebra. Society for Industrial and Applied Mathematics, 2000.
[9] G. Mihaila, F. Naumann, L. Raschid, and M. Vidal. A data model and query language to explore enhanced links and paths in life sciences data sources. Proceedings of the Workshop on Web and Databases, WebDB, Maryland, USA, 2005.
[10] Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. Cambridge University Press, New York, NY, USA, 1995.
[11] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project, 1998.
[12] Matthew Richardson and Pedro Domingos. Combining link and content information in web search. In Web Dynamics '04: Web Dynamics - Adapting to Change in Content, Size, Topology and Use, pages 179-194. Springer, 2004.
[13] Aditya Kumar Sehgal and Padmini Srinivasan. Retrieval with gene queries. BMC Bioinformatics, 7:220, 2006.
[14] Maria-Esther Vidal, Louiqa Raschid, Natalia Márquez, Marelis Cárdenas, and Yao Wu. Query rewriting in the semantic web. In InterDB, 2006.
| Navigational Query;Link Analysis;PageRank;Ranking |
165 | Ranking Web Objects from Multiple Communities | Vertical search is a promising direction as it leverages domain-specific knowledge and can provide more precise information for users. In this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine. More specifically, we focus on this problem in cases when objects lack relationships between different Web communities , and take high-quality photo search as the test bed for this investigation. We proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aims at minimizing score differences of duplicate photos in different forums . Both intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in cases we have described. Though the experiments were conducted on high-quality photo ranking , the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking | INTRODUCTION
Despite numerous refinements and optimizations, general
purpose search engines still fail to find relevant results for
many queries. As a new trend, vertical search has shown
promise because it can leverage domain-specific knowledge
and is more effective in connecting users with the information
they want.
There are many vertical search engines,
including some for paper search (e.g. Libra [21], Citeseer
[7] and Google Scholar [4]), product search (e.g. Froogle
[5]), movie search [6], image search [1, 8], video search [6],
local search [2], as well as news search [3]. We believe the
vertical search engine trend will continue to grow.
Essentially, building vertical search engines includes data
crawling, information extraction, object identification and
integration, and object-level Web information retrieval (or
Web object ranking) [20], among which ranking is one of the
most important factors. This is because it deals with the
core problem of how to combine and rank objects coming
from multiple communities.
Although object-level ranking has been well studied in
building vertical search engines, there are still some kinds
of vertical domains in which objects cannot be effectively
ranked. For example, algorithms that evolved from PageRank
[22], PopRank [21] and LinkFusion [27] were proposed
to rank objects coming from multiple communities, but can
only work on well-defined graphs of heterogeneous data.
"Well-defined" means that like objects (e.g. authors in paper
search) can be identified in multiple communities (e.g.
conferences). This allows heterogeneous objects to be well
linked to form a graph through leveraging all the relationships
(e.g. cited-by, authored-by and published-by) among
the multiple communities.
However, this assumption does not always stand for some
domains. High-quality photo search, movie search and news
search are exceptions. For example, a photograph forum
website usually includes three kinds of objects: photos, authors
and reviewers.
Yet different photo forums seem to lack any relationships between them, as there are no cited-by relationships. This makes it difficult to judge whether two authors are the same author, or whether two photos are indeed identical photos. Consequently, although each photo has a rating score in a forum, it is non-trivial to rank photos coming from different photo forums. Similar problems also exist in movie search and news search. Although two movie titles can be identified as the same one by title and director in different movie discussion groups, it is non-trivial to combine rating scores from different discussion groups and rank movies effectively. We call such non-trivial object relationships, in which identification is difficult, incomplete relationships.
Other related work includes rank aggregation for the Web
[13, 14], and learning algorithm for rank, such as RankBoost
[15], RankSVM [17, 19], and RankNet [12]. We will contrast
differences of these methods with the proposed methods after
we have described the problem and our methods.
We will specifically focus on the Web object-ranking problem in cases that lack object relationships or have incomplete object relationships, and take high-quality photo search as the test bed for this investigation. In the following,
we will introduce rationale for building high-quality photo
search.
1.1
High-Quality Photo Search
In the past ten years, the Internet has grown to become
an incredible resource, allowing users to easily access a huge
number of images. However, compared to the more than 1
billion images indexed by commercial search engines, actual
queries submitted to image search engines are relatively minor
, and occupy only 8-10 percent of total image and text
queries submitted to commercial search engines [24]. This
is partially because user requirements for image search are
far less than those for general text search. On the other
hand, current commercial search engines still cannot well
meet various user requirements, because there is no effective
and practical solution to understand image content.
To better understand user needs in image search, we conducted
a query log analysis based on a commercial search
engine.
The result shows that more than 20% of image
search queries are related to nature and places and daily
life categories. Users apparently are interested in enjoying
high-quality photos or searching for beautiful images of locations
or other kinds. However, such user needs are not
well supported by current image search engines because of
the difficulty of the quality assessment problem.
Ideally, the most critical part of a search engine -- the ranking function -- can be simplified as consisting of two key factors: relevance and quality. For the relevance factor, current commercial image search engines mostly return images that are quite relevant to queries, except for some ambiguity. However, as to the quality factor, there is still no way to give an optimal rank to an image. Though content-based image quality assessment has been investigated over many years [23, 25, 26], it is still far from ready to provide a realistic quality measure in the immediate future.
Seemingly, it looks pessimistic to build an image search engine that can fulfill the potentially large requirement of enjoying high-quality photos. Various proliferating Web communities, however, show us that people today have created and shared a lot of high-quality photos on the Web on virtually any topic, which provides a rich source for building a better image search engine.
In general, photos from various photo forums are of higher quality than personal photos, and are also much more appealing to public users. In addition, photos uploaded to photo forums generally require rich metadata about title, camera setting, category, and description to be provided by photographers. These metadata are actually the most precise descriptions for photos and undoubtedly can be indexed to help search engines find relevant results.
More important, there are volunteer users in Web communities
actively providing valuable ratings for these photos.
The rating information is generally of great value in solving
the photo quality ranking problem.
Motivated by such observations, we have been attempting to build a vertical photo search engine by extracting rich metadata and integrating information from various photo Web forums. In this paper, we specifically focus on how to rank photos from multiple Web forums.
Intuitively, the rating scores from different photo forums
can be empirically normalized based on the number of photos
and the number of users in each forum. However, such
a straightforward approach usually requires large manual
effort in both tedious parameter tuning and subjective results
evaluation, which makes it impractical when there are
tens or hundreds of photo forums to combine. To address
this problem, we seek to build relationships/links between
different photo forums. That is, we first adopt an efficient
algorithm to find duplicate photos which can be considered
as hidden links connecting multiple forums. We then formulate
the ranking challenge as an optimization problem,
which eventually results in an optimal ranking function.
1.2
Main Contributions and Organization.
The main contributions of this paper are:
1. We have proposed and built a vertical image search engine
by leveraging rich metadata from various photo
forum Web sites to meet user requirements of searching
for and enjoying high-quality photos, which is impossible
in traditional image search engines.
2. We have proposed two kinds of Web object-ranking algorithms for photos with incomplete relationships, which can automatically and efficiently integrate as many Web communities with rating information as possible, and achieve a qualitative result equal to that of the manually tuned fusion scheme.
The rest of this paper is organized as follows. In Section
2, we present in detail the proposed solutions to the ranking
problem, including how to find hidden links between
different forums, normalize rating scores, obtain the optimal
ranking function, and contrast our methods with some
other related research. In Section 3, we describe the experimental
setting and experiments and user studies conducted
to evaluate our algorithm. Our conclusion and a discussion
of future work is in Section 4.
It is worth noting that although we treat vertical photo
search as the test bed in this paper, the proposed ranking
algorithm can also be applied to rank other content that
includes video clips, poems, short stories, drawings, sculptures
, music, and so on.
ALGORITHM
The difficulty of integrating multiple Web forums is in
their different rating systems, where there are generally two
kinds of freedom. The first kind of freedom is the rating
interval or rating scale including the minimal and maximal
ratings for each Web object. For example, some forums use
a 5-point rating scale whereas other forums use 3-point or
10-point rating scales. It seems easy to fix this freedom, but
detailed analysis of the data and experiments show that it
is a non-trivial problem.
The second kind of freedom is the varying rating criteria
found in different Web forums. That is, the same score does
not mean the same quality in different forums. Intuitively, if
we can detect same photographers or same photographs, we
can build relationships between any two photo forums and
therefore can standardize the rating criterion by score normalization
and transformation. Fortunately, we find that
quite a number of duplicate photographs exist in various
Web photo forums. This fact is reasonable when considering
that photographers sometimes submit a photo to more
than one forum to obtain critiques or in hopes of widespread
publicity. In this work, we adopt an efficient duplicate photo
detection algorithm [10] to find these photos.
The proposed methods below are based on the following considerations. To overcome the ranking problem, a standardized rating criterion is needed rather than merely a reasonable one. Therefore, we can take
a large scale forum as the reference forum, and align other
forums by taking into account duplicate Web objects (duplicate
photos in this work). Ideally, the scores of duplicate
photos should be equal even though they are in different
forums. Yet we can deem that scores in different forums
except for the reference forum can vary in a parametric
space. This can be determined by minimizing the objective
function defined by the sum of squares of the score differences
. By formulating the ranking problem as an optimization
problem that attempts to make the scores of duplicate
photos in non-reference forums as close as possible to those
in the reference forum, we can effectively solve the ranking
problem.
For convenience, the following notations are employed. $S_{ki}$ and $\bar{S}_{ki}$ denote the total score and mean score of the $i$th Web object (photo) in the $k$th Web site, respectively. The total score refers to the sum of the various rating scores (e.g., novelty rating and aesthetic rating), and the mean score refers to the mean of the various rating scores. Suppose there are a total of $K$ Web sites. We further use
$$\{S^{kl}_i \mid i = 1, \ldots, I_{kl};\; k, l = 1, \ldots, K;\; k \neq l\}$$
to denote the set of scores for Web objects (photos) in the $k$th Web forum that are duplicates with the $l$th Web forum, where $I_{kl}$ is the total number of duplicate Web objects between these two Web sites. In general, score fusion can be seen as the procedure of finding $K$ transforms
$$f_k(S_{ki}) = \tilde{S}_{ki}, \quad k = 1, \ldots, K$$
such that $\tilde{S}_{ki}$ can be used to rank Web objects from different Web sites.

Figure 1: Web community integration. Each Web community forms a subgraph, and all communities are linked together by some hidden links (dashed lines).

The objective function described in the above paragraph can then be formulated as
$$\min_{\{f_k \mid k = 2, \ldots, K\}} \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} w^k_i \left[ S^{1k}_i - f_k(S^{k1}_i) \right]^2 \qquad (1)$$
where we use $k = 1$ as the reference forum and thus $f_1(S_{1i}) = S_{1i}$. $w^k_i\ (\geq 0)$ is the weight coefficient that can be set heuristically according to the numbers of voters (reviewers or commenters) in both the reference forum and the non-reference forum. The more reviewers, the more popular the photo is and the larger the corresponding weight $w^k_i$ should be. In this work, we do not inspect the problem of how to choose $w^k_i$ and simply set them to one. But we believe the proper use of $w^k_i$, which leverages more information, can significantly improve the results.

Figure 1 illustrates the aforementioned idea. Web Community 1 is the reference community. The dashed lines are links indicating that the two linked Web objects are actually the same. The proposed algorithm will try to find the best $f_k$ ($k = 2, \ldots, K$), which has certain parametric forms according to certain models, so as to minimize the cost function defined in Eq. 1; the summation is taken over all the red dashed lines.
We will first discuss the score normalization methods in Section 2.2, which serve as the basis for the following work. Before we describe the proposed ranking algorithms, we first introduce a manually tuned method in Section 2.3, which is laborious and even impractical when the number of communities becomes large. In Section 2.4, we will briefly explain how to precisely find duplicate photos between Web forums. Then we will describe the two proposed methods, linear fusion and non-linear fusion, and a performance measure for result evaluation in Section 2.5. Finally, in Section 2.6 we will discuss the relationship of the proposed methods with some other related work.
2.2
Score Normalization
Since different Web (photo) forums on the Web usually
have different rating criteria, it is necessary to normalize
them before applying different kinds of fusion methods. In
addition, as there are many kinds of ratings, such as ratings
for novelty, ratings for aesthetics etc, it is reasonable
to choose a common one -- total score or average score -- that can always be extracted in any Web forum or calculated from the corresponding ratings. This allows the normalization method on the total score or average score to be viewed as an impartial rating method between different Web forums.
It is straightforward to normalize average scores by linearly transforming them to a fixed interval. We call this kind of score the Scaled Mean Score. The difficulty, however, of using this normalization method is that, if there are only a few users rating an object, say a photo in a photo forum, the average score for the object is likely to be spammed or skewed.
The total score can avoid such drawbacks and contains more information, such as a Web object's quality and popularity. The problem is thus how to normalize total scores in different Web forums. The simplest way may be normalization by the maximal and minimal scores. The drawback of this normalization method is that it is not robust, or in other words, it is sensitive to outliers.
To make the normalization insensitive to unusual data, we propose the Mode-90% Percentile normalization method. Here, the mode score represents the total score that has been assigned to more photos than any other total score, and the high-percentile score (e.g., 90%) represents the total score below which that percentile of images falls. This normalization method utilizes the mode and the 90% percentile as two reference points to align two rating systems, which makes the distributions of total scores in different forums more consistent. The underlying assumption, for example across different photo forums, is that even though the qualities of top photos in different forums may vary greatly and depend less on the forum quality, the distributions of photos of middle-level quality (from the mode to the 90% percentile) should be almost the same, up to a degree of freedom that reflects the rating criterion (strictness) of each Web forum. Photos of this middle level in a Web forum usually account for more than 70% of the total photos in that forum.
We will give more detailed analysis of the scores in Section
3.2.
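As a concrete illustration of the Mode-90% Percentile idea, the following minimal sketch maps the mode of a forum's total-score distribution and its 90th percentile onto two fixed reference points. The reference values (5 and 8, mirroring the choice reported in Section 3.2), the integer rounding used to estimate the mode, and all names are assumptions of this sketch rather than the system's actual implementation.

```python
import numpy as np

def mode_percentile_normalize(total_scores, ref_mode=5.0, ref_pct=8.0, pct=90):
    """Linearly map scores so that the distribution's mode lands on ref_mode
    and its pct-th percentile lands on ref_pct (a two-point alignment)."""
    scores = np.asarray(total_scores, dtype=float)
    # Crude mode estimate: the most frequent value after rounding.
    values, counts = np.unique(np.round(scores), return_counts=True)
    mode = values[np.argmax(counts)]
    high = np.percentile(scores, pct)
    if high == mode:
        # Degenerate distribution; fall back to the identity mapping.
        return scores
    scale = (ref_pct - ref_mode) / (high - mode)
    return ref_mode + scale * (scores - mode)
```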
2.3
Manual Fusion
The Web movie forum, IMDB [16], proposed to use a Bayesian-ranking function to normalize rating scores within one community. Motivated by this ranking function, we propose this manual fusion method: for the $k$th Web site, we use the following formula
$$\tilde{S}_{ki} = \lambda_k \left( \frac{n_k \bar{S}_{ki}}{n_k + \bar{n}_k} + \frac{\bar{n}_k \bar{S}_k}{n_k + \bar{n}_k} \right) \qquad (2)$$
to rank photos, where $n_k$ is the number of votes and $\bar{n}_k$, $\bar{S}_k$ and $\lambda_k$ are three parameters. This ranking function first takes a balance between the original mean score $\bar{S}_{ki}$ and a reference score $\bar{S}_k$ to get a weighted mean score which may be more reliable than $\bar{S}_{ki}$. Then the weighted mean score is scaled by $\lambda_k$ to get the final score $\tilde{S}_{ki}$.
For $n$ Web communities, there are then about $3n$ parameters in $\{(\lambda_k, \bar{n}_k, \bar{S}_k) \mid k = 1, \ldots, n\}$ to tune. Though this method can achieve pretty good results after careful and thorough manual tuning of these parameters, when $n$ becomes increasingly large, say when there are tens or hundreds of Web communities crawled and indexed, this method becomes more and more laborious and eventually impractical. It is therefore desirable to find an effective fusion method whose parameters can be automatically determined.
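The manually tuned formula of Eq. (2) is simple to apply once its per-forum parameters have been chosen by hand. A minimal sketch follows; the argument names (lam for the scale factor, n_ref and s_ref for the reference vote count and reference score) are our own labels for the reconstructed symbols, not the authors' notation.

```python
def manual_fusion_score(mean_score, n_votes, lam, n_ref, s_ref):
    """Eq. (2): balance the photo's own mean score against a reference score,
    weighting by the photo's vote count, then scale by a per-forum factor."""
    weighted_mean = (n_votes * mean_score + n_ref * s_ref) / (n_votes + n_ref)
    return lam * weighted_mean
```

With tens of forums this still leaves roughly 3n numbers to pick by hand, which is exactly the burden the automatic methods below are meant to remove.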
2.4
Duplicate Photo Detection
We use Dedup [10], an efficient and effective duplicate image detection algorithm, to find duplicate photos between any two photo forums. This algorithm uses a hash function to map a high-dimensional feature to a 32-bit hash code (see below for how to construct the hash code). Its computational complexity to find all the duplicate images among $n$ images is about $O(n \log n)$. The low-level visual feature for each photo is extracted on $k \times k$ regular grids. Based on all features extracted from the image database, a PCA model is built. The visual features are then transformed to a relatively low-dimensional and zero-mean PCA space, of 29 dimensions in our system. Then the hash code for each photo is built as follows: each dimension is transformed to one if the value in this dimension is greater than 0, and to 0 otherwise. Photos in the same bucket are deemed potential duplicates and are further filtered by a threshold in terms of Euclidean similarity in the visual feature space.
Figure 2 illustrates the hashing procedure, where visual features -- mean gray values -- are extracted on both $6 \times 6$ and $7 \times 7$ grids. The 85-dimensional features are transformed to a 32-dimensional vector, and the hash code is generated according to the signs.
Figure 2: Hashing procedure for duplicate photo detection
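To make the hashing step concrete, here is a minimal sketch of sign-based hash codes over PCA-projected grid features, followed by within-bucket filtering. It is not the Dedup implementation of [10]; the grid sizes, the number of kept PCA dimensions, the distance threshold, and all function names are illustrative assumptions.

```python
import numpy as np

def grid_features(gray_image, grids=(6, 7)):
    """Mean gray value on k x k regular grids (6x6 plus 7x7 gives 85 dims)."""
    h, w = gray_image.shape
    feats = []
    for k in grids:
        ys = np.linspace(0, h, k + 1, dtype=int)
        xs = np.linspace(0, w, k + 1, dtype=int)
        for i in range(k):
            for j in range(k):
                feats.append(gray_image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean())
    return np.array(feats)

def sign_hash_codes(feature_matrix, n_components=29):
    """Center the features, project onto the top PCA directions (via SVD),
    and keep one bit per dimension: 1 if the coordinate is positive, else 0."""
    X = feature_matrix - feature_matrix.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    projected = X @ vt[:n_components].T
    return (projected > 0).astype(np.uint8)

def candidate_duplicates(codes, feature_matrix, threshold=1.0):
    """Photos sharing an identical hash code fall into the same bucket and are
    then filtered by Euclidean distance in the original feature space."""
    buckets = {}
    for idx, code in enumerate(map(tuple, codes)):
        buckets.setdefault(code, []).append(idx)
    pairs = []
    for members in buckets.values():
        for a in range(len(members)):
            for b in range(a + 1, len(members)):
                i, j = members[a], members[b]
                if np.linalg.norm(feature_matrix[i] - feature_matrix[j]) < threshold:
                    pairs.append((i, j))
    return pairs
```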
2.5
Score Fusion
In this section, we will present two solutions on score fusion based on different parametric form assumptions of $f_k$ in Eq. 1.
2.5.1
Linear Fusion by Duplicate Photos
Intuitively, the most straightforward way to factor out the uncertainties caused by the different criteria is to scale, relative to a given center, the total scores of each unreferenced Web photo forum with respect to the reference forum. More strictly, we assume $f_k$ has the following form
$$f_k(S_{ki}) = \alpha_k S_{ki} + t_k, \quad k = 2, \ldots, K \qquad (3)$$
$$f_1(S_{1i}) = S_{1i} \qquad (4)$$
which means that the scores of the $k$th ($k \neq 1$) forum should be scaled by $\alpha_k$ relative to the center $t_k/(1-\alpha_k)$, as shown in Figure 3. Then, if we substitute the above $f_k$ into Eq. 1, we get the following objective function,
$$\min_{\{\alpha_k, t_k \mid k = 2, \ldots, K\}} \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} w^k_i \left[ S^{1k}_i - \alpha_k S^{k1}_i - t_k \right]^2. \qquad (5)$$
By solving the following set of equations,
$$\frac{\partial f}{\partial \alpha_k} = 0, \quad \frac{\partial f}{\partial t_k} = 0, \quad k = 2, \ldots, K$$
where $f$ is the objective function defined in Eq. 5, we get the closed-form solution as:
$$\begin{pmatrix} \alpha_k \\ t_k \end{pmatrix} = A_k^{-1} L_k \qquad (6)$$
where
$$A_k = \begin{pmatrix} \sum_i w_i (S^{k1}_i)^2 & \sum_i w_i S^{k1}_i \\ \sum_i w_i S^{k1}_i & \sum_i w_i \end{pmatrix} \qquad (7)$$
$$L_k = \begin{pmatrix} \sum_i w_i S^{1k}_i S^{k1}_i \\ \sum_i w_i S^{1k}_i \end{pmatrix} \qquad (8)$$
and $k = 2, \ldots, K$.
This is a linear fusion method. It enjoys simplicity and
excellent performance in the following experiments.
Figure 3: Linear Fusion method
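The closed-form solution of Eqs. (6)-(8) amounts to a 2x2 weighted least-squares fit per non-reference forum. A minimal sketch, where the array names and the uniform default weights are assumptions of this illustration:

```python
import numpy as np

def fit_linear_fusion(s_ref, s_k, w=None):
    """Given duplicate-photo scores in the reference forum (s_ref, i.e. S^{1k})
    and in forum k (s_k, i.e. S^{k1}), solve (alpha_k, t_k)^T = A_k^{-1} L_k."""
    s_ref = np.asarray(s_ref, dtype=float)
    s_k = np.asarray(s_k, dtype=float)
    w = np.ones_like(s_k) if w is None else np.asarray(w, dtype=float)
    A = np.array([[np.sum(w * s_k ** 2), np.sum(w * s_k)],
                  [np.sum(w * s_k),      np.sum(w)]])
    L = np.array([np.sum(w * s_ref * s_k), np.sum(w * s_ref)])
    alpha_k, t_k = np.linalg.solve(A, L)
    return alpha_k, t_k

def apply_linear_fusion(scores_k, alpha_k, t_k):
    """Eq. (3): map all of forum k's scores onto the reference forum's scale."""
    return alpha_k * np.asarray(scores_k, dtype=float) + t_k
```

Once alpha_k and t_k are fitted from the duplicate pairs alone, they are applied to every photo in forum k, not only to the duplicates.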
2.5.2
Nonlinear Fusion by Duplicate Photos
Sometimes we want a method which can adjust scores on intervals while leaving the two endpoints unchanged. As illustrated in Figure 4, the method can tune scores between $[C_0, C_1]$ while leaving the scores $C_0$ and $C_1$ unchanged. This kind of fusion method is much finer than the linear one, contains many more parameters to tune, and is expected to further improve the results.
Here, we propose a nonlinear fusion solution to satisfy such constraints. First, we introduce a transform:
$$\varphi_{c_0, c_1, \gamma}(x) = \begin{cases} \left( \dfrac{x - c_0}{c_1 - c_0} \right)^{\gamma} (c_1 - c_0) + c_0, & \text{if } x \in (c_0, c_1] \\ x, & \text{otherwise} \end{cases}$$
where $\gamma > 0$. This transform satisfies that for $x \in [c_0, c_1]$, $\varphi_{c_0, c_1, \gamma}(x) \in [c_0, c_1]$ with $\varphi_{c_0, c_1, \gamma}(c_0) = c_0$ and $\varphi_{c_0, c_1, \gamma}(c_1) = c_1$. Then we can utilize this nonlinear transform to adjust the scores in a certain interval, say $(M, T]$,
$$f_k(S_{ki}) = \varphi_{M, T, \gamma_k}(S_{ki}). \qquad (9)$$
Figure 4: Nonlinear Fusion method. We intend to finely adjust the shape of the curves in each segment.
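A short sketch of the endpoint-preserving transform behind Eq. (9); the exponent name gamma and the vectorized formulation are our own choices for this illustration.

```python
import numpy as np

def phi(x, c0, c1, gamma):
    """Bend scores inside (c0, c1] with exponent gamma (> 0) while keeping the
    endpoints fixed: phi(c0) = c0 and phi(c1) = c1; values outside are untouched."""
    x = np.asarray(x, dtype=float)
    out = x.copy()
    inside = (x > c0) & (x <= c1)
    out[inside] = ((x[inside] - c0) / (c1 - c0)) ** gamma * (c1 - c0) + c0
    return out
```

In the fusion setting, c0 and c1 would be the mode and 90th-percentile points of Section 2.2, with one exponent fitted per non-reference forum by numerically minimizing the objective given below.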
Even though there is no closed-form solution for the following optimization problem,
$$\min_{\{\gamma_k \mid k \in [2, K]\}} \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} w^k_i \left[ S^{1k}_i - \varphi_{M, T, \gamma_k}(S^{k1}_i) \right]^2$$
it is not hard to obtain a numeric one. Under the same assumptions made in Section 2.2, we can use this method to adjust scores of the middle level (from the mode point to the 90% percentile).
This more complicated non-linear fusion method is expected
to achieve better results than the linear one. However
, difficulties in evaluating the rank results block us from
tuning these parameters extensively. The current experiments
in Section 3.5 do not reveal any advantages over the
simple linear model.
2.5.3
Performance Measure of the Fusion Results
Since our objective function is to make the scores of the
same Web objects (e.g. duplicate photos) between a non-reference
forum and the reference forum as close as possible,
it is natural to investigate how close they become to each
other and how the scores of the same Web objects change
between the two non-reference forums before and after score
fusion.
Taking Figure 1 as an example, the proposed algorithms minimize the score differences of the same Web objects in two Web forums: the reference forum (Web Community 1) and a non-reference forum, which corresponds to minimizing the objective function on the red dashed (hidden) links. After the optimization, we must ask what happens to the score differences of the same Web objects in two non-reference forums. Or, in other words, do the scores of two objects linked by the green dashed (hidden) links become more consistent?
We therefore define the following performance measure -- the $\delta$ measure -- to quantify the changes for scores of the same Web objects in different Web forums as
$$\delta_{kl} = \mathrm{Sim}(\tilde{\mathbf{S}}^{lk}, \tilde{\mathbf{S}}^{kl}) - \mathrm{Sim}(\mathbf{S}^{lk}, \mathbf{S}^{kl}) \qquad (10)$$
where $\mathbf{S}^{kl} = (S^{kl}_1, \ldots, S^{kl}_{I_{kl}})^T$, $\tilde{\mathbf{S}}^{kl} = (\tilde{S}^{kl}_1, \ldots, \tilde{S}^{kl}_{I_{kl}})^T$ and
$$\mathrm{Sim}(\mathbf{a}, \mathbf{b}) = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\| \, \|\mathbf{b}\|}.$$
$\delta_{kl} > 0$ means that after score fusion, scores on the same Web objects between the $k$th and $l$th Web forums become more consistent, which is what we expect. On the contrary, if $\delta_{kl} < 0$, those scores become more inconsistent.
Although we cannot rely on this measure to evaluate our final fusion results, as ranking photos by their popularity and quality is such a subjective process that every person can have his or her own results, it can help us understand the intermediate ranking results and provide insights into the final performance of different ranking methods.
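A minimal sketch of this measure (variable names are ours): cosine similarity of the duplicate-score vectors after fusion minus the same similarity before fusion.

```python
import numpy as np

def cosine(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def delta_measure(s_lk, s_kl, fused_lk, fused_kl):
    """Eq. (10): delta_kl = Sim(fused S^{lk}, fused S^{kl}) - Sim(S^{lk}, S^{kl}).
    A positive value means the duplicate photos' scores in the two forums
    became more consistent after fusion."""
    return cosine(fused_lk, fused_kl) - cosine(s_lk, s_kl)
```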
2.6
Contrasts with Other Related Work
We have already mentioned the differences between the proposed methods and traditional methods, such as the PageRank [22], PopRank [21], and LinkFusion [27] algorithms, in Section 1. Here, we discuss some other related work.
The current problem can also be viewed as a rank aggregation
one [13, 14] as we deal with the problem of how to
combine several rank lists. However, there are fundamental
differences between them. First of all, unlike Web pages, which can be easily and accurately detected as the same pages, detecting the same photos in different Web forums is non-trivial work, and can only be implemented by some delicate algorithms with certain precision and recall. Second, the numbers of duplicate photos from different Web forums are small relative to the whole photo sets (see Table 1). In other words, the top $K$ rank lists of different Web forums are almost disjoint for a given query. Under this condition, both the algorithms proposed in [13] and their measurements -- Kendall tau distance or Spearman footrule distance -- will degenerate to some trivial cases.
Another category of rank fusion (aggregation) methods is based on machine learning algorithms, such as RankSVM [17, 19], RankBoost [15], and RankNet [12]. All of these methods require labelled datasets to train a model. In the current setting, it is difficult or even impossible to get these datasets labelled as to their level of professionalism or popularity, since the photos are too vague and subjective to rank. Instead, the problem here is how to combine several ordered sublists to form a total ordered list.
EXPERIMENTS
In this section, we carry out our research on high-quality photo search. We first briefly introduce the newly proposed vertical image search engine -- EnjoyPhoto -- in Section 3.1. Then we focus on how to rank photos from different Web forums. In order to do so, we first normalize the scores (ratings) for photos from multiple Web forums in Section 3.2. Then we try to find duplicate photos in Section 3.3. Some intermediate results are discussed using the $\delta$ measure in Section 3.4. Finally, a set of user studies is carried out carefully to justify our proposed method in Section 3.5.
3.1
EnjoyPhoto: high-quality Photo Search
Engine
In order to meet the user requirement of enjoying high-quality photos, we propose and build a high-quality photo search engine -- EnjoyPhoto, which accounts for the following three key issues: 1. how to crawl and index photos, 2. how to determine the quality of each photo, and 3. how to display the search results in order to make the search process enjoyable. For a given text-based query, this system ranks the photos based on a certain combination of the relevance of the photo to this query (Issue 1) and the quality of the photo (Issue 2), and finally displays them in an enjoyable manner (Issue 3).
As for Issue 3, we devise the interface of the system deliberately in order to smooth the users' process of enjoying high-quality photos. Techniques such as fisheye views and slide shows are utilized in the current system. Figure 5 shows the interface. We will not talk more about this issue as it is not an emphasis of this paper.
Figure 5:
EnjoyPhoto:
an enjoyable high-quality
photo search engine, where 26,477 records are returned
for the query "fall" in about 0.421 seconds
As for Issue 1, we extracted from a commercial search engine
a subset of photos coming from various photo forums
all over the world, and explicitly parsed the Web pages containing
these photos. The number of photos in the data collection
is about 2.5 million. After the parsing, each photo
was associated with its title, category, description, camera
setting, EXIF data
1
(when available for digital images), location
(when available in some photo forums), and many
kinds of ratings. All these metadata are generally precise
descriptions or annotations for the image content, which are
then indexed by general text-based search technologies [9,
18, 11]. In current system, the ranking function was specifically
tuned to emphasize title, categorization, and rating
information.
Issue 2 is essentially dealt with in the following sections
which derive the quality of photos by analyzing ratings provided
by various Web photo forums. Here we chose six photo
forums to study the ranking problem and denote them as
Web-A, Web-B, Web-C, Web-D, Web-E and Web-F.
3.2
Photo Score Normalization
Different score normalization methods are analyzed in detail in this section.
1
Digital cameras save JPEG (.jpg) files with EXIF (Exchangeable
Image File) data. Camera settings and scene
information are recorded by the camera into the image file.
www.digicamhelp.com/what-is-exif/
Figure 6: Distributions of mean scores normalized to [0, 10] (panels (a)-(f): Web-A to Web-F; each panel plots Total Number against Normalized Score)
In this analysis, the zero scores, which usually account for more than 30% of the total number of photos in some Web forums, are not currently taken into account. How to utilize these photos is left for future exploration.
In Figure 6, we list the distributions of the mean score, which is transformed to a fixed interval [0, 10]. The distributions of the average scores of these Web forums look quite different. The distributions in Figures 6(a), 6(b), and 6(e) look like Gaussian distributions, while those in Figures 6(d) and 6(f) are dominated by the top score. The reason for these eccentric distributions for Web-D and Web-F lies in their coarse rating systems. In fact, Web-D and Web-F use 2- or 3-point rating scales whereas the other Web forums use 7- or 14-point rating scales. Therefore, it would be problematic if we directly used these averaged scores. Furthermore, the average score is very likely to be spammed if there are only a few users rating a photo.
Figure 7 shows the total score normalization method by maximal and minimal scores, which is one of our baseline systems. All the total scores of a given Web forum are normalized to [0, 100] according to the maximal score and minimal score of the corresponding Web forum. We notice that the total score distribution of Web-A in Figure 7(a) has two larger tails than all the others. To show the shape of the distributions more clearly, we only show the distributions on [0, 25] in Figures 7(b), 7(c), 7(d), 7(e), and 7(f).
Figure 8 shows the Mode-90% Percentile normalization method, where the modes of the six distributions are normalized to 5 and the 90% percentiles to 8. We can see that this normalization method makes the distributions of total scores in different forums more consistent. The two proposed algorithms are both based on these normalization methods.
3.3
Duplicate photo detection
Targeting computational efficiency, the Dedup algorithm may lose some recall rate, but can achieve a high precision rate. We also focus on finding precise hidden links rather than all hidden links. Figure 9 shows some duplicate detection examples. The results are shown in Table 1 and verify that large numbers of duplicate photos exist between any two Web forums, even with the strict condition for Dedup where we chose the first 29 bits as the hash code. Since there are only a few parameters to estimate in the proposed fusion methods, the numbers of duplicate photos shown in Table 1 are sufficient to determine these parameters. The last table column lists the total number of photos in the corresponding Web forums.
Figure 7: Maxmin Normalization (panels (a)-(f): Web-A to Web-F; each panel plots Total Number against Normalized Score)
Figure 8: Mode-90% Percentile Normalization (panels (a)-(f): Web-A to Web-F; each panel plots Total Number against Normalized Score)
3.4
$\delta$ Measure
The parameters of the proposed linear and nonlinear algorithms are calculated using the duplicate data shown in Table 1, where Web-C is chosen as the reference Web forum since it shares the most duplicate photos with the other forums.
Tables 2 and 3 show the $\delta$ measure for the linear model and the nonlinear model. As $\delta_{kl}$ is symmetric and $\delta_{kk} = 0$, we only show the upper triangular part. The NaN values in both tables arise because no duplicate photos have been detected by the Dedup algorithm, as reported in Table 1.
Table 1: Number of duplicate photos between each pair of Web forums

      A       B        C        D      E        F      Scale
A     0       316      1,386    178    302      0      130k
B     316     0        14,708   909    8,023    348    675k
C     1,386   14,708   0        1,508  19,271   1,083  1,003k
D     178     909      1,508    0      1,084    21     155k
E     302     8,023    19,271   1,084  0        98     448k
F     0       348      1,083    21     98       0      122k

Figure 9: Some results of duplicate photo detection

Table 2: The $\delta$ measure on the linear model.

         Web-B    Web-C    Web-D    Web-E    Web-F
Web-A    0.0659   0.0911   0.0956   0.0928   NaN
Web-B             0.0672   0.0578   0.0791   0.4618
Web-C                      0.0105   0.0070   0.2220
Web-D                               0.0566   0.0232
Web-E                                        0.6525
The linear model guarantees that the $\delta$ measures related to the reference community should theoretically be no less than 0. This is indeed the case (see the underlined numbers in Table 2). But this model cannot guarantee that the $\delta$ measures on the non-reference communities are also no less than 0, as the normalization steps are based on duplicate photos between the reference community and a non-reference community. The results show that all the numbers in the $\delta$ measure are greater than 0 (see all the non-underlined numbers in Table 2), which indicates that it is probable that this model will give optimal results.
On the contrary, the nonlinear model does not guarantee that the $\delta$ measures related to the reference community are no less than 0, as not all duplicate photos between two Web forums can be used when optimizing this model. In fact, the duplicate photos that lie in different intervals will not be used in this model. It is these specific duplicate photos that make the $\delta$ measure negative. As a result, there are both negative and positive items in Table 3, but overall the number of positive ones is greater than that of negative ones (9:5), which indicates that the model may be better than the "normalization only" method (see the next subsection), which has an all-zero $\delta$ measure, and worse than the linear model.

Table 3: The $\delta$ measure on the nonlinear model.

         Web-B    Web-C     Web-D     Web-E     Web-F
Web-A    0.0559   0.0054    -0.0185   -0.0054   NaN
Web-B             -0.0162   -0.0345   -0.0301   0.0466
Web-C                       0.0136    0.0071    0.1264
Web-D                                 0.0032    0.0143
Web-E                                           0.214
3.5
User Study
Because it is hard to find an objective criterion to evaluate
which ranking function is better, we chose to employ user
studies for subjective evaluations. Ten subjects were invited
to participate in the user study. They were recruited from
nearby universities. As search engines of both text search
and image search are familiar to university students, there
was no prerequisite criterion for choosing students.
We conducted user studies using Internet Explorer 6.0 on
Windows XP with 17-inch LCD monitors set at 1,280 pixels
by 1,024 pixels in 32-bit color.
Data was recorded with
server logs and paper-based surveys after each task.
Figure 10: User study interface
We specifically devise an interface for the user study, as shown in Figure 10. For each pair of fusion methods, participants were encouraged to try any query they wished. For those without specific ideas, two combo boxes (a category list and a query list) were provided on the bottom panel, offering the top 1,000 image search queries from a commercial search engine. After a participant submitted a query, the system randomly selected the left or right frame to display each of the two ranking results. The participant was then required to judge which of the two ranking results was better, or whether the two ranking results were of equal quality, and to submit the judgment by choosing the corresponding radio button and clicking the "Submit" button. For example, in Figure 10, the query "sunset" is submitted to the system. Then, 79,092 photos were returned and ranked by the Minmax fusion method in the left frame and the linear fusion method in the right frame. A participant then compares the two ranking results (without knowing the ranking methods) and submits his/her feedback by choosing an answer under "Your option."
Table 4: Results of user study

            Norm. Only   Manually    Linear
Linear      29:13:10     14:22:15    -
Nonlinear   29:15:9      12:27:12    6:4:45
Table 4 shows the experimental results, where "Linear"
denotes the linear fusion method, "Nonlinear" denotes the
non linear fusion method, "Norm. Only" means Maxmin
normalization method, "Manually" means the manually tuned
method.
The three numbers in each item, say 29:13:10,
mean that 29 judgments prefer the linear fusion results, 10
384
judgments prefer the normalization only method, and 13
judgments consider these two methods as equivalent.
We conducted an ANOVA analysis and obtained the following conclusions:
1. Both the linear and nonlinear methods are significantly better than the "Norm. Only" method, with respective P-values 0.00165 (< 0.05) and 0.00073 (<< 0.05). This result is consistent with the $\delta$-measure evaluation result. The "Norm. Only" method assumes that the top 10% of photos in different forums are of the same quality. However, this assumption does not hold in general. For example, a top 10% photo in a top-tier photo forum is generally of higher quality than a top 10% photo in a second-tier photo forum. This is similar to the fact that the top 10% of students in a top-tier university and those in a second-tier university are generally of different quality. Both the linear and nonlinear fusion methods acknowledge the existence of such differences and aim at quantifying the differences. Therefore, they perform better than the "Norm. Only" method.
2. The linear fusion method is significantly better than the nonlinear one, with P-value $1.195 \times 10^{-10}$. This result is rather surprising, as the more complicated ranking method is expected to tune the ranking more finely than the linear one. The main reason for this result may be that it is difficult to find the best intervals where the nonlinear tuning should be carried out, and yet we simply chose the middle part of the Mode-90% Percentile Normalization method. The time-consuming and subjective evaluation method -- user studies -- prevented us from extensively tuning these parameters.
3. The proposed linear and nonlinear methods perform almost the same as, or slightly better than, the manually tuned method. Given that the linear/nonlinear fusion methods are fully automatic approaches, they are considered practical and efficient solutions when more communities (e.g., dozens of communities) need to be integrated.
CONCLUSIONS AND FUTURE WORK
In this paper, we studied the Web object-ranking problem
in the cases of lacking object relationships where traditional
ranking algorithms are no longer valid, and took
high-quality photo search as the test bed for this investigation
. We have built a vertical high-quality photo search
engine, and proposed score fusion methods which can automatically
integrate as many data sources (Web forums) as
possible. The proposed fusion methods leverage the hidden
links discovered by duplicate photo detection algorithm, and
minimize score differences of duplicate photos in different
forums. Both the intermediate results and the user studies
show that the proposed fusion methods are a practical
and efficient solution to Web object ranking in the aforesaid
relationships. Though the experiments were conducted
on high-quality photo ranking, the proposed algorithms are
also applicable to other kinds of Web objects including video
clips, poems, short stories, music, drawings, sculptures, and
so on.
Current system is far from being perfect. In order to make
this system more effective, more delicate analysis for the
vertical domain (e.g., Web photo forums) are needed. The
following points, for example, may improve the searching
results and will be our future work: 1. more subtle analysis
and then utilization of different kinds of ratings (e.g.,
novelty ratings, aesthetic ratings); 2. differentiating various
communities who may have different interests and preferences
or even distinct culture understandings; 3. incorporating
more useful information, including photographers' and
reviewers' information, to model the photos in a heterogeneous
data space instead of the current homogeneous one.
We will further utilize collaborative filtering to recommend
relevant high-quality photos to browsers.
One open problem is whether we can find an objective and
efficient criterion for evaluating the ranking results, instead
of employing subjective and inefficient user studies, which
blocked us from trying more ranking algorithms and tuning
parameters in one algorithm.
ACKNOWLEDGMENTS
We thank Bin Wang and Zhi Wei Li for providing Dedup
codes to detect duplicate photos; Zhen Li for helping us
design the interface of EnjoyPhoto; Ming Jing Li, Longbin
Chen, Changhu Wang, Yuanhao Chen, and Li Zhuang etc.
for useful discussions. Special thanks go to Dwight Daniels
for helping us revise the language of this paper.
REFERENCES
[1] Google image search. http://images.google.com.
[2] Google local search. http://local.google.com/.
[3] Google news search. http://news.google.com.
[4] Google paper search. http://Scholar.google.com.
[5] Google product search. http://froogle.google.com.
[6] Google video search. http://video.google.com.
[7] Scientific literature digital library. http://citeseer.ist.psu.edu.
[8] Yahoo image search. http://images.yahoo.com.
[9] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. New York: ACM Press; Harlow, England: Addison-Wesley, 1999.
[10] W. Bin, L. Zhiwei, L. Ming Jing, and M. Wei-Ying. Large-scale duplicate detection for web image search. In Proceedings of the International Conference on Multimedia and Expo, page 353, 2006.
[11] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Computer Networks, volume 30, pages 107-117, 1998.
[12] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pages 89-96, 2005.
[13] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proceedings of the 10th International Conference on World Wide Web, pages 613-622, Hong Kong, 2001.
[14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing top k lists. SIAM Journal on Discrete Mathematics, 17(1):134-160, 2003.
[15] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4(1):933-969(37), 2004.
[16] IMDB. Formula for calculating the top rated 250 titles in imdb. http://www.imdb.com/chart/top.
[17] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 133-142, 2002.
[18] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999.
[19] R. Nallapati. Discriminative models for information retrieval. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 64-71, 2004.
[20] Z. Nie, Y. Ma, J.-R. Wen, and W.-Y. Ma. Object-level web information retrieval. In Technical Report of Microsoft Research, volume MSR-TR-2005-11, 2005.
[21] Z. Nie, Y. Zhang, J.-R. Wen, and W.-Y. Ma. Object-level ranking: Bringing order to web objects. In Proceedings of the 14th international conference on World Wide Web, pages 567-574, Chiba, Japan, 2005.
[22] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford Digital Libraries, 1998.
[23] A. Savakis, S. Etz, and A. Loui. Evaluation of image appeal in consumer photography. In SPIE Human Vision and Electronic Imaging, pages 111-120, 2000.
[24] D. Sullivan. Hitwise search engine ratings. Search Engine Watch Articles, http://searchenginewatch.com/reports/article.php/3099931, August 23, 2005.
[25] S. Susstrunk and S. Winkler. Color image quality on the internet. In IS&T/SPIE Electronic Imaging 2004: Internet Imaging V, volume 5304, pages 118-131, 2004.
[26] H. Tong, M. Li, Z. H.J., J. He, and Z. C.S. Classification of digital photos taken by photographers or home users. In Pacific-Rim Conference on Multimedia (PCM), pages 198-205, 2004.
[27] W. Xi, B. Zhang, Z. Chen, Y. Lu, S. Yan, W.-Y. Ma, and E. A. Fox. Link fusion: a unified link analysis framework for multi-type interrelated data objects. In Proceedings of the 13th international conference on World Wide Web, pages 319-327, 2004.
| image search;ranking;Web objects |
166 | Real-world Oriented Information Sharing Using Social Networks | While users disseminate various information in the open and widely distributed environment of the Semantic Web, determination of who shares access to particular information is at the center of looming privacy concerns. We propose a real-world -oriented information sharing system that uses social networks. The system automatically obtains users' social relationships by mining various external sources. It also enables users to analyze their social networks to provide awareness of the information dissemination process. Users can determine who has access to particular information based on the social relationships and network analysis. | INTRODUCTION
With the current development of tools and sites that enable
users to create Web content, users have become able to
easily disseminate various information. For example, users
create Weblogs, which are diary-like sites that include various
public and private information. Furthermore, the past
year has witnessed the emergence of social networking sites
that allow users to maintain an online network of friends
or associates for social or business purposes. Therein, data
related to millions of people and their relationships are publicly
available on the Web.
Although these tools and sites enable users to easily disseminate
information on the Web, users sometimes have difficulty
in sharing information with the right people and frequently
have privacy concerns because it is difficult to determine
who has access to particular information on such
applications. Some tools and applications provide control
over information access. For example, Friendster, a huge
social networking site, offers several levels of control from
"public information" to "only for friends". However, it provides
only limited support for access control.
An appropriate information sharing system that enables
all users to control the dissemination of their information
is needed to use tools and sites such as Weblog, Wiki, and
social networking services fully as an infrastructure of disseminating
and sharing information. In the absence of such
a system, a user would feel unsafe and would therefore be
discouraged from disseminating information.
How can we realize such an information sharing system
on the Web? One clue exists in the information sharing
processes of the real world. Information availability is often closely guarded and shared only with the people in one's social relationships. Confidential project documents which have limited distribution within a division of a company might be made accessible to other colleagues who are concerned with the project. Private family photographs might be shared
not only with relatives, but also with close friends. A professor
might access a private research report of her student.
We find that social relationships play an important role in
the process of disseminating and receiving information. This
paper presents a real-world oriented information sharing system
using social networks. It enables users to control the
information dissemination process within social networks.
The remainder of this paper is organized as follows: section
2 describes the proposed information sharing system
using social networks. In section 3, we describe the application
of our system. Finally, we conclude this paper in
section 4.
INFORMATION SHARING USING SOCIAL NETWORKS
Figure 1 depicts the architecture of the proposed information
sharing system. The system functions as a "plug-in"
for applications so that external applications enable users
to leverage social networks to manage their information dissemination.

Figure 1: Architecture of the proposed information sharing system (applications such as Weblogs, Wikis, CMSs, and SNSs exchange content data and access requests with components for social network extraction from the Web, email, and sensors, a social network editor storing FOAF data, an access control list editor storing XACML data, and social network analysis)

Figure 2: Two kinds of relationships (event-participation relationships and common-property relationships between persons)

A user can attach an access control list to his
content using his social network when creating content on
an application. Then, when the application receives a request
to access the content, it determines whether to grant
the request based on the access control list.
Because users determine the access control to information
based on the social network, the system requires social
network data. The system obtains users' social networks automatically
by mining various external sources such as Web,
emails, and sensor information; subsequently, it maintains a
database of the social network information. Users can adjust
the network if necessary.
The system enables users to analyze their social network
to provide awareness of the information dissemination process
within the social network. Using social relationships
and the results of social network analyses, users can decide
who can access their information.
Currently, the proposed system is applied to an academic
society because researchers have various social relationships
(e.g., from a student to a professor, from a company to a university
) through their activities such as meetings, projects,
and conferences. Importantly, they often need to share various
information such as papers, ideas, reports, and schedules
. Sometimes, such information includes private or confidential
information that ought only to be shared with appropriate
people. In addition, researchers have an interest
in managing the information availability of their social relationships
. Information about the social relationships of an academic society, in particular computer science, is readily available online. Such information is important for obtaining social networks automatically.
Hereafter, we explain in detail how social networks are
modeled, extracted and analyzed.
Then we explain how
users can decide to control information access using social
networks.
2.1
Representation of Social Relationships
With the variety of social relationships that exist in the
real world, a salient problem has surfaced: integration and
consolidation on a semantic basis. The representation of
social relationships must be sufficiently fine-grained that we
can capture all details from individual sources of information
in a way that these can be recombined later and taken as
evidence of a certain relationship.
Several representations of social relationships exist. For
example, social network sites often simplify the relationship
as "friend" or "acquaintance". In the Friend of a Friend
(FOAF) [1] vocabulary, which is one of the Semantic Web's
largest and most popular ontologies for describing people
and whom they know, many kinds of relationships between
people are deliberately simplified as "knows" relations. A
rich ontological consideration of social relationships is needed
for characterization and analysis of individual social networks
.
We define two kinds of social relationship (Fig. 2) [7].
The first basic structure of social relationship is a person's
participation in an event. Social relationships come into existence
through events involving two or more individuals.
Such events might not require personal contact, but they
must involve social interaction. From this event, social relationships
begin a lifecycle of their own, during which the
characteristics of the relationship might change through interaction
or the lack thereof. An event is classified as a perdurant
in the DOLCE ontology [6], which is a popular foundational ontology.
For example, an event might be a meeting, a conference, a
baseball game, a walk, etc. Assume that person
A and person
B participate in Event X. In that situation, we note
that
A and B share an event co-participation relationship
under event
X.
A social relationship might have various social roles associated
with it. For example, a student-professor relationship
within a university setting includes an individual playing the
role of a professor; another individual plays the role of a student
. If
A and B take the same role to Event X, they are in
a same role relationship under event
X (e.g., students at a
class, colleagues in a workspace). If
A cannot take over B's
role or vice versa,
A and B are in a role-sharing relationship
(e.g., a professor and students, a project leader and staff).
Another kind of social relationship is called a common
property relationship. Sharing the same property value generates
a common property relationship between people. For
example, person
A and person B have a common working
place, common interests, and common experiences. Consequently
, they are in a common property relationship with
regard to those common properties.
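The two relationship kinds described above can be made concrete with a small sketch. The following Python fragment is purely illustrative; the class and field names are our own, and the actual system encodes relationships in an extended FOAF vocabulary rather than in code:

# Illustrative-only data model for the two relationship kinds described
# above; the real system represents them in an extended FOAF vocabulary.
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    properties: dict = field(default_factory=dict)    # e.g. {"workplace": "AIST"}

@dataclass
class Event:
    name: str                                          # meeting, conference, project, ...
    participants: dict = field(default_factory=dict)   # person name -> role in the event

def event_relationship(a, b, ev):
    """Classify the event-based relationship of a and b under event ev."""
    if a.name not in ev.participants or b.name not in ev.participants:
        return None
    if ev.participants[a.name] == ev.participants[b.name]:
        return "same-role"        # e.g. two students attending the same class
    return "role-sharing"         # e.g. a professor and a student

def common_property_relationship(a, b):
    """Properties (workplace, interests, ...) whose values a and b share."""
    return [k for k, v in a.properties.items() if b.properties.get(k) == v]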
2.2
Extraction of Social Networks
If two persons are in either an event co-participation relationship
or a common property relationship, they often
communicate. The communication media can be diverse:
Figure 3: Editor for social relationships
Figure 4: Editor for analyzing social networks and
assigning an access control list to content
face-to-face conversation, telephone call, email, chat, online
communication on Weblogs, and so on. If we wish to discover
the social relationship by observation, we must estimate
relationships from superficial communication. The
emerging field of social network mining provides methods
for discovering social interactions and networks from legacy
sources such as web pages, databases, mailing lists, and personal
emails.
Currently, we use three kinds of information sources to obtain
social relationships using mining techniques. From the
Web, we extract social networks using a search engine and
the co-occurrence of two persons' names on the Web. Consequently
, we can determine the following relationships among
researchers: Coauthor, Same affiliation, Same project, Same
event (participants of the same conference, workshop, etc.)
[8]. Coauthor and Same event correspond to an event co-participation
relationship. Same affiliation and same project
correspond to a common property relationship. We are also
using other sources such as email and sensors (we are developing
a device that detects users within social spaces such
as parties and conferences) to obtain social relationships.
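As a rough sketch of the Web-based part of this mining step, a co-occurrence measure over search-engine hit counts can be computed as follows. This is only one plausible formulation (a Jaccard-style coefficient), and hit_count is a placeholder for whatever search API is used, not a function taken from [8]:

# Sketch of co-occurrence-based relation mining; hit_count(query) is a
# placeholder for a search-engine lookup, not an API from the cited work.

def relatedness(name_a, name_b, hit_count, min_hits=30):
    """Jaccard-style co-occurrence score between two researchers' names."""
    a = hit_count(f'"{name_a}"')
    b = hit_count(f'"{name_b}"')
    if min(a, b) < min_hits:                   # too little evidence for a person
        return 0.0
    ab = hit_count(f'"{name_a}" "{name_b}"')   # pages mentioning both names
    return ab / float(a + b - ab)

def build_edges(names, hit_count, min_score=0.02):
    """Weighted edges between all pairs of names whose score passes a cutoff."""
    edges = []
    for i, x in enumerate(names):
        for y in names[i + 1:]:
            s = relatedness(x, y, hit_count)
            if s >= min_score:
                edges.append((x, y, s))
    return edges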
Necessarily, the quality of information obtained by mining
is expected to be inferior to that of manually authored profiles
. We can reuse those data if a user has already declared
his relationships in FOAF or profiles of social networking
services. Although users might find it difficult and demanding
to record social relations, it would be beneficial to ask
users to provide information to obtain social relationships.
In addition to the relationship type, another factor of the
social relationship is tie strength. Tie strength itself is a
complex construct of several characteristics of social relations. It can be defined in terms of affect, frequency, trust, complementarity, etc. No consensus exists for defining and measuring these characteristics, which means that people use different elicitation methods when determining tie strength.
For example, Orkut, a huge social networking service, allows
description of the strength of friendship relations on a
five-point scale from "haven't met" to "best friend", whereas
other sites might choose other scales or terms.
In our system, we use trust as a parameter of tie strength.
Trust has several very specific definitions. In [4], Golbeck
describes trust as credibility or reliability in a human sense:
"how much credence should I give to what this person speaks
about" and "based on what my friends say, how much should
I trust this new person?" In the context of information sharing
, trust can be regarded as reliability regarding "how a
person will handle my information". Users can give trust
directly in a numerical value to a person in his relation.
Alternatively, trust can be obtained automatically as the authoritativeness
of each person using the social network [8].
The obtained social network data are integrated as extended
FOAF files and stored in database. Users can adjust
networks if needed (Fig. 3). The social relationship and its
tie strength become guiding principles when a user determines
an access control list to information.
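As a simplified illustration of how relationship type and trust might be combined in an access decision (the actual system stores such policies in extended XACML, and the field names below are ours, not the system's):

# Simplified, illustrative access decision combining relationship type and
# trust; the real system expresses such policies in extended XACML.

acl = {
    "allowed_relations": {"coauthor", "same-project"},
    "min_trust": 0.6,
}

def may_access(requester, owner_network, acl):
    """owner_network maps a person's name to {'relation': ..., 'trust': float}."""
    entry = owner_network.get(requester)
    if entry is None:
        return False                       # strangers are denied by default
    return (entry["relation"] in acl["allowed_relations"]
            and entry["trust"] >= acl["min_trust"])

network = {"alice": {"relation": "coauthor",   "trust": 0.8},
           "bob":   {"relation": "same-event", "trust": 0.9}}
assert may_access("alice", network, acl) is True
assert may_access("bob", network, acl) is False    # wrong relation type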
2.3
Social Network Analysis for Information
Sharing
The system enables users to analyze their social networks
to provide awareness of the information dissemination process
within the social network.
Social network analysis (SNA) is distinguishable from other
fields of sociology by its focus on relationships between actors
rather than attributes of actors, a network view, and
a belief that structure affects substantive outcomes.
Because
an actor's position in a network affects information
dissemination, SNA provides an important implication for
information sharing on the social network. For example, occupying
a favored position means that the actor will have
better access to information, resources, and social support.
The SNA models are based on graphs, with graph measures
, such as centrality, that are defined using a sociological
interpretation of graph structure. Freeman proposes numerous
ways to measure centrality [2]. Considering a social network
of actors, the simplest measure is to count the number
of others with whom an actor maintains relations. The actor
with the most connections, the highest degree, is most
central. This measure is called degreeness. Another measure
is closeness, which calculates the distance from each
actor in the network to every other actor based on connections
among all network members. Central actors are closer
to all others than are other actors. A third measure is betweenness
, which examines the extent to which an actor is
situated among others in the network, the extent to which
Figure 5: Web site for sharing research information
information must pass through them to get to others, and
consequently, the extent to which they are exposed to information
circulation within the network. If the betweenness
of an actor is high, it frequently acts as a local bridge that
connects the individual to other actors outside a group. In
terms of network ties, this kind of bridge is well known as
Granovetter's "weak tie" [5], which contrasts with "strong
tie" within a densely-closed group.
Because weak ties act as bridges between different groups, a large community often breaks up into a set of closely knit groups of individuals, woven together more loosely through occasional interaction among the groups. Based on this
theory, social network analysis offers a number of clustering
algorithms for identifying communities based on network
data.
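A toy example of the three centrality measures discussed above, using the networkx library on a small, made-up network (the node names and edges are arbitrary and are not data from the system):

# Toy illustration of degreeness, closeness, and betweenness on a small
# undirected social network, using the networkx library.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),    # a tightly knit group
    ("C", "D"),                            # the only bridge between the groups
    ("D", "E"), ("D", "F"), ("E", "F"),    # a second group
])

degree      = nx.degree_centrality(g)      # number of direct ties (normalized)
closeness   = nx.closeness_centrality(g)   # inverse average distance to others
betweenness = nx.betweenness_centrality(g) # how often a node lies on shortest paths

# C and D sit on the only bridge between the two groups, so their betweenness
# is highest; they act as Granovetter-style "weak tie" bridges.
print(max(betweenness, key=betweenness.get))   # prints "C" (tied with "D")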
The system provides users with these network analyses
(Fig. 4) so that they can decide who can access their information
. For example, if a user wants to diffuse her information
, she might consider granting access to a person (with
certain trust) who has both high degreeness and betweenness
. On the other hand, she must be aware of betweenness
when the information is private or confidential. Clustering
is useful when a user wishes to share information within a
certain group.
APPLICATION
To demonstrate and evaluate our system, we developed a
community site (Fig. 5) using communication tools such as
Weblogs, Wikis, and Forums. By that system, studies from
different organizations and projects can be disseminated and
their information thereby shared. Users can share various
information such as papers, ideas, reports, and schedules at
the site. Our system is integrated into a site that provides
access control to that information. Integrating our system
takes advantage of the open, information-sharing nature of the
communication tools. It also maintains the privacy of the
content and activities of those applications.
Users can manage their social networks (Fig. 3) and attach
the access control list to their content (e.g., Blog entries
, profiles, and Wiki pages) using extracted social relationships
and social network analysis (Fig. 4).
Once a user determines the access control list, she can save
it as her information access policy for the corresponding content.
The access policy is described using extended eXtensible
Access Control Markup Language (XACML) and is stored
in a database. She can reuse and modify the previous policy
if she subsequently creates similar content.
One feature of our system is that it is easily adaptable to
new applications because of its plug-and-play design. We
are planning to integrate it into various Web sites and applications
such as social network sites and RSS readers.
RELATED WORKS AND CONCLUSIONS
Goecks and Mynatt propose the Saori infrastructure, which
also uses social networks for information sharing [3]. They
obtain social networks from users' email messages and provide
sharing policies based on the type of information. We
obtain social networks from various sources and integrate
them into FOAF files. This facilitates the importation and
maintenance of social network data. Another feature is that
our system enables users to analyze their social networks.
Thereby, users can control information dissemination more
effectively and flexibly than through the use of pre-defined
policies.
As users increasingly disseminate their information on the
Web, privacy concerns demand that access to particular information
be limited.
We propose a real-world oriented
information sharing system using social networks. It enables
users to control the information dissemination process
within social networks, just as they do in the real world.
Future studies will evaluate the system with regard to how
it contributes to wider and safer information sharing than
would otherwise occur. We will also develop a distributed system
that can be used fully on the current Web.
REFERENCES
[1] D. Brickley and L. Miller. FOAF: the 'friend of a
friend' vocabulary. http://xmlns. com/foaf/0.1/, 2004.
[2] L. C. Freeman. Centrality in social networks:
Conceptual clarification. Social Networks, Vol. 1,
pp. 215-239, 1979.
[3] J. Goecks and E. D. Mynatt. Leveraging Social
Networks for Information Sharing In Proc. of
CSCW'04, 2004.
[4] J. Golbeck, J. Hendler, and B. Parsia. Trust networks
on the semantic web, in Proc. WWW 2003, 2003.
[5] M. Granovetter. The strength of weak ties. American
Journal of Sociology, Vol. 78, pp. 1360-1380, 1973.
[6] C. Masolo, S. Borgo, A. Gangemi, N. Guarinno, and
A. Oltramari. WonderWeb Deliverable D18,
http://wonderweb.semanticweb.org/deliverable/D18.shtml
[7] Y. Matsuo, M. Hamasaki, J. Mori, H. Takeda and K.
Hasida. Ontological Consideration on Human
Relationship Vocabulary for FOAF. In Proc. of the 1st
Workshop on Friend of a Friend, Social Networking
and Semantic Web, 2004.
[8] Y. Matsuo, H. Tomobe, K. Hasida, and M. Ishizuka.
Finding Social Network for Trust Calculation. In
Proc. of 16th European Conference on Artificial
Intelligence, 2004.
| Social network;Information sharing |
167 | Remote Access to Large Spatial Databases | Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet . However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the client-server architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions. | INTRODUCTION
In recent years, enterprises in the public and private sectors
have provided access to large volumes of spatial data
over the Internet. Interactive work with such large volumes
of online spatial data is a challenging task. We have been developing
an interactive browser for accessing spatial online
databases: the SAND (Spatial and Non-spatial Data) Internet
Browser. Users of this browser can interactively and
visually manipulate spatial data remotely. Unfortunately,
interactive remote access to spatial data slows to a crawl
without proper data access mechanisms. We developed two
separate methods for improving the system performance; together, they form a dynamic network infrastructure that is highly
scalable and provides a satisfactory user experience for interactions
with large volumes of online spatial data.
The core functionality responsible for the actual database
operations is performed by the server-based SAND system.
SAND is a spatial database system developed at the University
of Maryland [12].
The client-side SAND Internet
Browser provides a graphical user interface to the facilities
of SAND over the Internet. Users specify queries by choosing
the desired selection conditions from a variety of menus
and dialog boxes.
SAND Internet Browser is Java-based, which makes it deployable
across many platforms. In addition, since Java has
often been installed on target computers beforehand, our
clients can be deployed on these systems with little or no
need for any additional software installation or customization. The system can start being utilized immediately without
any prior setup which can be extremely beneficial in
time-sensitive usage scenarios such as emergencies.
There are two ways to deploy SAND. First, any standard
Web browser can be used to retrieve and run the client piece
(SAND Internet Browser) as a Java application or an applet.
This way, users across various platforms can continuously
access large spatial data on a remote location with little or
no need for any preceding software installation. The second
option is to use a stand-alone SAND Internet Browser along
with a locally-installed Internet-enabled database management
system (server piece). In this case, the SAND Internet
Browser can still be utilized to view data from remote locations
. However, frequently accessed data can be downloaded
to the local database on demand, and subsequently accessed
locally. Power users can also upload large volumes of spatial
data back to the remote server using this enhanced client.
We focused our efforts in two directions. We first aimed at
developing a client-server architecture with efficient caching
methods to balance local resources on one side and the significant
latency of the network connection on the other. The
low bandwidth of this connection is the primary concern in
both cases. The outcome of this research primarily addresses
the issues of our first type of usage (i.e., as a remote browser
application or an applet) for our browser and other similar
applications.
The second direction aims at helping users
that wish to manipulate large volumes of online data for
prolonged periods. We have developed a centralized peer-to
-peer approach to provide the users with the ability to
transfer large volumes of data (i.e., whole data sets to the
local database) more efficiently by better utilizing the distributed
network resources among active clients of a client-server
architecture. We call this architecture APPOINT -Approach
for Peer-to-Peer Offloading the INTernet. The
results of this research addresses primarily the issues of the
second type of usage for our SAND Internet Browser (i.e.,
as a stand-alone application).
The rest of this paper is organized as follows. Section 2 describes
our client-server approach in more detail. Section 3
focuses on APPOINT, our peer-to-peer approach. Section 4
discusses our work in relation to existing work. Section 5
outlines a sample SAND Internet Browser scenario for both
of our remote access approaches. Section 6 contains concluding
remarks as well as future research directions.
THE CLIENT-SERVER APPROACH
Traditionally, Geographic Information Systems (GIS)
such as ArcInfo from ESRI [2] and many spatial databases
are designed to be stand-alone products.
The spatial
database is kept on the same computer or local area network
from where it is visualized and queried. This architecture
allows for instantaneous transfer of large amounts of data
between the spatial database and the visualization module
so that it is perfectly reasonable to use large-bandwidth protocols
for communication between them. There are however
many applications where a more distributed approach is desirable
. In these cases, the database is maintained in one location
while users need to work with it from possibly distant
sites over the network (e.g., the Internet). These connections
can be far slower and less reliable than local area networks
and thus it is desirable to limit the data flow between the
database (server) and the visualization unit (client) in order
to get a timely response from the system.
Our client-server approach (Figure 1) allows the actual
database engine to be run in a central location maintained
by spatial database experts, while end users acquire a Java-based
client component that provides them with a gateway
into the SAND spatial database engine.
Our client is more than a simple image viewer. Instead, it
operates on vector data allowing the client to execute many
operations such as zooming or locational queries locally. In
Figure 1: SAND Internet Browser -- Client-Server
architecture.
essence, a simple spatial database engine is run on the client.
This database keeps a copy of a subset of the whole database
whose full version is maintained on the server. This is a
concept similar to `caching'. In our case, the client acts as
a lightweight server in that given data, it evaluates queries
and provides the visualization module with objects to be
displayed. It initiates communication with the server only
in cases where it does not have enough data stored locally.
Since the locally run database is only updated when additional
or newer data is needed, our architecture allows the
system to minimize the network traffic between the client
and the server when executing the most common user-side
operations such as zooming and panning. In fact, as long
as the user explores one region at a time (i.e., he or she is
not panning all over the database), no additional data needs
to be retrieved after the initial population of the client-side
database.
This makes the system much more responsive
than the Web mapping services. Due to the complexity of
evaluating arbitrary queries (i.e., more complex queries than
window queries that are needed for database visualization),
we do not perform user-specified queries on the client. All
user queries are still evaluated on the server side and the
results are downloaded onto the client for display. However,
assuming that the queries are selective enough (i.e., there are
far fewer elements returned from the query than the number
of elements in the database), the response delay is usually
within reasonable limits.
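The client-side decision just described can be sketched as follows. This is only a simplification: the class names are ours, and the actual client keeps a local spatial database rather than a single bounding box of coverage.

# Simplified sketch of the caching behaviour described above.

def contains(outer, inner):
    """Axis-aligned rectangles given as (xmin, ymin, xmax, ymax)."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            outer[2] >= inner[2] and outer[3] >= inner[3])

class ClientCache:
    def __init__(self, fetch_from_server):
        self.covered = None                 # region already held locally
        self.fetch = fetch_from_server      # callback that contacts the server

    def window_query(self, window):
        if self.covered is not None and contains(self.covered, window):
            return "answered locally"       # zoom/pan within the cached region
        data = self.fetch(window)           # only now is the network touched
        # Simplification: coverage is tracked as the window just fetched; a
        # real implementation would merge regions (e.g. with a mask or quadtree).
        self.covered = window
        return data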
2.1
Client-Server Communication
As mentioned above, the SAND Internet Browser is a
client piece of the remotely accessible spatial database server
built around the SAND kernel. In order to communicate
with the server, whose application programming interface
(API) is a Tcl-based scripting language, a servlet specifically
designed to interface the SAND Internet Browser with the
SAND kernel is required on the server side. This servlet listens
on a given port of the server for incoming requests from
the client. It translates these requests into the SAND-Tcl
language. Next, it transmits these SAND-Tcl commands or
scripts to the SAND kernel. After results are provided by
the kernel, the servlet fetches and processes them, and then
sends those results back to the originating client.
Once the Java servlet is launched, it waits for a client to
initiate a connection. It handles both requests for the actual
client Java code (needed when the client is run as an applet)
and the SAND traffic. When the client piece is launched,
it connects back to the SAND servlet. From that point on, the communication
is driven by the client piece; the server only responds to
the client's queries. The client initiates a transaction by
sending a query.
The Java servlet parses the query and
creates a corresponding SAND-Tcl expression or script in
the SAND kernel's native format.
It is then sent to the
kernel for evaluation or execution. The kernel's response
naturally depends on the query and can be a boolean value,
a number or a string representing a value (e.g., a default
color) or, a whole tuple (e.g., in response to a nearest tuple
query). If a script was sent to the kernel (e.g., requesting
all the tuples matching some criteria), then an arbitrary
amount of data can be returned by the SAND server. In this
case, the data is first compressed before it is sent over the
network to the client. The data stream gets decompressed
at the client before the results are parsed.
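The request path just described can be summarized schematically. The fragment below is only a sketch: the real servlet is written in Java, the translate and kernel objects stand in for the SAND-Tcl translation and the kernel connection, and the framing byte for compressed streams is our own invention.

# Schematic request path: client request -> SAND-Tcl -> kernel -> (compressed)
# reply. translate() and kernel.eval() are placeholders, not real SAND APIs.
import zlib

def handle_request(raw_request, translate, kernel, send_to_client):
    script = translate(raw_request)           # recode the client query as SAND-Tcl
    result = kernel.eval(script)              # boolean, number, string, or tuples
    if isinstance(result, (list, tuple)):     # bulk tuple data: compress it first
        payload = zlib.compress(repr(result).encode())
        send_to_client(b"Z" + payload)        # client decompresses before parsing
    else:
        send_to_client(str(result).encode())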
Notice that if another spatial database were to be used
instead of the SAND kernel, then only a simple modification
to the servlet would need to be made in order for the
SAND Internet Browser to function properly. In particular
, the queries sent by the client would need to be recoded
into another query language which is native to this different
spatial database. The format of the protocol used for communication
between the servlet and the client is unaffected.
THE PEER-TO-PEER APPROACH
Many users may want to work on a complete spatial data
set for a prolonged period of time. In this case, making an
initial investment of downloading the whole data set may be
needed to guarantee a satisfactory session. Unfortunately,
spatial data tends to be large. A few download requests
to a large data set from a set of idle clients waiting to be
served can slow the server to a crawl. This is due to the fact
that the common client-server approach to transferring data
between the two ends of a connection assumes a designated
role for each one of the ends (i.e, some clients and a server).
We built APPOINT as a centralized peer-to-peer system
to demonstrate our approach for improving the common
client-server systems. A server still exists. There is a central
source for the data and a decision mechanism for the
service. The environment still functions as a client-server
environment under many circumstances. Yet, unlike many
common client-server environments, APPOINT maintains
more information about the clients. This includes inventories
of what each client downloads, their availabilities, etc.
When the client-server service starts to perform poorly or
a request for a data item comes from a client with a poor
connection to the server, APPOINT can start appointing
appropriate active clients of the system to serve on behalf
of the server, i.e., clients who have already volunteered their
services and can take on the role of peers (hence, moving
from a client-server scheme to a peer-to-peer scheme). The
directory service for the active clients is still performed by
the server but the server no longer serves all of the requests.
In this scheme, clients are used mainly for the purpose of
sharing their networking resources rather than introducing
new content and hence they help offload the server and scale
up the service. The existence of a server is simpler in terms
of management of dynamic peers in comparison to pure peer-to
-peer approaches, where each peer that needs to make a decision must flood messages to discover who is still active in the system. The server is also the main
source of data and under regular circumstances it may not
forward the service.
Data is assumed to be formed of files. A single file forms
the atomic means of communication. APPOINT optimizes
requests with respect to these atomic requests. Frequently
accessed data sets are replicated as a byproduct of having
been requested by a large number of users. This opens up
the potential for bypassing the server in future downloads for
the data by other users as there are now many new points of
access to it. Bypassing the server is useful when the server's
bandwidth is limited.
Existence of a server assures that
unpopular data is also available at all times. The service
depends on the availability of the server. The server is now
more resilient to congestion as the service is more scalable.
Backups and other maintenance activities are already being
performed on the server and hence no extra administrative
effort is needed for the dynamic peers. If a peer goes
down, no extra precautions are taken. In fact, APPOINT
does not require any additional resources from an already
existing client-server environment but, instead, expands its
capability. The peers simply get on to or get off from a table
on the server.
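The peer table mentioned above can be pictured with a minimal sketch; the class and method names are illustrative and do not correspond to APPOINT's actual implementation.

# Minimal sketch of the server-side table of active clients.

class PeerTable:
    def __init__(self):
        self.peers = {}                      # peer address -> set of files held

    def join(self, address):
        self.peers.setdefault(address, set())

    def leave(self, address):
        self.peers.pop(address, None)        # no extra precautions if a peer drops

    def record_download(self, address, filename):
        if address in self.peers:
            self.peers[address].add(filename)

    def holders(self, filename):
        """Active clients that could serve filename on the server's behalf."""
        return [a for a, files in self.peers.items() if filename in files]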
Uploading data is achieved in a similar manner as downloading
data. For uploads, the active clients can again be
utilized. Users can upload their data to a set of peers other
than the server if the server is busy or resides in a distant
location. Eventually the data is propagated to the server.
All of the operations are performed in a transparent fashion
to the clients. Upon initial connection to the server,
they can be queried as to whether or not they want to share
their idle networking time and disk space. The rest of the
operations follow transparently after the initial contact. APPOINT
works on the application layer but not on lower layers
. This achieves platform independence and easy deploy-ment
of the system. APPOINT is not a replacement but
an addition to the current client-server architectures. We
developed a library of function calls that when placed in a
client-server architecture starts the service. We are developing
advanced peer selection schemes that incorporate the
location of active clients, bandwidth among active clients,
data-size to be transferred, load on active clients, and availability
of active clients to form a complete means of selecting
the best clients that can become efficient alternatives to the
server.
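One plausible scoring rule over those factors, emphatically not the paper's own scheme (which is only outlined at a high level), might look like this:

# A hypothetical peer-ranking rule over the factors listed above; the actual
# APPOINT selection scheme is still under development.

def peer_score(peer, data_size):
    """Higher is better; all fields are assumed to be tracked by the server."""
    transfer_time = data_size / max(peer["bandwidth_to_requester"], 1e-6)
    return (peer["availability"]              # fraction of time the peer is online
            * (1.0 - peer["load"])            # prefer lightly loaded peers
            / (1.0 + transfer_time))          # prefer fast paths to the requester

def appoint_peers(candidates, filename, data_size, k=3):
    """Pick up to k peers that hold the file, best-scoring first."""
    holders = [p for p in candidates if filename in p["files"]]
    return sorted(holders, key=lambda p: peer_score(p, data_size), reverse=True)[:k]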
With APPOINT we are defining a very simple API that
could be used within an existing client-server system easily.
Instead of denial of service or a slow connection, this API
can be utilized to forward the service appropriately. The
API for the server side is:
start(serverPortNo)
makeFileAvailable(file,location,boolean)
callback receivedFile(file,location)
callback errorReceivingFile(file,location,error)
stop()
Similarly the API for the client side is:
start(clientPortNo,serverPortNo,serverAddress)
makeFileAvailable(file,location,boolean)
receiveFile(file,location)
sendFile(file,location)
stop()
The server, after starting the APPOINT service, can make
all of the data files available to the clients by using the
makeFileAvailable method.
This will enable APPOINT
to treat the server as one of the peers.
The two callback methods of the server are invoked when
a file is received from a client, or when an error is encountered
while receiving a file from a client.
Figure 2: The localization operation in APPOINT.
APPOINT guarantees that at least one of the callbacks will be called so
that the user (who may not be online anymore) can always
be notified (i.e., via email).
Clients localizing large data
files can make these files available to the public by using the
makeFileAvailable method on the client side.
For example, in our SAND Internet Browser, we have the
localization of spatial data as a function that can be chosen
from our menus. This functionality enables users to download
data sets completely to their local disks before starting
their queries or analysis. In our implementation, we have
calls to the APPOINT service both on the client and the
server sides as mentioned above. Hence, when a localization
request comes to the SAND Internet Browser, the browser
leaves the decisions to optimally find and localize a data set
to the APPOINT service. Our server also makes its data
files available over APPOINT. The mechanism for the localization
operation is shown with more details from the
APPOINT protocols in Figure 2. The upload operation is
performed in a similar fashion.
RELATED WORK
There has been a substantial amount of research on remote
access to spatial data.
One specific approach has
been adopted by numerous Web-based mapping services
(MapQuest [5], MapsOnUs [6], etc.). The goal in this approach
is to enable remote users, typically only equipped
with standard Web browsers, to access the company's spatial
database server and retrieve information in the form of
pictorial maps from them. The solution presented by most
of these vendors is based on performing all the calculations
on the server side and transferring only bitmaps that represent
results of user queries and commands. Although the
advantage of this solution is the minimization of both hardware
and software resources on the client site, the resulting
product has severe limitations in terms of available functionality
and response time (each user action results in a new
bitmap being transferred to the client).
Work described in [9] examines a client-server architecture
for viewing large images that operates over a low-bandwidth
network connection.
It presents a technique
based on wavelet transformations that allows the minimization
of the amount of data needed to be transferred over
the network between the server and the client. In this case,
while the server holds the full representation of the large image
, only a limited amount of data needs to be transferred
to the client to enable it to display a currently requested
view into the image. On the client side, the image is reconstructed
into a pyramid representation to speed up zooming
and panning operations. Both the client and the server keep
a common mask that indicates what parts of the image are
available on the client and what needs to be requested. This
also allows dropping unnecessary parts of the image from the
main memory on the server.
Other related work has been reported in [16] where a
client-server architecture is described that is designed to provide
end users with access to a server. It is assumed that
this data server manages vast databases that are impractical
to be stored on individual clients. This work blends raster
data management (stored in pyramids [22]) with vector data
stored in quadtrees [19, 20].
For our peer-to-peer transfer approach (APPOINT), Napster
is the forefather, where a directory service is centralized
on a server and users exchange music files that they have
stored on their local disks. Our application domain, where
the data is already freely available to the public, forms a
prime candidate for such a peer-to-peer approach. Gnutella
is a pure (decentralized) peer-to-peer file exchange system.
Unfortunately, it suffers from scalability issues, i.e., floods of
messages between peers in order to map connectivity in the
system are required. Other systems followed these popular
systems, each addressing a different flavor of sharing over
the Internet. Many peer-to-peer storage systems have also
recently emerged. PAST [18], Eternity Service [7], CFS [10],
and OceanStore [15] are some peer-to-peer storage systems.
Some of these systems have focused on anonymity while others
have focused on persistence of storage. Also, other approaches
, like SETI@Home [21], made other resources, such
as idle CPUs, work together over the Internet to solve large
scale computational problems. Our goal is different than
these approaches. With APPOINT, we want to improve existing
client-server systems in terms of performance by using
idle networking resources among active clients. Hence, other
issues like anonymity, decentralization, and persistence of
storage were less important in our decisions. Confirming
the authenticity of the indirectly delivered data sets is not
yet addressed with APPOINT. We want to expand our research
, in the future, to address this issue.
From our perspective, although APPOINT employs some
of the techniques used in peer-to-peer systems, it is also
closely related to current Web caching architectures. Squirrel
[13] forms the middle ground. It creates a pure peer-to-peer
collaborative Web cache among the Web browser caches
of the machines in a local-area network. Except for this recent
peer-to-peer approach, Web caching is mostly a well-studied
topic in the realm of server/proxy level caching [8,
11, 14, 17]. Collaborative Web caching systems, the most
relevant of these for our research, focus on creating either
a hierarchical, hash-based, central directory-based, or
multicast-based caching schemes. We do not compete with
these approaches.
In fact, APPOINT can work in tandem
with collaborative Web caching if they are deployed
together. We try to address the situation where a request
arrives at a server, meaning all the caches report a miss.
Hence, the point where the server is reached can be used to
take a central decision but then the actual service request
can be forwarded to a set of active clients, i.e., the download
and upload operations.
Cache misses are especially
common in the type of large data-based services on which
we are working. Most of the Web caching schemes that are
in use today employ a replacement policy that gives a priority
to replacing the largest sized items over smaller-sized
ones. Hence, these policies would lead to the immediate replacement
of our relatively large data files even though they
may be used frequently. In addition, in our case, the user
community that accesses a certain data file may also be very
dispersed from a network point of view and thus cannot take
advantage of any of the caching schemes. Finally, none of
the Web caching methods address the symmetric issue of
large data uploads.
A SAMPLE APPLICATION
FedStats [1] is an online source that enables ordinary citizens
to access official statistics of numerous federal agencies
without knowing in advance which agency produced them.
We are using a FedStats data set as a testbed for our work.
Our goal is to provide more power to the users of FedStats
by utilizing the SAND Internet Browser. As an example,
we looked at two data files corresponding to Environmental
Protection Agency (EPA)-regulated facilities that have
chlorine and arsenic, respectively. For each file, we had the
following information available: EPA-ID, name, street, city,
state, zip code, latitude, longitude, followed by flags to indicate
if that facility is in the following EPA programs: Hazardous
Waste, Wastewater Discharge, Air Emissions, Abandoned
Toxic Waste Dump, and Active Toxic Release.
We put this data into a SAND relation where the spatial
attribute `location' corresponds to the latitude and longitude
. Some queries that can be handled with our system on
this data include:
1. Find all EPA-regulated facilities that have arsenic and
participate in the Air Emissions program, and:
(a) Lie in Georgia to Illinois, alphabetically.
(b) Lie within Arkansas or 30 miles within its border.
(c) Lie within 30 miles of the border of Arkansas (i.e.,
both sides of the border).
2. For each EPA-regulated facility that has arsenic, find
all EPA-regulated facilities that have chlorine and:
(a) That are closer to it than to any other EPA-regulated
facility that has arsenic.
(b) That participate in the Air Emissions program
and are closer to it than to any other EPA-regulated
facility which has arsenic. In order to
avoid reporting a particular facility more than
once, we use our `group by EPA-ID' mechanism.
Figure 3 illustrates the output of an example query that
finds all arsenic sites within a given distance of the border of
Arkansas. The sites are obtained in an incremental manner
with respect to a given point. This ordering is shown by
using different color shades.
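A rough sketch of such a query is given below; it is illustrative only, using a flat-earth distance and approximating the state border by a list of sampled boundary points, whereas the real evaluation happens inside the SAND kernel with proper spatial operators.

# Illustrative only: within-distance filtering plus incremental ranking by
# distance from a reference point.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def sites_near_border_ranked(sites, border_points, max_dist, ref_point):
    """sites: [(site_id, (x, y)), ...]; border_points: sampled boundary."""
    hits = [(sid, loc) for sid, loc in sites
            if min(dist(loc, b) for b in border_points) <= max_dist]
    return sorted(hits, key=lambda s: dist(s[1], ref_point))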
With this example data, it is possible to work with the
SAND Internet Browser online as an applet (connecting to
a remote server) or after localizing the data and then opening
it locally. In the first case, for each action taken, the
client-server architecture will decide what to ask for from
the server. In the latter case, the browser will use the peer-to
-peer APPOINT architecture for first localizing the data.
CONCLUDING REMARKS
An overview of our efforts in providing remote access to
large spatial data has been given. We have outlined our
approaches and introduced their individual elements. Our
client-server approach improves the system performance by
using efficient caching methods when a remote server is accessed
from thin-clients. APPOINT forms an alternative approach
that improves performance under an existing client-server
system by using idle client resources when individual
users want to work on a data set for longer periods of time
using their client computers.
For the future, we envision development of new efficient algorithms
that will support large online data transfers within
our peer-to-peer approach using multiple peers simultaneously. We assume that a peer (client) can become unavailable at any time and hence provisions need to be in place
to handle such a situation. To address this, we will augment
our methods to include efficient dynamic updates. Upon
completion of this step of our work, we also plan to run
comprehensive performance studies on our methods.
Another issue is how to access data from different sources
in different formats. In order to access multiple data sources
in real time, it is desirable to look for a mechanism that
would support data exchange by design.
The XML protocol
[3] has emerged to become virtually a standard for
describing and communicating arbitrary data. GML [4] is
an XML variant that is becoming increasingly popular for
exchange of geographical data. We are currently working
on making SAND XML-compatible so that the user can instantly
retrieve spatial data provided by various agencies in
the GML format via their Web services and then explore,
query, or process this data further within the SAND framework
. This will turn the SAND system into a universal tool
for accessing any spatial data set as it will be deployable on
most platforms, work efficiently given large amounts of data,
be able to tap any GML-enabled data source, and provide
an easy to use graphical user interface. This will also convert
the SAND system from a research-oriented prototype
into a product that could be used by end users for accessing
, viewing, and analyzing their data efficiently and with
minimum effort.
REFERENCES
[1] Fedstats: The gateway to statistics from over 100 U.S.
federal agencies. http://www.fedstats.gov/, 2001.
[2] Arcinfo: Scalable system of software for geographic
data creation, management, integration, analysis, and
dissemination. http://www.esri.com/software/
arcgis/arcinfo/index.html, 2002.
[3] Extensible markup language (xml).
http://www.w3.org/XML/, 2002.
[4] Geography markup language (gml) 2.0.
http://opengis.net/gml/01-029/GML2.html, 2002.
[5] Mapquest: Consumer-focused interactive mapping site
on the web. http://www.mapquest.com, 2002.
[6] Mapsonus: Suite of online geographic services.
http://www.mapsonus.com, 2002.
[7] R. Anderson. The Eternity Service. In Proceedings of
the PRAGOCRYPT'96, pages 242-252, Prague, Czech
Republic, September 1996.
[8] L. Breslau, P. Cao, L. Fan, G. Phillips, and
S. Shenker. Web caching and Zipf-like distributions:
Figure 3: Sample output from the SAND Internet Browser -- Large dark dots indicate the result of a query
that looks for all arsenic sites within a given distance from Arkansas. Different color shades are used to
indicate ranking order by the distance from a given point.
Evidence and implications. In Proceedings of the IEEE
Infocom'99, pages 126-134, New York, NY, March
1999.
[9] E. Chang, C. Yap, and T. Yen. Realtime visualization
of large images over a thinwire. In R. Yagel and
H. Hagen, editors, Proceedings IEEE Visualization'97
(Late Breaking Hot Topics), pages 45-48, Phoenix,
AZ, October 1997.
[10] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and
I. Stoica. Wide-area cooperative storage with CFS. In
Proceedings of the ACM SOSP'01, pages 202-215,
Banff, AL, October 2001.
[11] A. Dingle and T. Partl. Web cache coherence.
Computer Networks and ISDN Systems,
28(7-11):907-920, May 1996.
[12] C. Esperanca and H. Samet. Experience with
SAND/Tcl: a scripting tool for spatial databases.
Journal of Visual Languages and Computing,
13(2):229-255, April 2002.
[13] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A
decentralized peer-to-peer Web cache. Rice
University/Microsoft Research, submitted for
publication, 2002.
[14] D. Karger, A. Sherman, A. Berkheimer, B. Bogstad,
R. Dhanidina, K. Iwamoto, B. Kim, L. Matkins, and
Y. Yerushalmi. Web caching with consistent hashing.
Computer Networks, 31(11-16):1203-1213, May 1999.
[15] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski,
P. Eaton, D. Geels, R. Gummadi, S. Rhea,
H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao.
OceanStore: An architecture for global-scale persistent
store. In Proceedings of the ACM ASPLOS'00, pages
190-201, Cambridge, MA, November 2000.
[16] M. Potmesil. Maps alive: viewing geospatial
information on the WWW. Computer Networks and
ISDN Systems, 29(8-13):1327-1342, September 1997.
Also Hyper Proceedings of the 6th International World
Wide Web Conference, Santa Clara, CA, April 1997.
[17] M. Rabinovich, J. Chase, and S. Gadde. Not all hits
are created equal: Cooperative proxy caching over a
wide-area network. Computer Networks and ISDN
Systems, 30(22-23):2253-2259, November 1998.
[18] A. Rowstron and P. Druschel. Storage management
and caching in PAST, a large-scale, persistent
peer-to-peer storage utility. In Proceedings of the ACM
SOSP'01, pages 160-173, Banff, AL, October 2001.
[19] H. Samet. Applications of Spatial Data Structures:
Computer Graphics, Image Processing, and GIS.
Addison-Wesley, Reading, MA, 1990.
[20] H. Samet. The Design and Analysis of Spatial Data
Structures. Addison-Wesley, Reading, MA, 1990.
[21] SETI@Home. http://setiathome.ssl.berkeley.edu/,
2001.
[22] L. J. Williams. Pyramidal parametrics. Computer
Graphics, 17(3):1-11, July 1983. Also Proceedings of
the SIGGRAPH'83 Conference, Detroit, July 1983.
| GIS;Client/server;Peer-to-peer;Internet |
168 | ResearchExplorer: Gaining Insights through Exploration in Multimedia Scientific Data | An increasing amount of heterogeneous information about scientific research is becoming available on-line. This potentially allows users to explore the information from multiple perspectives and derive insights and not just raw data about a topic of interest. However, most current scientific information search systems lag behind this trend; being text-based, they are fundamentally incapable of dealing with multimedia data. An even more important limitation is that their information environments are information-centric and therefore are not suitable if insights are desired. Towards this goal, in this paper, we describe the design of a system, called ResearchExplorer, which facilitates exploring multimedia scientific data to gain insights. This is accomplished by providing an interaction environment for insights where users can explore multimedia scientific information sources. The multimedia information is united around the notion of research event and can be accessed in a unified way. Experiments are conducted to show how ResearchExplorer works and how it cardinally differs from other search systems. | INTRODUCTION
Current web search engines and bibliography systems are
information-centric. Before searching for information, users need
to construct a query, typically by using some keywords to
represent the information they want. After the query is issued, the
system retrieves all information relevant to the query. The results
from such queries are usually presented to users by listing all
relevant hits. Thus, with these information-centric systems, users
can find information such as a person's homepage, a paper, a
research project's web page, and so on. However, when users
want to know the following types of things, they are unable to
find answers easily with current search systems:
1) Evolution of a field
2) People working in the field
3) A person's contribution to the field
4) Classical papers (or readings) in the field
5) Conferences/journals in the field
6) How the research of a person or an organization (group,
dept, university, etc) has evolved.
The reasons why current information-centric search systems have
difficulty helping users find answers to the questions above lie in the limitations of their information environments.
First, some issues result from their data modeling. For example, to
answer the question of "evolution of a field", the most important
information components, which are time and location, need to be
captured and appropriately presented or utilized. However, in
typical bibliography systems such information is rigidly utilized
(if at all available) in the time-stamping sense.
Second, many important issues arise due to the presentation
methods utilized by such systems. For example, even though users
can find all papers of a person with some systems, it is not easy
for users to observe the trend if the results are just listed
sequentially. As an alternative, presenting results in a visual form
can make trend easier to identify.
Third, some of the questions listed above cannot be answered directly by the system because the answers depend on the individual person. For example, different users will have different judgments
on a researcher's contribution to a field. To form their own
thoughts, users may need to investigate and compare several
factors many times. In this case, it is too tedious if each query is a
new query. Thus, it is necessary that the system can maintain
query and user states and allow users to refine queries
dynamically. In other words, the user can not only query but also
explore information.
For this study, we propose a bibliography system with novel
interaction environment that aids not just in syntactic query
retrieval but also aids in developing insights. The goal of this
system is to provide users with an interaction environment where
information is modeled, accessed, and presented in such a way
that users can gain insights easily through exploration.
Specifically, in the interaction environment, scientific information
is modeled around the notion of a research event, which brings
together all semantically related information regardless of the
media (text, image, or video) through which it is expressed. Thus,
when users explore the information space, they can view
research in multiple media formats. Further, the interaction
environment presents information using multidimensional views,
which include temporal and spatial views. At the same time, the
interaction environment shows information of other attributes of
research, like category and people information.
In summary, the contribution of this work is to propose a novel
interaction environment for insights. Although the system is
focused on scientific information, we believe the techniques
developed in this work are applicable to other applications and
can work as a framework guiding design of interaction
environments for insights. The paper is structured as follows. We
begin with an introduction of interaction environment for insights.
Section 3 describes the system architecture. Section 4 explains
data modeling of the interaction environment. Section 5 presents
how the interaction environment is implemented. Section 6
discusses experiments and results. Section 7 gives a review of
related work. Section 8 concludes.
INTERACTION ENVIRONMENT FOR INSIGHTS
Our goal in designing the system is to provide an interaction
environment for users to explore multimedia scientific data to
gain insights into research. Insight is commonly understood as
follows.
Insight: the clear (and often sudden) understanding of a complex
situation [21].
From the definition, we can see insight is different from
information. If insight is gained, people should be able to
understand the inner nature of things. To illustrate their
difference, we refer the reader to Figure 1. In the figure, the left part
shows two columns of numbers. What these numbers convey to
people is just information. It is very difficult for people to
understand the relationship between numbers in these two
columns by looking at numbers only. But if we show these
numbers as a chart, as on the right-hand side, people can easily tell and understand that the two columns have a linear relationship. That is
the insight. In this case, people gain insight by understanding
relationship, which is visualized by a certain technique.
In the context of research, insights should include clear
understanding of different situations. Examples of these situations
are a research field, a person, an organization, and a specific
research event which will be defined later.
2.2 Key Characteristics of Interaction
Environment for Insights
An interaction environment for insights is an environment that
helps users to gain insights through exploration. It consists of a
database to store data and a user interface to explore data. Such an
environment has the following key characteristics.
1) Database to store information. As described in section 1,
spatio-temporal characteristics of information are critical to
present the evolution of a situation. Clearly understanding a
situation often requires understanding how the situation
evolves. Therefore, spatio-temporal aspects of information
should be captured in data modeling. In addition, the data
modeling should be able to unify multimedia information.
Multimedia enables users to observe things using different
senses. Some media can help people to understand quickly.
Figure 2 shows such an example. By looking at the text at the
top, people may not be able to understand what the paper
"Content Based Image Synthesis" talks about. But with the
help of the images below, people can get an idea quickly
what the paper is about. Further, multimedia provides users
opportunity to view things from different perspectives. This
is especially important when clearly understanding a situation requires examining it from multiple angles.
2) User interface. As people gain insights by exploration, they
may need to check into a situation repeatedly and from
different viewpoints. Thus, interaction between a user and
the environment becomes very important. The design of user
interface should take this into consideration. We believe the
key features of UI are as follows.
a. The UI should support exploration of the spatial and
temporal characteristics of information.
b. The UI should support direct interactions between the
users and the information. This requires the UI to have
two characteristics: First, the UI should have the same
query and presentation space. In other words, a window
in the UI can not only be used to show information but
also be used to specify queries. For example, time and
location windows can show temporal and spatial
information. At the same time, users can issue temporal
and spatial queries in time and location windows. To
specify a query, the operation should be simple and
direct. The other characteristic is the reflective nature of
the UI. This means that once information in a window is
changed, all other windows will be updated
automatically. This helps users to interact with the
environment directly and effectively.
Figure 1. Information vs. insight
c. The UI enables users to issue dynamic queries. In some
current interaction environments, users are constrained
in forming queries. For example, users can only
generate a temporal query with one time interval. In an
interaction environment for insights, users should be
able to form a query with multiple choices. This
provides users more flexibility to look into a situation of
interest.
d. The UI maintains the query state. It should know which
query's results are used in the search condition of
another query and which query is based on another
query's results. This helps users not only to be aware of
context but also to form complex queries.
e. The UI should have zoom-in/zoom-out functionality
that allows examining the information at different
resolutions. When a large volume of data is retrieved,
there is a readability issue. To address this issue,
zoom-in/zoom-out functionality is needed.
f. Different visualization techniques need to be used. As
shown in figure 1, visualization techniques help users to
understand relationships and gain insights. However,
different relationships need different visualization
techniques. For example, social relationships have a
network structure, while temporal relationships are two
dimensional. Visualizing these two types of
relationships effectively requires different techniques.
These characteristics will guide the interaction environment
design of the system we are discussing.
SYSTEM ARCHITECTURE
Figure 3 shows the high level architecture of ResearchExplorer.
There are three main components: Event Collector, Event
Database and Interaction Environment. One of the functions of
Event Collector is to gather data from different sources. Then it
parses and assimilates information around the notion of research
event. Finally, it sends these data to Event Database. Event
Database is a database of events. It stores all information around
events. ResearchExplorer uses a natural XML database for Event
Database. The reasons will be explained in next section. In this
database, all information about a research event is stored as an
XML file. The schema will be defined in section 4.2. Interaction
Environment consists of User Interface and Searcher. Through the
UI, users form a query. The query is then converted into XPath
format by the Searcher and sent to the Event Database. After the
results are retrieved from the Event Database, the Searcher passes
them back to the User Interface to be presented to users.
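As a rough illustration of this query conversion, the following C sketch (not the system's actual code) formats an XPath expression for a time-interval query restricted to one category; the element names RE, T, CS and C are taken from the schema abbreviations in Figure 5 and are assumptions about the stored XML.
#include <stdio.h>

/* Illustrative sketch only: build an XPath expression for research events
   whose Time lies in [from_year, to_year] and whose category list contains
   the given category.  Element names (RE, T, CS, C) follow the abbreviations
   of Figure 5 and are assumptions, not ResearchExplorer's real schema. */
static void build_xpath(char *buf, size_t len,
                        int from_year, int to_year, const char *category)
{
    snprintf(buf, len, "/RE[T >= %d and T <= %d and CS/C = '%s']",
             from_year, to_year, category);
}

int main(void)
{
    char xpath[256];
    build_xpath(xpath, sizeof xpath, 1989, 2004, "Artificial Intelligence");
    printf("%s\n", xpath);   /* the Searcher would send such a query to the database */
    return 0;
}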
In this paper, our focus is not on how to collect data or unify
multimedia information by event. Interested readers can refer to
[5][6][14][15][18] for information gathering and [17] for
multimedia information assimilation. What we focus on is the
design of the Interaction Environment based on research events.
EVENT DATABASE
As described above, the interaction environment for insights
needs data modeling which can capture temporal and spatial
characteristics, and unify multimedia. A recent work in [17] has
proposed a unified multimedia data model that is capable of
describing spatial-temporal characteristics of information. This
model is built on the notion of events. Its efficacy has been
demonstrated in different domains including modeling of
multimedia information related to meetings [17] and personal
multimedia information management [16].
Figure 3. ResearchExplorer architecture.
Figure 2. Multimedia helps understanding. (Paper Title: Content Based Image Synthesis)
We base our current
research on these ideas and extend them further to the specific
domain of scientific information management. In [17], an event is
defined as follows.
Event: An event is an observed physical reality parameterized by
space and time. The observations describing the event are
defined by the nature or physics of the observable, the
observation model, and the observer.
This definition applies to events in general. In order to be
concrete in the research domain, an event definition specific to
research is necessary. Therefore, based on their definition, we
define an event in the research domain, called a research
event, as follows.
Research event: A research event is a set of semantically
correlated events within research domain, parameterized by time,
location, participant, and content.
Note that semantics is contextual. It depends on many factors like
time, location, people, etc. Thus, a research event is flexible. For
instance, it can be a research paper, a thesis, a technical report, a
book, a patent, a presentation, an image, a video, or a project
combining some or all of the aforementioned. Semantics also depends
on the domain level. It is generated differently at different domain
levels, even for the same event, because different
aspects of the event are emphasized at different domain levels.
Thus, a research event could be part of another one. For example,
a person giving a talk is an event by itself. At the same time, it is part
of a seminar event as well.
The definition of a research event provides us with the central
characteristics to meet the requirements of our application. By the
definition, a research event is parameterized by time and location,
so it can capture its own dynamics. Thus, users can easily
observe how events evolve, which is helpful for insight generation.
Relationships between events can be shown in terms of the attributes
of an event. This enables users to observe events in a larger
context and gain a deeper understanding. Further, all multimedia data
is unified around the notion of a research event. Thus, a research
event becomes an access point to multimedia data.
4.2 Semi-Structured Data
Multimedia data about scientific research does not follow a rigid
structure. For example, research papers have references while
images do not. Even for references, the number of citations
varies across papers. At the same time, these data do have
some common information components such as time and location
information. This semi-structured characteristic makes methods
for storing structured data, such as relational databases, unsuitable.
Techniques for storing semi-structured data are appropriate
instead.
XML is one of the solutions for modeling semi-structured data. It has
become very popular for introducing semantics into text, and it has
rapidly replaced automatic approaches that deduce semantics from
the data in text files. This approach of explicitly introducing tags to
help processes compute semantics has been very successful so far
[13]. Based on this, we choose XML to store research event
information. Figure 4 shows the schema of XML files for research
events.
4.3 Description of the Data Model
Based on the definition of research event, four fundamental
information components are needed to describe a research event.
These components are: when the research event happens, where
the research event occurs, who participates in the research event,
and what the research event is about. Thus, a data model as
follows is proposed to represent a research event. As shown in
Figure 5, a research event is characterized by the following
attributes: Name, Time, Participant, Category, Mediasource,
Subevents, and Free Attributes. Here Name refers to the name of a
research event, Time refers to the times when the research is done,
Participant refers to people who do the research and their
affiliations, Category refers to the ACM Classification of the
research, which can belong to several categories, Mediasource
contains media type and source (URL) of the media covering the
research event, Subevent refers to a part of the research event and
has the same structure as a research event, and Free Attributes are
used to capture media-specific characteristics when needed, for
example, the references of a paper.
As described above, the data model encapsulates all information
components of a research event by one or more attributes. The when
component is captured by Time, where is captured by Participant
Affiliation, who is captured by Participant, and what is captured
by Name and Category. Multimedia supporting the research
event is brought in by the Mediasource attribute.
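For illustration, a minimal sketch of this data model as C types is given below; the field names mirror the attributes just described and are assumptions made for the example, not definitions taken from ResearchExplorer's implementation.
#include <stddef.h>

/* Sketch of the research-event data model; field names follow the
   attributes described above and are illustrative assumptions only. */
typedef struct Participant {
    char name[64];
    char affiliation[64];
    double latitude, longitude;      /* "where" comes from the affiliation */
} Participant;

typedef struct MediaSource {
    char media_type[16];             /* e.g. "paper", "image", "video"     */
    char url[256];
} MediaSource;

typedef struct ResearchEvent {
    char name[128];                  /* what: name of the research event   */
    int start_year, end_year;        /* when: time span of the research    */
    Participant *participants;       /* who                                */
    size_t n_participants;
    char categories[4][64];          /* what: ACM classification entries   */
    size_t n_categories;
    MediaSource *media;              /* multimedia covering the event      */
    size_t n_media;
    struct ResearchEvent *subevents; /* event-subevent structure           */
    size_t n_subevents;
    char free_attributes[256];       /* media-specific extras, e.g. refs   */
} ResearchEvent;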
4.4 XML Database
In ResearchExplorer, Berkeley DB XML [3] is chosen for Event
Database. Berkeley DB XML is an application-specific native
XML data manager. It is supplied as a library that links directly
into the application's address space. Berkeley DB XML provides
storage and retrieval for native XML data and semi-structured
data. So it can meet the requirements of Event Database.
Figure 4. Schema of research event.
USER INTERFACE
In ResearchExplorer, a unified presentation-browsing-querying
interface is used. Research events are shown in multidimensional
views. As multimedia data is organized around research event, the
data is presented by fundamental components of research event,
i.e., When, Where, Who, and What. Figure 7(a) shows a
screenshot of the user interface we developed. There are five
windows in total plus a text box. At the top are timeline and map
windows showing time and location information of research
events. In the lower right, there are two windows showing people
and category information. The window in the lower left is
different from those windows aforementioned. It is used to show
multimedia data of research events. Once a research event is
selected, multimedia data like papers, images, and videos are
presented in this window and they are presented according to the
event-subevent structure. Clicking on a specific media instance
label will lead users to the original source of the media and trigger
appropriate application for that particular kind of media. So users
can view original media as they want. The text box is designed for
keyword-based searching. It enables users to search information
in the traditional way.
5.2 Research Representation
Time and location are the primary parameters based on which
dynamics is captured. Therefore, they are depicted as the primary
exploration dimensions. The way to represent research events in
these windows is critical. In ResearchExplorer, two different
representation methods are used. We borrowed the idea of
representing research events from [16] in the timeline window,
where research events are represented by rectangles. A rectangle
spans the duration of a research event. Within each rectangle,
there may be smaller rectangles. These smaller ones represent
subevents of the research event. All rectangles for one research
event are nested according to the event-subevent structure. The
media presenting the research event are represented by icons in
the rectangles. Icons are chosen intuitively for users to recognize
easily. They are specific to each medium. Icons belonging to the same
research event are grouped together in chronological order. The
fidelity of such a representation is maintained during temporal
zoom-in/zoom-out operations as described later. The recursive
nature of the representation is used to capture aggregate
relationships where a research event may comprise other
events. The primary purpose of such a representation is to provide
users with a structural and temporal view of research events. In
the map window, research events are represented by "dot maps"
[19]. Each dot in the map shows a research event at the location
of the dot. By means of dot maps, the precision of location
information is high, and the variable density of dots conveys
information about the amount of research events at a location.
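The nested, recursive layout of event rectangles can be sketched as follows; the Event type and the textual draw_rect stand-in are illustrative assumptions, not the actual timeline component.
#include <stdio.h>

/* Sketch of the recursive timeline layout: each event spans [start, end]
   and is drawn as a rectangle that nests its subevents. */
typedef struct Event {
    const char *name;
    int start, end;                  /* e.g. years */
    const struct Event *sub;         /* subevents  */
    int n_sub;
} Event;

static void draw_rect(const char *name, int start, int end, int depth)
{
    printf("%*s[%d-%d] %s\n", depth * 2, "", start, end, name);
}

static void draw_event(const Event *e, int depth)
{
    draw_rect(e->name, e->start, e->end, depth);
    for (int i = 0; i < e->n_sub; i++)      /* nest subevent rectangles */
        draw_event(&e->sub[i], depth + 1);
}

int main(void)
{
    Event talks[] = { {"workshop talk", 2002, 2002, NULL, 0} };
    Event project = {"example project", 2001, 2004, talks, 1};
    draw_event(&project, 0);
    return 0;
}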
5.3 What-You-See-Is-What-You-Get
(WYSIWYG)
In the system, WYSIWYG search is employed: the query and
presentation spaces are the same. As described above, windows
serve as a way to display information and relationships of research
events. These windows, except the details window, serve another
function in specifying queries as well. Contrary to many search
interfaces where users specify several properties and then press a
button to issue a query, users can issue a query by a simple
operation in this user interface. For example, users can launch a
query by specifying a time interval, a location region, a person's
name, or a research category. Figure 6 shows examples of these
methods. In ResearchExplorer, exploration is based on sessions.
Each session consists of one or more queries. A query is either a
new session query or a refine query. A new session query is the
first query of each session. All other queries in a session are refine
queries. For new session query, the system retrieves results from
the database. If it is a refine query, the query will not be sent to
the database. It will be executed based on the results set of the
new session query of that session. With this method, users can
choose a broad set of results first, and then observe any subset of
the results of interest. This is very important because knowledge is
accumulated as users manipulate the results by choosing different
perspectives. Once a refine query is posed, results of the query
will be highlighted in all windows.
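A minimal sketch of this session logic is shown below; the Result type, the cached array and the two query functions are assumptions for illustration rather than the real Searcher interface.
#include <stdio.h>
#include <string.h>

/* Sketch of the session-based query flow: a new-session query goes to the
   event database, while a refine query is evaluated against the cached
   results of the session's first query. */
typedef struct { int id; int year; char location[32]; } Result;

static Result session_cache[256];   /* results of the new-session query */
static int session_size = 0;

static void run_new_session_query(const char *xpath)
{
    /* placeholder for a database call; here we just fake two results */
    (void)xpath;
    session_size = 2;
    session_cache[0] = (Result){1, 1995, "CA"};
    session_cache[1] = (Result){2, 2001, "NY"};
}

static void run_refine_query(const char *location)
{
    /* refine queries never hit the database; they only filter (highlight)
       the cached result set of the current session */
    for (int i = 0; i < session_size; i++)
        if (strcmp(session_cache[i].location, location) == 0)
            printf("highlight event %d\n", session_cache[i].id);
}

int main(void)
{
    run_new_session_query("/RE[T >= 1989 and T <= 2004]");
    run_refine_query("CA");
    return 0;
}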
Figure 5. Graphical representation of a research event. (Legend: RE: Research Event;
N: Name; T: Time; PS: Participants; P: Participant; PN: Participant's Name;
FN: First Name; LN: Last Name; PA: Participant Affiliation; AN: Affiliation Name;
LA: Latitude; LO: Longitude; CS: Categories; C: Category; MS: Media Source;
MT: Media Type; S: Source; SS: Subevents; SE: Subevent; FA: Free Attribute)
Figure 6. Different query methods. (a) shows a query by
time. (b) shows a query by a person. (c) shows a query by
specifying a region of locations. (d) shows a query by
category.
5.4 Reflective UI
In designing the user interface, a multiple-window coordination
strategy is used. By means of this strategy, components of the user
interface are tightly coupled. The windows respond to user
activity in a unified manner such that user interaction in one
window is reflected instantly in other windows. For example,
when the user selects a research event in timeline window, this
research event will be highlighted in the map and other windows.
This cooperative visualization is effective in information
exploration as it maintains the context as the user interacts with
the data. Figure 7(a) shows an example where a research event is
selected and its information in other windows is highlighted.
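The multiple-window coordination strategy can be sketched as a simple callback broadcast, as below; the window names and functions are illustrative assumptions, not the actual implementation.
#include <stdio.h>

/* Sketch of the reflective UI: every window registers an update callback,
   and a selection made in any one window is broadcast so the others
   refresh immediately. */
typedef void (*UpdateFn)(const char *selected_event);

static UpdateFn windows[8];
static int n_windows = 0;

static void register_window(UpdateFn fn) { windows[n_windows++] = fn; }

static void select_event(const char *event_name)
{
    for (int i = 0; i < n_windows; i++)   /* reflect selection everywhere */
        windows[i](event_name);
}

static void timeline_update(const char *e) { printf("timeline: highlight %s\n", e); }
static void map_update(const char *e)      { printf("map: highlight %s\n", e); }

int main(void)
{
    register_window(timeline_update);
    register_window(map_update);
    select_event("UMN MegaScout");
    return 0;
}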
5.5 Interactions with Time and Location
Information
In ResearchExplorer, both timeline and map provide zoom-in/zoom
-out functions. This makes it possible for users to look at
how a research event evolves in detail. The timeline has year as
the highest level of temporal representation. So it's likely that two
subevents of a research event are overlapped when they are shown
in year level. With zoom-in functionality, users can zoom into
finer level to see the temporal relationship between these two
subevents. The location window in ResearchExplorer has been
implemented using an open source JavaBeansTM based package
called OpenMapTM [2]. Research events are presented as dots on
the map. Due to the size limitation, it is hard for users to
differentiate events when they are close to each other. As in the
timeline, users can zoom into that area and see the events at
finer resolution. Further, panning of the entire map is also
supported.
EXPERIMENTS
In this section, we conduct some experiments as case studies.
These case studies will show how users can look into details of
research events and observe relationships between research
events.
6.1 Experiment I: Exploring Information
Exploring information with context is one of the important features of
ResearchExplorer. With this function, users can refine retrieved
results to check into different aspects of a situation. In this
experiment, we are interested in how users can refine results as
they explore information. Assume following information is of
interest:
Show all research events on AI during the time period from
1989 through 2004.
Out of the results above, show the part done in CA.
For each person, show all research events he/she participated
in.
To find answers to the first query, users can select the time
interval from 1989 to 2004 and then choose the AI category only.
Figure 7(a) shows all the results. Note that the results consist of all
research events on AI in the database, not all AI research in the world. As
shown in the figure, the timeline window shows the temporal
information and temporal relationships between research events,
the map window shows the distribution of locations, the category window
shows all the categories these research events belong to,
and the participant window shows all the people involved in these
research events. The details window shows all multimedia data
about the research events. In this window, a research event named
"UMN MegaScout" is placed at the top after the rectangle
representing this event is clicked in the timeline window. If we
click the image thumbnails, the original images will be shown.
Figure 7(b) shows the three images and four videos. When we
look at these images and video frames, we can gain a better
understanding of this research. In other words, direct
observation of the multimedia data of a research event helps users
to gain insights about the research. In order to answer the second
query, we specify a region on the map which encloses all dots in
CA. The research events are then highlighted in the timeline window,
as shown in Figure 7(c). To check the research events a person
participated in, we only need to move the mouse cursor over
the person's name. Similarly, all research events he/she
participated in will be highlighted.
6.2 Experiment II: Comparisons with Other
Systems
In this section, we compare ResearchExplorer with other systems
in terms of functionalities. Without loss of generality, we choose
Google [9], CiteSeer [7], and ACM Digital Library [1] as
examples. First, we compare the presentation methods. Figure 8
shows the screen shots of these systems after a query of "artificial
intelligence" is issued. Compared with ResearchExplorer, these
systems are unable to show how AI research evolves. Users thus
can not get a whole picture of AI research, which otherwise is
important to understand this area and conduct research in AI.
Another comparison is done on query and explore functions. The
results are shown in Table 1.
RELATED WORK
There are many systems which can search for scientific
information. These systems can be classified into two categories.
The first are bibliographical systems developed especially for
scientific information searching. ACM Digital Library, IEEE
Xplore [10], and INSPEC [11] are good examples of this class.
These systems store information about publications, which are
from some pre-selected sources, in their repositories. They
organize data by using structural information of publications like
title, author, etc. CiteSeer is another well-known system of this
kind. Compared with the aforementioned, it collects publications
from the web and performs citation indexing in addition to full-text
indexing [8]. Another class is web search engines for general
information. The most well-known of this type is Google. Systems
of this class index the text contained in documents, allowing users
to find information using keyword search. Our work differs
significantly from these systems. These systems concentrate on
providing information, whereas our work is focused on
providing an interaction environment for insights into research.
Other related work comes from research on multimedia
experience. Boll and Westermann [4] presented the Medither
multimedia event space, a decentralized peer-to-peer
infrastructure that allows publishing, finding and being notified
about multimedia events of interest. Our focus is not to create a
multimedia event space but rather to develop an interaction
environment for users to experience multimedia events. The
Informedia group at CMU has also worked on multimedia
experience [20]. However, there are important differences: their
main goal was to capture and integrate personal multimedia
experiences, not to create an environment for experiencing
multimedia personally.
Figure 7. ResearchExplorer UI. (a) shows a screen shot of the UI. (b) shows the images and videos of a research event. (c) shows the
highlighted results when a spatial refinement is made.
Figure 8. Screen shots of other search systems.
In [12], Jain envisioned the essence of an experiential
environment. The main goal of an experiential environment is to
support insights. Following this work, there has been other work on
experiential environments [13][16]. However, these works contain
little discussion of the design of experiential environments. Our work
develops further some ideas in [12] and concretizes the design
framework of an interaction environment for insights.
CONCLUSION
We have described a novel system which helps users to gain
insights through exploring multimedia scientific data. Although a
framework for designing an interaction environment for insights has been
identified, the implementation is a first step towards a mature
system for insights. In future work, we will build on the methods
described here. Also, we will further investigate relationships
between research events and methodologies to present these
relationships. We believe some of the more interesting research
problems will be identified when new relationships between
research events are used to help users to gain insights.
ACKNOWLEDGMENTS
We would like to thank Punit Gupta and Rachel L. Knickmeyer for their
help on timeline and map components of ResearchExplorer.
REFERENCES
[1] ACM Digital Library, http://portal.acm.org/portal.cfm.
[2] BBN Technologies (1999), OpenMapTM Open Systems
Mapping Technoloy, http://openmap.bbn.com/.
[3] Berkeley DB XML,
http://www.sleepycat.com/products/xml.shtml.
[4] Boll, S., and Westermann, U. Medither -- an Event Space
for Context-Aware Multimedia Experiences. Proc. of the
2003 ACM SIGMM Workshop on Experiential Telepresence
(ETP '03), 21-30.
[5] Brin, S., and Page, L. The Anatomy of a Large-Scale
Hypertextual Web Search Engine. Proc. of 7th International
World Wide Web Conference (WWW '98), 107-117.
[6] Cho, J., Garcia-Molina, H., and Page, L. Efficient Crawling
through URL Ordering. Proc. of 7th WWW Conference
(1998), 161-172.
[7] CiteSeer, http://citeseer.ist.psu.edu/cis.
[8] Giles, C. L., Bollacker, K. D., and Lawrence, S. CiteSeer: An
Automatic Citation Indexing System. The Third ACM
Conference on Digital Libraries (1998), 89-98.
[9] Google,
http://www.google.com.
[10] IEEE Xplore, http://ieeexplore.ieee.org/Xplore/DynWel.jsp.
[11] INSPEC, http://www.iee.org/Publish/INSPEC/.
[12] Jain, R. Experiential Computing. Communications of the
ACM, 46, 7 (July 2003), 48-54.
[13] Jain, R., Kim, P., and Li, Z. Experiential Meeting System.
Proc. of the 2003 ACM SIGMM Workshop on Experiential
Telepresence (ETP '03), 1-12.
[14] Rowe, N. C. Marie-4: A High-Recall, Self-Improving Web
Crawler that Finds Images using Captions. IEEE Intelligent
Systems, 17, 4 (2002), 8-14.
[15] Shkapenyuk, V., and Suel, T. Design and Implementation of
a High-Performance Distributed Web Crawler. Proc. of the
Intl. Conf. on Data Engineering (ICDE '02).
[16] Singh, R., Knickmeyer, R. L., Gupta, P., and Jain, R.
Designing Experiential Environments for Management of
Personal Multimedia. ACM Multimedia 2004. To Appear.
[17] Singh, R., Li, Z., Kim, P., and Jain, R. Event-Based
Modeling and Processing of Digital Media. 1st ACM
SIGMOD Workshop on Computer Vision Meets Databases,
Paris, France, 2004.
[18] Teng, S-H., Lu, Q., and Eichstaedt, M. Collaborative Web
Crawling: Information Gathering/Processing over Internet.
Proc. of the 32nd Hawaii Intl. Conf. on System Sciences
(1999).
[19] Toyama, K., Logan, R., Roseway, A., and Anandan, P.
Geographic Location Tags on Digital Images. ACM
Multimedia (2003), 156-166.
[20] Wactlar, H. D., Christel, M. G., Hauptmann A. G., and Gong,
Y. Informedia Experience-on-Demand: Capturing,
Integrating and Communicating Experiences across People,
Time and Space. ACM Computing Surveys 31(1999).
[21] WordNet 2.0, http://www.cogsci.princeton.edu/cgi-bin/webwn.
Table 1. Comparisons of ResearchExplorer and other systems
Functions                            ResearchExplorer   Google         CiteSeer       ACM Digital Library
Show spatio-temporal relationships   Yes                No             No             Can only list results by date order.
Same query and presentation space?   Yes                No             No             No
Dynamic query                        Yes                No             No             No
Maintain query state                 Yes                No             No             No
Zoom-in/zoom-out                     Yes                No             No             No
Visualization techniques             Multiple           Listing only   Listing only   Listing only
14 | Event;Research Event;Multimedia Data;Spatio-Temporal Data;Exploration;Interaction Environment;Insight |
169 | Robustness Analysis of Cognitive Information Complexity Measure using Weyuker Properties | Cognitive information complexity measure is based on cognitive informatics, which helps in comprehending the software characteristics. For any complexity measure to be robust, Weyuker properties must be satisfied to qualify as good and comprehensive one. In this paper, an attempt has also been made to evaluate cognitive information complexity measure in terms of nine Weyuker properties, through examples. It has been found that all the nine properties have been satisfied by cognitive information complexity measure and hence establishes cognitive information complexity measure based on information contained in the software as a robust and well-structured one. | Introduction
Many well known software complexity measures have been
proposed such as McCabe's cyclomatic number [8], Halstead
programming effort[5], Oviedo's data flow complexity
measures[9], Basili's measure[3][4], Wang's cognitive complexity
measure[11] and others[7]. All the reported complexity measures
are supposed to cover the correctness, effectiveness and clarity of
software and also to provide good estimate of these parameters.
Out of the numerous proposed measures, selecting a particular
complexity measure is again a problem, as every measure has its
own advantages and disadvantages. There is an ongoing effort to
find such a comprehensive complexity measure, which addresses
most of the parameters of software. Weyuker[14] has suggested
nine properties, which are used to determine the effectiveness of
various software complexity measures. A good complexity
measure should satisfy most of the Weyuker's properties. A new
complexity measure based on weighted information count of a
software and cognitive weights has been developed by Kushwaha
and Misra [2]. In this paper an effort has been made to establish this
cognitive information complexity measure as a robust and
comprehensive one by evaluating it against the nine Weyuker
properties.
Cognitive Weights of a Software
Basic control structures [BCS] such as sequence, branch and
iteration [10][13] are the basic logic building blocks of any
software, and the cognitive weight (Wc) of a software [11] is the
extent of difficulty or relative time and effort for comprehending a
given software modeled by a number of BCS's. These cognitive
weights for BCS's measure the complexity of the logical structures of
the software. Either all the BCS's are in a linear layout or some
BCS's are embedded in others. For the former case, we sum the
weights of all the BCS's, and for the latter, the cognitive weights of
inner BCS's are multiplied with the weights of the external BCS's.
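A minimal sketch of this weighting rule is given below, assuming a simple tree representation of the BCS's; the representation itself is an illustrative assumption and not part of the definition in [11].
#include <stdio.h>

/* Sketch of the cognitive-weight rule described above: weights of BCS's in
   a linear layout are summed, while the weight of a nested BCS multiplies
   the weight of its enclosing BCS. */
typedef struct BCS {
    int weight;                 /* e.g. sequence = 1, branch = 2, loop = 3  */
    const struct BCS *inner;    /* BCS's embedded in this one (linear list) */
    int n_inner;
} BCS;

static int cognitive_weight(const BCS *list, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        int w = list[i].weight;
        if (list[i].n_inner > 0)            /* nested: multiply inner weight  */
            w *= cognitive_weight(list[i].inner, list[i].n_inner);
        total += w;                         /* linear layout: sum the weights */
    }
    return total;
}

int main(void)
{
    /* e.g. a sequence followed by a loop: Wc = 1 + 3 = 4, as for Fig. 1 */
    BCS prog[] = { {1, NULL, 0}, {3, NULL, 0} };
    printf("Wc = %d\n", cognitive_weight(prog, 2));
    return 0;
}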
Cognitive Information Complexity Measure (CICM)
Since software represents computational information and is a
mathematical entity, the amount of information contained in the
software is a function of the identifiers that hold the information
and the operators that perform the operations on the information
i.e.
Information = f (Identifiers, Operators)
Identifiers are variable names, defined constants and other labels
in a software. Therefore information contained in one line of code
is the number of all operators and operands in that line of code.
Thus the information contained in the k-th line of code is:
Ik = (Identifiers + Operators)k = (IDk + OPk) IU
where IDk = total number of identifiers in the k-th LOC of the software,
OPk = total number of operators in the k-th LOC of the software, and
IU is the Information Unit, representing that any identifier
or operator carries at least one unit of information.
The total information contained in a software (ICS) is the sum of the
information contained in each line of code, i.e.
ICS = Σ Ik (k = 1, ..., LOCS)
where Ik = information contained in the k-th line of code, and
LOCS = total lines of code in the software.
Thus, it is the information contained in the identifiers and the
necessary operations carried out by the operators in achieving the
desired goal of the software, which makes software difficult to
understand.
Once we have established that software can be comprehended as
information defined in information units (IU's) [2], the weighted
information count is defined as follows.
The Weighted Information Count of a line of code (WICL) of a
software is a function of identifiers, operators and LOC and is
defined as:
WICLk = ICSk / [LOCS - k]
where WICLk = Weighted Information Count for the k-th line, and
ICSk = information contained in the software for the k-th line.
The Weighted Information Count of the Software (WICS) is
defined as:
WICS = Σ WICLk (k = 1, ..., LOCS)
In order to be a complete and robust measure, the measure of
complexity should also consider the internal control structure of
the software. These basic control structures have also been
considered as the Newton's law in software engineering [10, 11].
These are a set of fundamental and essential flow control
mechanisms that are used for building the logical architectures of
software.
Using the above definitions, the Cognitive Information Complexity
Measure (CICM) is defined as the product of the weighted
information count of the software (WICS) and the cognitive weight
(Wc) of the BCS's in the software, i.e.
CICM = WICS * Wc
This complexity measure encompasses all the major parameters
that have a bearing on the difficulty in comprehending software or
the cognitive complexity of the software. It clearly establishes a
relationship between difficulty in understanding software and its
cognitive complexity. It introduces a method to measure the
amount of information contained in the software, thus enabling us
to calculate the coding efficiency (EI) as
EI = ICS / LOCS [2].
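A minimal sketch of the whole computation from per-line identifier and operator counts is given below; the handling of the last line, where no statements remain, is an assumption made for the example, since the paper's worked examples only involve earlier lines.
#include <stdio.h>

/* Sketch of the CICM computation, following the formulas above:
   Ik = IDk + OPk, ICS = sum of Ik, WICLk = Ik / (LOCS - k),
   WICS = sum of WICLk, CICM = WICS * Wc, and EI = ICS / LOCS.
   Lines with LOCS - k = 0 are skipped here (an assumption). */
static double cicm(const int id[], const int op[], int locs, int wc,
                   double *ics_out, double *ei_out)
{
    double ics = 0.0, wics = 0.0;
    for (int k = 1; k <= locs; k++) {
        int ik = id[k - 1] + op[k - 1];          /* information in line k */
        ics += ik;
        if (locs - k > 0)                        /* statements after line k */
            wics += (double)ik / (locs - k);
    }
    *ics_out = ics;
    *ei_out  = ics / locs;                       /* coding efficiency EI   */
    return wics * wc;                            /* CICM = WICS * Wc       */
}

int main(void)
{
    /* hypothetical counts for a 5-line fragment, not taken from Fig. 1 */
    int id[] = {2, 1, 3, 2, 1}, op[] = {1, 0, 2, 1, 0};
    double ics, ei;
    double c = cicm(id, op, 5, 4, &ics, &ei);
    printf("ICS=%.1f  EI=%.2f  CICM=%.2f\n", ics, ei, c);
    return 0;
}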
Evaluation of Cognitive Information Complexity Measure
Weyuker [14] proposed nine properties to evaluate any
software complexity measure. These properties also expose the
weakness of a measure in a concrete way. With the help of these
properties one can determine the most suitable measure among the
different available complexity measures. In the following
paragraphs, the cognitive information complexity measure has
been evaluated against the nine Weyuker properties for
establishing itself as a comprehensive measure.
Property 1: (∃P)(∃Q)(|P| ≠ |Q|), where P and Q are program bodies.
This property states that a measure should not rank all programs as
equally complex. Now consider the following two examples given
in Fig. 1 and Fig. 2. For the program given in Fig. 1 in Appendix
I, there are two control structures: a sequence and an iteration.
Thus the cognitive weight of these two BCS's is 1 + 3 = 4.
Weighted information count for the above program is as under:
WICS = 3/6 + 1/4 + 6/3+ 4/2 = 4.75
Hence the Cognitive Information Complexity Measure (CICM) is:
CICM = WICS * Wc = 4.75 * 4 = 19.0
For the program given in Fig. 2 in Appendix I there is only one
sequential structure and hence the cognitive weight Wc is 1. WICS
for the above program is 2.27. Hence CICM for the above program
is 2.27 * 1 = 2.27.
From the complexity measure of the above two programs, it can be
seen that the CICM is different for the two programs and hence
satisfies this property.
Property 2: Let c be a non-negative number. Then there are only
finitely many programs of complexity c.
The calculation of WICS depends on the number of identifiers and
operators in a given program statement as well as on the number of
statements remaining after that very statement in a given program. Also,
all programming languages consist of only a finite number of
BCS's. Therefore CICM cannot rank the complexity of infinitely many programs
as c. Hence CICM holds for this property.
Property 3: There are distinct programs P and Q such that |P| = |Q|.
For the program given in Fig. 3 in Appendix I, the CICM for the
program is 19, which is same as that of program in Fig. 1.
Therefore this property holds for CICM.
Property 4: (∃P)(∃Q)(P ≡ Q &amp; |P| ≠ |Q|)
Referring to the program illustrated in Fig.1, we have replaced the
loop by the formula "sum = (b+1)*b/2" and have illustrated
the same in Fig.2. Both programs are used to calculate the sum
of the first n integers. The CICM for the two programs is different,
thus establishing this property for CICM.
Property 5: (∀P)(∀Q)(|P| ≤ |P;Q| and |Q| ≤ |P;Q|).
Consider the program body given in Fig.4 in Appendix I. The
program body for finding out the factorial of a number consists of
one sequential and one branch BCS. Therefore Wc = 3. For the
program body for finding out whether the number is prime, there are one
sequential, one iteration and two branch BCS's. Therefore Wc = 1
+ 2*3*2 = 13. For the main program body, which checks for a prime
and finds the factorial of the number, there are one sequential, two
call and one branch BCS's. Therefore Wc = 1+5+15+2 = 23.
WICS for the program is 5.1. Therefore the Cognitive Information
Complexity Measure for the above program = 5.1 * 23 = 117.3.
Now consider the program given in Fig.5 in Appendix I to check
for a prime. There is one sequential, one iteration and three branch
BCS's. Therefore Wc = 1 + 2*3*2 + 2 = 15. WICS = 1.85. So
CICM = 1.85 * 15 = 27.79.
For the program given in Fig.6 in Appendix I, there are one
sequential, one iteration and one branch BCS's.
Wc for this program is 7 and WICS is 5.11. Hence CICM =
WICS * Wc = 5.11 * 7 = 35.77.
It is clear from the above example that if we take the two program
bodies, one for calculating the factorial and another for checking for a
prime, their CICM values are 27.79 and 35.77, both less than 117.3.
So property 5 also holds for CICM.
Property 6(a): (∃P)(∃Q)(∃R)(|P| = |Q|) &amp; (|P;R| ≠ |Q;R|)
Let P be the program illustrated in Fig.1 and Q the program
illustrated in Fig.3. The CICM of both programs is 19. Let R
be the program illustrated in Fig.6. Appending R to P we have the
program illustrated in Fig.7 in Appendix I.
The cognitive weight for the above program is 9 and WICS is 8.3.
Therefore CICM = 8.3 * 9 = 74.7.
Similarly, appending R to Q we have Wc = 9 and WICS = 8.925.
Therefore CICM = 8.925 * 9 = 80.325, and 74.7 ≠ 80.325. This
proves that Property 6(a) holds for CICM.
Property 6(b): (∃P)(∃Q)(∃R)(|P| = |Q|) &amp; (|R;P| ≠ |R;Q|)
To illustrate the above property, let us arbitrarily add three
program statements at the beginning of the program given in Fig.1;
we then have the program given in Fig.8 in Appendix I. There is only
one sequential and one iteration BCS. Hence the cognitive weight is
1 + 3 = 4 and WICS = 5.58. So CICM = 5.58 * 4 =
22.32.
Similarly, adding the same three statements to the program in Fig.3,
we again have a cognitive weight of 4 and WICS = 5.29. Therefore
CICM = 21.16 ≠ 22.32. Hence this property also holds for CICM.
Property 7: There are program bodies P and Q such that Q is
formed by permuting the order of the statements of P and (|P| ≠ |Q|).
Since WICS depends on the number of operators and operands
in a given program statement and on the number of statements
remaining after this very program statement, permuting the
order of statements in any program will change the value of WICS.
Also, the cognitive weights of BCS's depend on the sequence of the
statements [1]. Hence CICM will be different for the two programs.
Thus CICM holds for this property also.
Property 8: If P is a renaming of Q, then |P| = |Q|.
CICM is a numeric measure, and naming or renaming of any
program has no impact on CICM. Hence CICM holds for this
property also.
Property 9: (∃P)(∃Q)(|P| + |Q| &lt; |P;Q|)
OR
(∃P)(∃Q)(∃R)(|P| + |Q| + |R| &lt; |P;Q;R|)
For the program illustrated in Fig.4, if we separate the main
program body P by segregating Q (prime check) and R (factorial),
we have the program illustrated in Fig.9 as shown in Appendix I.
The above program has one sequential and one branch BCS. Thus the
cognitive weight is 3 and WICS is 1.475. Therefore CICM =
4.425. Hence 4.425 + 27.79 + 35.77 &lt; 117.3. This proves that
CICM also holds for this property.
Comparative Study of Cognitive Information Complexity Measure and Other Measures in Terms of Weyuker Properties
In this section cognitive information complexity measure has been
compared with other complexity measures in terms of all nine
Weyuker's properties.
P.N. - Property Number, S.C. - Statement Count, C.N. -
Cyclomatic Number, E.M. - Effort Measure, D.C. - Dataflow
Complexity, C.C.M. - Cognitive Complexity Measure, CICM -
Cognitive Information Complexity Measure, Y - Yes, N - No
P.N.   S.C.   C.N.   E.M.   D.C.   C.C.M.   CICM
 1      Y      Y      Y      Y      Y        Y
 2      Y      N      Y      N      Y        Y
 3      Y      Y      Y      Y      Y        Y
 4      Y      Y      Y      Y      Y        Y
 5      Y      Y      N      N      Y        Y
 6      N      N      Y      Y      N        Y
 7      N      N      N      Y      Y        Y
 8      Y      Y      Y      Y      Y        Y
 9      N      N      Y      Y      Y        Y
Table 1: Comparison of complexity measures with Weyuker
properties.
It may be observed from Table 1 that the complexity of a program
under the effort measure, the data flow measure and the Cognitive
Information Complexity Measure depends directly on the
placement of statements, and therefore all these measures hold for
property 6 also. All these complexity measures intend to rank
programs differently.
Conclusion
Software complexity measures serve both as analyzers and
predictors in quantitative software engineering. Software quality is
defined in terms of completeness, correctness, consistency, absence of
misinterpretation and ambiguity, and feasible verifiability in both
specification and implementation. For a good complexity measure
it is very necessary that the particular complexity measure not only
satisfies the above-mentioned properties of software quality but also
satisfies the nine Weyuker properties. The software complexity in
terms of the cognitive information complexity measure has thus been
established as a well-structured complexity measure.
References
[1] Misra,S and Misra,A.K.(2005): Evaluating Cognitive
Complexity measure with Weyuker Properties, Proceeding of the
3rd IEEE International Conference on Cognitive
Informatics (ICCI'04).
[2] Kushwaha,D.S and Misra,A.K.(2005): A Modified Cognitive
Information Complexity Measure of Software, Proceeding of the
7th International Conference on Cognitive Systems (ICCS'05)
(accepted for presentation).
[3] Basili,V.R. and Phillips,T.Y.(1983): Metric analysis and data
validation across FORTRAN projects. IEEE Trans. Software Eng.,
SE-9(6):652-663, 1983.
[4] Basili,V.R.(1980): Qualitative software complexity model: A
summary in tutorial on models and methods for software
management and engineering. IEEE Computer Society Press, Los
Alamitos, CA, 1980.
[5] Halstead,M.(1977): Elements of software science, Elsevier
North Holland, New York, 1977.
[6] Klemola, T. and Rilling, J.(2003): A Cognitive Complexity
Metric Based on Category Learning, Proceeding of the 2nd IEEE
International Conference on Cognitive Informatics (ICCI'03).
[7] Kearney, J.K., Sedlmeyer, R. L., Thompson, W.B., Gray, M.
A. and Adler, M. A.(1986): Software complexity measurement.
ACM Press, New York, 28:1044-1050, 1986.
[8] McCabe, T.A.(1976): A complexity measure. IEEE Trans.
Software Eng., SE-2(6):308-320, 1976.
[9] Oviedo, E.(1980): Control flow, data flow and program complexity.
In Proc. IEEE COMPSAC, Chicago, IL, pages 146-152, November
1980.
[10] Wang,Y. and Shao,J.(2002): On cognitive informatics, keynote
lecture, Proceeding of the 1st IEEE International Conference on
Cognitive Informatics, pages 34-42, August 2002.
[11] Wang,Y. and Shao,J.(2004): Measurement of the Cognitive
Functional Complexity of Software, Proceeding of the 3rd IEEE
International Conference on Cognitive Informatics (ICCI'04).
[12] Wang,Y. and Shao,J.(2003): On cognitive informatics,
Proceeding of the 2nd IEEE International Conference on Cognitive
Informatics (ICCI'03), London, England, IEEE CS Press, pages 67-71,
August 2003.
[13] Wang, Y.(2002): The real-time process algebra (RTPA). Annals
of Software Engineering, an international journal, 14:235-247,
2002.
[14] Weyuker, E.(1988): Evaluating software complexity measures.
IEEE Transactions on Software Engineering, 14(9): 1357-1365,
September 1988.
Appendix I
/*Calculate the sum of first n integer*/
main() {
int i, n, sum=0;
printf("enter the number"); //BCS1
scanf("%d" , &n);
for (i=1;i<=n;i++) //BCS2
sum=sum+i;
printf("the sum is %d" ,sumssss);
getch();}
Fig. 1 : Source code of the sum of first n integers.
main()
{
int b;
int sum = 0;
printf("Enter the Number");
scanf("%d", &b);
sum = (b+1)*b/2;
printf("The sum is %d",sum);
getch();
}
Fig. 2 : Source code to calculate sum of first n integers.
# define N 10
main( )
{
int count;
float sum, average, number;
sum = count =0;
while (count < N )
{
scanf (" %f",& number);
sum = sum+ number;
count = count+1;
}
average = sum / N;
printf ("Average =%f",average);
}
Fig. 3 : Source code to calculate the average of a set of N
numbers.
#include< stdio.h >
#include< stdlib.h >
int main() {
long fact(int n);
int isprime(int n);
int n;
long int temp;
clrscr();
printf("\n input the number"); //BCS11
scanf("%d",&n);
temp=fact(n); //BCS12
int flag1=isprime(n); //BCS13
if (flag1==1) //BCS14
{printf("\n is prime");}
else
{printf("\n is not prime");}
printf("\n factorial(n)=%d",temp);
getch(); }
long fact(int n) {
long int facto=1; //BCS21
if (n==0) //BCS22
facto=1; else
facto=n*fact(n-1);
return(facto); }
int isprime(int n)
{ int flag; //BCS31
if (n==2)
flag=1; //BCS32
else
for (int i=2;i<n;i++) //BCS33
{ if (n%i==0) //BCS34
{ flag=0;
break; }
else {
flag=1 ;}}
return (flag);}
Fig. 4: Source code to check prime number and to calculate
factorial of the number
#include< stdio.h >
#include< stdlib.h >
#include< conio.h >
int main() { //BCS1
int flag = 1,n;
clrscr();
printf("\ n enter the number");
scanf("%d",&n);
if (n==2)
flag=1; //BCS21
else
{for (int i=2;i<n;i++) //BCS22
if (n%i==0) //BCS23
{ flag=0;
break;}
else{
flag=1;
continue;} }
if(flag) //BCS3
printf("the number is prime");
else
printf("the number is not prime");
getch();}
Fig.5 : Source code for checking prime number
#include< stdio.h >
#include< stdlib.h >
#include< conio.h >
int main () {
long int fact=1;
int n;
clrscr();
printf("\ input the number"); //BCS1
scanf("%d",&n);
if (n==0) //BCS21
fact=1; else
for(int i=n;i>1;i--) //BCS22
fact=fact*i;
printf("\n factorial(n)=%1d",fact);
getch();}
Fig.6 : Source code for calculating factorial of a number
int main() {
long fact(int n);
int i, n, sum=0; long int temp;
printf("enter the number");
scanf("%d" , &n);
temp = fact(n);
for (i=1;i<=n;i++)
sum=sum+i;
printf("the sum is %d" ,sum);
getch(); }
long fact(int n){
long int facto = 1;
if (n == 0)
facto = 1; else
facto = n*fact(n-1);
return(facto);}
Fig.7: Source code of sum of first n integer and factorial of n.
main() {
int a,b,result;
result = a/b;
printf("the result is %d",result);
int i, n, sum=0;
printf("enter the number");
scanf("%d" , &n);
for (i=1;i<=n;i++)
sum=sum+i;
printf("the sum is %d" ,sum);
getch();}
Fig. 8 : Source code of division and the sum of first
n integers.
int main(){
int n;
long int temp;
clrscr();
printf("\n input the number");
scanf("%d",&n);
temp = fact(n);
int flag1 = isprime(n);
if (flag1 == 1)
{printf("\n is prime");}
else
{printf("\n is not prime");}
printf("\n factorial(n) = %d",temp);
getch();}
Fig.9 : Source code of main program body of program in Fig.4
ACM SIGSOFT Software Engineering Notes Page 6 January 2006 Volume 31 Number 1 | cognitive weight;cognitive information complexity measure;basic control structures;cognitive information complexity unit;Weighted information count |
17 | A Pseudo Random Coordinated Scheduling Algorithm for Bluetooth Scatternets | The emergence of Bluetooth as a default radio interface allows handheld devices to be rapidly interconnected into ad hoc networks. Bluetooth allows large numbers of piconets to form a scatternet using designated nodes that participate in multiple piconets. A unit that participates in multiple piconets can serve as a bridge and forwards traffic between neighbouring piconets. Since a Bluetooth unit can transmit or receive in only one piconet at a time, a bridging unit has to share its time among the different piconets. To schedule communication with bridging nodes one must take into account their availability in the different piconets, which represents a difficult , scatternet wide coordination problem and can be an important performance bottleneck in building scatternets. In this paper we propose the Pseudo-Random Coordinated Scatternet Scheduling (PCSS) algorithm to perform the scheduling of both intra and inter-piconet communication. In this algorithm Bluetooth nodes assign meeting points with their peers such that the sequence of meeting points follows a pseudo random process that is different for each pair of nodes. The uniqueness of the pseudo random sequence guarantees that the meeting points with different peers of the node will collide only occasionally. This removes the need for explicit information exchange between peer devices, which is a major advantage of the algorithm. The lack of explicit signaling between Bluetooth nodes makes it easy to deploy the PCSS algorithm in Bluetooth devices, while conformance to the current Bluetooth specification is also maintained. To assess the performance of the algorithm we define two reference case schedulers and perform simulations in a number of scenarios where we compare the performance of PCSS to the performance of the reference schedulers. | INTRODUCTION
Short range radio technologies enable users to rapidly interconnect
handheld electronic devices such as cellular phones, palm devices
or notebook computers. The emergence of Bluetooth [1] as default
radio interface in these devices provides an opportunity to turn
them from stand-alone tools into networked equipment. Building
Bluetooth ad hoc networks also represents, however, a number of
new challenges, partly stemming from the fact that Bluetooth was
originally developed for single hop wireless connections. In this
paper we study the scheduling problems of inter-piconet communication
and propose a lightweight scheduling algorithm that Bluetooth
nodes can employ to perform the scheduling of both intra and
inter-piconet communication.
Bluetooth is a short range radio technology operating in the unlicensed
ISM (Industrial-Scientific-Medical) band using a frequency
hopping scheme. Bluetooth (BT) units are organized into piconets.
There is one Bluetooth device in each piconet that acts as the master
, which can have any number of slaves out of which up to seven
can be active simultaneously. The communication within a piconet
is organized by the master which polls each slave according to some
polling scheme. A slave is only allowed to transmit in a slave-to
-master slot if it has been polled by the master in the previous
master-to-slave slot. In Section 3 we present a brief overview of
the Bluetooth technology.
A Bluetooth unit can participate in more than one piconet at any
time but it can be a master in only one piconet. A unit that participates
in multiple piconets can serve as a bridge thus allowing
the piconets to form a larger network. We define bridging degree
as the number of piconets a bridging node is member of. A set
of piconets that are all interconnected by such bridging units is referred
to as a scatternet network (Figure 1). Since a Bluetooth unit
can transmit or receive in only one piconet at a time, bridging units
must switch between piconets on a time division basis. Due to the
fact that different piconets are not synchronized in time a bridging
unit necessarily loses some time while switching from one piconet
to the other. Furthermore, the temporal unavailability of bridging
nodes in the different piconets makes it difficult to coordinate the
communication with them, which impacts throughput and can be
an important performance constraint in building scatternets.
There are two important phenomena that can reduce the efficiency
of the polling based communication in Bluetooth scatternets:
slaves that have no data to transmit may be unnecessarily
polled, while other slaves with data to transmit may have to
wait to be polled; and
at the time of an expected poll one of the nodes of a master-slave
node pair may not be present in the piconet (the slave
that is being polled is not listening or the master that is expected
to poll is not polling).
The first problem applies to polling based schemes in general, while
the second one is specific to the Bluetooth environment. In order
to improve the efficiency of inter-piconet communication the
scheduling algorithm has to coordinate the presence of bridging
nodes in the different piconets such that the effect of the second
phenomenon be minimized.
However, the scheduling of inter-piconet communication expands
to a scatternet wide coordination problem. Each node that has more
than one Bluetooth link has to schedule the order in which it communicates
with its respective neighbours. A node with multiple
Bluetooth links can be either a piconet master or a bridging node or
both. The scheduling order of two nodes will mutually depend on
each other if they have a direct Bluetooth link in which case they
have to schedule the communication on their common link for the
same time slots. This necessitates some coordination between the
respective schedulers. For instance in Figure 1 the scheduling order
of node A and the scheduling order of its bridging neighbours, B,
C, D and E mutually depend on each other, while nodes D and E
further affect nodes F, G and H as well. Furthermore, the possible
loops in a scatternet (e.g., A-E-G-H-F-D) makes it even more
complicated to resolve scheduling conflicts.
In the case of bursty traffic in the scatternet, the scheduling problem
is further complicated by the need to adjust the scheduling order in response
to dynamic variations of traffic intensity. In a bursty traffic
environment it is desirable that a node spends most of its time on
those links that have a backlogged burst of data.
One way to address the coordination problem of inter-piconet
scheduling is to explicitly allocate, in advance, time slots for communication
in each pair of nodes. Such a hard coordination approach
eliminates ambiguity with regards to a node's presence in
piconets, but it implies a complex, scatternet wide coordination
problem and requires explicit signaling between nodes of a scatternet
. In the case of bursty traffic, hard coordination schemes
generate a significant computation and signaling overhead as the
communication slots have to be reallocated in response to changes
in traffic intensity and each time when a new connection is established
or released.
In this paper we propose the Pseudo-Random Coordinated Scatternet
Scheduling algorithm which falls in the category of soft coordination
schemes. In soft coordination schemes nodes decide their
presence in piconets based on local information. By nature, soft coordination
schemes cannot guarantee conflict-free participation of
bridging nodes in the different piconets, however, they have a significantly
reduced complexity. In the PCSS algorithm coordination
is achieved by implicit rules in the communication without the need
of exchanging explicit control information. The low complexity of
the algorithm and its conformance to the current Bluetooth specification
allow easy implementation and deployment.
The first key component of the algorithm is the notion of checkpoints
which are defined in relation to each pair of nodes that
are connected by a Bluetooth link and which represent predictable
points in time when packet transmission can be initiated on the particular
link. In other words, checkpoints serve as regular meeting
points for neighboring nodes when they can exchange packets. In
order to avoid systematic collision of checkpoints on different links
of a node the position of checkpoints follows a pseudo random sequence
that is specific to the particular link the checkpoints belong
to.
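As a rough sketch of this idea (and not the hop-selection-like procedure the PCSS scheme actually uses, which is introduced below), a link-specific pseudo random checkpoint position could be derived as follows; the hash function and its inputs are illustrative assumptions.
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch: pick one checkpoint frame inside each checking interval
   by mixing the interval index with a per-link seed.  Because the seed
   differs for every master-slave pair, the checkpoint sequences on
   different links of a node only collide occasionally. */
static uint32_t mix(uint32_t x)
{
    x ^= x >> 16; x *= 0x7feb352dU;
    x ^= x >> 15; x *= 0x846ca68bU;
    x ^= x >> 16;
    return x;
}

/* Frame offset of the checkpoint within interval 'interval_idx' of length
   'interval_len' frames, for the link identified by 'link_seed'. */
static uint32_t checkpoint_offset(uint32_t link_seed, uint32_t interval_idx,
                                  uint32_t interval_len)
{
    return mix(link_seed ^ (interval_idx * 0x9e3779b9U)) % interval_len;
}

int main(void)
{
    uint32_t link_ab = 0x12345678U, link_ac = 0xcafebabeU;
    for (unsigned i = 0; i < 4; i++)
        printf("interval %u: A-B checks at frame %u, A-C at frame %u\n", i,
               (unsigned)checkpoint_offset(link_ab, i, 32),
               (unsigned)checkpoint_offset(link_ac, i, 32));
    return 0;
}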
The second key component of the algorithm is the dynamic adjustment
of checking intensity, which is necessary in order to effectively
support bursty data traffic. Bandwidth can be allocated and
deallocated to a particular link by increasing and decreasing checkpoint
intensity, respectively.
To assess the performance of the algorithm we define two reference
schedulers and relate the performance of the PCSS scheme to these
reference algorithms in a number of simulation scenarios.
The remainder of the paper is structured as follows. In Section 2 we
give an overview of related work focusing on Bluetooth scheduling
related studies available in the literature. Section 3 gives a brief
overview of the Bluetooth technology. In Section 4 and 5 we introduce
the proposed algorithm. In Section 6 we define the reference
schedulers. Finally, in Section 7 we present simulation results.
RELATED WORK
A number of researchers have addressed the issue of scheduling in
Bluetooth. Most of these studies have been restricted, however, to
the single piconet environment, where the fundamental question is
the polling discipline used by the piconet master to poll its slaves.
These algorithms are often referred to as intra-piconet scheduling
schemes. In [7] the authors assume a simple round robin polling
scheme and investigate queueing delays in master and slave units
depending on the length of the Bluetooth packets used. In [5] Johansson
et al. analyze and compare the behavior of three different
polling algorithms. They conclude that the simple round robin
scheme may perform poorly in Bluetooth systems and they propose
a scheme called Fair Exhaustive Polling. The authors demonstrate
the strength of this scheme and argue in favor of using multi-slot
packets. Similar conclusions are drawn by Kalia et al. who argue
that the traditional round robin scheme may result in waste and un-fairness
[8]. The authors propose two new scheduling disciplines
that utilize information about the status of master and slave queues.
In [9, 10] the authors concentrate on scheduling policies designed
with the aim of low power consumption. A number of scheduling
policies are proposed which exploit either the park or sniff low
power modes of Bluetooth.
Although the above studies have revealed a number of important
performance aspects of scheduling in Bluetooth piconets, the algorithms
developed therein are not applicable for inter-piconet communication
. In [6] the authors have shown that constructing an optimal
link schedule that maximizes total throughput in a Bluetooth
scatternet is an NP hard problem even if scheduling is performed
by a central entity. The authors also propose a scheduling algorithm
referred to as Distributed Scatternet Scheduling Algorithm
(DSSA), which falls in the category of distributed, hard coordination
schemes. Although the DSSA algorithm provides a solution
for scheduling communication in a scatternet, some of its idealized
properties (e.g., nodes are aware of the traffic requirements of their
neighbours) and its relatively high complexity make it difficult to
apply it in a real life environment.
There is an ongoing work in the Personal Area Networking (PAN)
working group of the Bluetooth Special Interest Group (SIG) [2] to
define an appropriate scheduling algorithm for Bluetooth scatternets
BLUETOOTH BACKGROUND
Bluetooth is a short range radio technology that uses a frequency
hopping scheme, where hopping is performed on 79 RF channels
spaced 1 MHz apart. Communication in Bluetooth is always between
master and slave nodes. Being a master or a slave is only
a logical state: any Bluetooth unit can be a master or a slave.
The Bluetooth system provides full-duplex transmission based on
a slotted Time Division Duplex (TDD) scheme, where each slot is
0.625 ms long. Master-to-slave transmission always starts in an
even-numbered time slot, while slave-to-master transmission always
starts in an odd-numbered time slot. A pair of master-to-slave
and slave-to-master slots is often referred to as a frame. The communication
within a piconet is organized by the master which polls
each slave according to some polling scheme. A slave is only allowed
to transmit in a slave-to-master slot if it has been polled by
the master in the previous master-to-slave slot. The master may
or may not include data in the packet used to poll a slave. Bluetooth
packets can carry synchronous data (e.g., real-time traffic) on
Synchronous Connection Oriented (SCO) links or asynchronous
data (e.g., elastic data traffic, which is the case in our study) on
Asynchronous Connectionless (ACL) links. Bluetooth packets on
an ACL link can be 1, 3, or 5 slots long and they can carry different
amounts of user data depending on whether the payload is FEC
coded or not. Accordingly, the Bluetooth packet types DH1, DH3
and DH5 denote 1, 3 and 5 slot packets, respectively, where the
payload is not FEC encoded, while in case of packet types DM1,
DM3 and DM5 the payload is protected with FEC encoding. There
are two other types of packets, the POLL and NULL packets that do
not carry user data. The POLL packet is used by the master when
it has no user data to send to the slave but it still wants to poll it. Similarly,
the NULL packet is used by the slave to respond to the master if it
has no user data. For further information regarding the Bluetooth
technology the reader is referred to [1, 3].
OVERVIEW OF THE PCSS ALGORITHM
Coordination in the PCSS algorithm is achieved by the unique
pseudo random sequence of checkpoints that is specific to each
master-slave node pair and by implicit information exchange between
peer devices. A checkpoint is a designated Bluetooth frame.
The activity of being present at a checkpoint is referred to as checking.
A master node actively checks its slave by sending a packet
to the slave at the corresponding checkpoint and waiting for a response
from the slave. The slave node passively checks its master
by listening to the master at the checkpoint and sending a response
packet in case of being addressed.
The expected behaviour of nodes is that they show up at each
checkpoint on all of their links and check their peers for available
user data. The exchange of user data packets started at a checkpoint
can be continued in the slots following the checkpoint. A
node remains active on the current link as long as there is user data in
either the master-to-slave or slave-to-master direction, or until it
has to leave for the next checkpoint on one of its other links. In
the PCSS scheme we exploit the concept of randomness in assigning
the position of checkpoints, which excludes the possibility that
checkpoints on different links of a node will collide systematically,
thus giving the node an equal chance to visit all of its checkpoints.
The pseudo random procedure is similar to the one used to derive
the pseudo random frequency hopping sequence. In particular, the
PCSS scheme assigns the positions of checkpoints on a given link
following a pseudo random sequence that is generated based on the
Bluetooth clock of the master and the MAC address of the slave.
This scheme guarantees that the same pseudo random sequence
will be generated by both nodes of a master-slave pair, while the sequences
belonging to different node pairs will be different. Figure 2 shows an
example of the pseudo random arrangement of checkpoints for a node
pair A and B. The length of the current base checking interval is
denoted by T^(i)_check and the current checking intensity is defined
accordingly as 1/T^(i)_check. There is one checkpoint within each base
checking interval and the position of the checkpoint within this window
changes from one time window to the other in a pseudo random manner.
Figure 2: Pseudo-random positioning of checkpoints (checkpoints of A toward B and of B toward A, one within each consecutive base checking interval of length T^(i)_check)
Since the pseudo random sequence is different from one link to another,
checkpoints on different links of a node will collide only occasionally.
In case of a collision the node can attend only one of the
colliding checkpoints, which implies that the corresponding neighbours
have to be prepared for a non-present peer. That is, the master
might not poll and the slave might not listen at a checkpoint.
We note that a collision occurs either if more than one
checkpoint is scheduled for the same time slot or if the checkpoints
are so close to each other that a packet transmission started at the
first checkpoint necessarily overlaps the second one. Furthermore,
if the colliding checkpoints belong to links in different piconets,
the time needed to perform the switch must also be taken into
account.
During communication it is possible to increase or
decrease the intensity of checkpoints depending on the amount of
user data to be transmitted and on the available capacity of the
node. According to the PCSS algorithm a node performs certain
traffic measurements at the checkpoints and increases or decreases
the current checking intensity based on these measurements. Since
nodes decide independently about the current checking intensity
without explicit coordination, two nodes on a given link may select
different base checking periods. In order to ensure that two nodes
with different checking intensities on the same link can still communicate
we require the pseudo random generation of checkpoints
to be such that the set of checkpoint positions at a lower checking
intensity is a subset of the checkpoint positions at any higher checking
intensity. In the Appendix we present a pseudo random
scheme for generating the position of checkpoints, which has
the desired properties.
OPERATION OF PCSS
In what follows, we describe the procedures of the PCSS algorithm.
We start with the initialization process, which ensures that two nodes
can start communication as soon as a new link has been established
or the connection has been reset. Next, we describe the rules that
define how nodes calculate their checkpoints, decide upon their
presence at checkpoints and exchange packets. Finally, we present
the way neighboring nodes can dynamically increase and decrease
the checkpoint intensity.
5.1
Initialization
In the PCSS algorithm there is no need for a separate initialization
procedure to start communication, since the pseudo random generation
of checkpoints is defined such that once a master-slave node
pair shares the same master's clock and slave's MAC address information,
it is guaranteed that the same pseudo random sequence will
be produced at each node. That is, it is guaranteed that two nodes
starting checkpoint generation at different time instants with different
checking intensities will be able to communicate. Each node selects
an appropriate initial checking intensity on its own, which may depend,
for example, on the free capacities of the
node or on the amount of data to transmit. Once the communication
is established the increase and decrease procedures will adjust the
possibly different initial checking intensities to a common value.
5.2
Communication
A pair of nodes can start exchanging user data packets at a checkpoint,
and the exchange can extend into the slots following the checkpoint.
The nodes remain active on the current link following a checkpoint
as long as there is user data to be transmitted or until one of them has to
leave in order to attend a checkpoint on one of its other links. After
a POLL/NULL packet pair has been exchanged, indicating that
there is no more user data left, the nodes switch off their transmitters/receivers
and remain idle until the next checkpoint comes on one
of their links. However, during the communication any of the nodes
can leave in order to attend a coming checkpoint on one of its other
links. After one of the nodes has left the remaining peer will realize
the absence of the node and will go idle until the time of its next
checkpoint. If the master has left earlier the slave will realize the
absence of the master at the next master-to-slave slot by not receiving
the expected poll. In the worst case the master has left before
receiving the last packet response from the slave, which can be a 5
slot packet in which case the slave wastes 5+1 slots before realizing
the absence of the master. Similarly, if the master does not get
a response from the slave it assumes that the slave has already left
the checkpoint and goes idle until its next checkpoint. Note that the
master may also waste 5+1 slots in the worst case before realizing
the absence of the slave.
A node stores the current length of the base checking interval and
the time of the next checkpoint for each of its Bluetooth links separately.
For its i-th link a node maintains the variable T^(i)_check to
store the length of the current base checking period in number of
frames, and the variable t^(i)_check, which stores the Bluetooth clock
of the master at the next checkpoint. After passing a checkpoint
the variable t^(i)_check is updated to the next checkpoint by running
the pseudo random generator (PseudoChkGen) with the current
value of the master's clock t^(i), the length of the base checking
period T^(i)_check and the MAC address of the slave A^(i)_slave as input
parameters:

    t^(i)_check = PseudoChkGen(T^(i)_check, A^(i)_slave, t^(i)).
The procedure PseudoChkGen is described in the Appendix.
There is a maximum and a minimum checking interval, T_max = 2^f_max
and T_min = 2^f_min, respectively. The length of the checking
period must be a power-of-two number of frames and it must take
a value from the interval [2^f_min, 2^f_max].
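To make this bookkeeping concrete, the following minimal sketch shows the per-link state a node could keep and the update performed after a checkpoint has been passed; the struct, field and function names are our own illustration, not code from the paper.

    #include <stdint.h>

    typedef struct {
        uint32_t T_check;   /* base checking period in frames, a power of two   */
        uint32_t t_check;   /* master's Bluetooth clock at the next checkpoint  */
        uint32_t A_slave;   /* MAC address (lower bits) of the slave on the link */
    } link_state_t;

    /* Declared here only; a possible rendering is sketched in the Appendix. */
    uint32_t PseudoChkGen(uint32_t T_check, uint32_t A_slave, uint32_t t_master);

    /* Called when the checkpoint stored in ls->t_check has been passed. */
    void advance_checkpoint(link_state_t *ls, uint32_t t_master)
    {
        ls->t_check = PseudoChkGen(ls->T_check, ls->A_slave, t_master);
    }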
5.3
Increasing and Decreasing Checking Intensity
The increase and decrease procedures are used to adjust the checking
intensity of a node according to the traffic intensity and to the
availability of the peer device. Each node decides independently
about the current checking intensity based on traffic measurements
at checkpoints.
Since the time spent by a node on a link is proportional to the ratio
of the number of checkpoints on that link and the number of checkpoints
on all links of the node, the bandwidth allocated to a link can
be controlled by the intensity of checkpoints on that link. This can
be shown by the following simple calculation.
Let us assume that the node has L links and assume
further that for the base checking periods on all links of the node
it holds that T_min <= T^(i)_check <= T_max, i = 1, ..., L. Then the
average number of checkpoints within an interval of length T_max is

    N = sum_{i=1..L} T_max / T^(i)_check,

and the average time between two consecutive checkpoints is

    t = T_max / N = 1 / ( sum_{i=1..L} 1/T^(i)_check ),

provided that the pseudo random generator produces a uniformly
distributed sequence of checkpoints. Then, the share of link j of
the total capacity of the node is

    r_j = (1/T^(j)_check) / ( sum_{i=1..L} 1/T^(i)_check ).
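The capacity share r_j can be computed directly from the base checking periods; a minimal sketch, where the array layout and function name are our assumptions:

    /* Returns link j's share of the node capacity:
       r_j = (1/T_check[j]) / sum_i (1/T_check[i]). */
    double link_share(const unsigned T_check[], int L, int j)
    {
        double sum = 0.0;
        for (int i = 0; i < L; i++)
            sum += 1.0 / T_check[i];
        return (1.0 / T_check[j]) / sum;
    }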
A node has to measure the utilization of checkpoints on each of
its links separately in order to provide input to the checking intensity
increase and decrease procedures. According to the algorithm
a given checkpoint is considered to be utilized if both nodes have
shown up at the checkpoint and at least one Bluetooth packet carrying
user data has been transmitted or received. If there has not been
a successful poll at the checkpoint due to the unavailability of any
of the nodes or if there has been only a POLL/NULL packet pair
exchange but no user data has been transmitted, the checkpoint is
considered to be unutilized. We note that due to packet losses the
utilization of a given checkpoint might be interpreted differently by
the nodes. However, this does not impact correct operation of the
algorithm.
To measure the utilization of checkpoints on the i-th link of the
node we employ the moving average method as follows. The utilization
of a checkpoint equals 1 if it has been utilized, otherwise
it equals 0. If the checkpoint has been utilized the measured utilization
ρ^(i) of the link is updated as

    ρ^(i) = q_uti * ρ^(i) + (1 - q_uti) * 1;

if the checkpoint has not been utilized it is updated as

    ρ^(i) = q_uti * ρ^(i) + (1 - q_uti) * 0,

where 0 <= q_uti < 1 is the time scale parameter of the moving
average method. A further parameter of the utilization measurement
is the minimum number of samples that have to be observed
before the measured utilization value is considered to be confident
and can be used as input to decide about increase and decrease of
checking intensity. This minimum number of samples is denoted
by N_sample,min.
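A minimal sketch of this per-checkpoint moving-average update; the function and variable names are ours:

    /* utilized is 1 if user data was exchanged at the checkpoint, 0 otherwise. */
    double update_checkpoint_utilization(double rho, double q_uti, int utilized)
    {
        return q_uti * rho + (1.0 - q_uti) * (utilized ? 1.0 : 0.0);
    }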
Finally, a node also has to measure its total utilization, which is
defined as the fraction of time slots where the node has been active
(transmitted or received) over the total number of time slots. To
measure the total utilization of a node we employ the moving average
method again. Each node measures its own utilization ρ^(node)
and updates the ρ^(node) variable after every N_uti,win slots as follows:

    ρ^(node) = q^(node)_uti * ρ^(node) + (1 - q^(node)_uti) * ρ^(win),

where ρ^(win) is the fraction of time slots in the past time window
of length N_uti,win in which the node has been active.
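A corresponding sketch of the node-level update performed once per window of N_uti,win slots (names are illustrative):

    /* active_slots is the number of slots in the last window of N_uti_win
       slots during which the node transmitted or received. */
    double update_node_utilization(double rho_node, double q_node_uti,
                                   unsigned active_slots, unsigned N_uti_win)
    {
        double rho_win = (double)active_slots / (double)N_uti_win;
        return q_node_uti * rho_node + (1.0 - q_node_uti) * rho_win;
    }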
If the utilization of checkpoints on link i falls below the lower
threshold ρ_lower, the current base checking period T^(i)_check will be
doubled. A low checkpoint utilization can arise either because
one or both of the nodes have not shown up at all of the checkpoints
or because there is not enough user data to be transmitted. In either
case the intensity of checkpoints has to be decreased. Whenever a
decrease or increase is performed on link i the measured utilization
ρ^(i) must be reset.
Since the parameter T^(i)_check is one of the inputs to the pseudo random
checkpoint generation process PseudoChkGen, the checkpoints
after the decrease will be generated according to the new
period. Furthermore, due to the special characteristic of the checkpoint
generation scheme the remaining checkpoints after the decrease
will be a subset of the original checkpoints, which guarantees
that the two nodes can sustain communication independent of
local changes in checking intensities.
An example of a checking intensity decrease for a node
pair A and B is shown in Figure 3. First, node A decreases its checking
intensity by doubling its current base checking period in response
to the measured low utilization. As a consequence node B
will find node A on average only at every second checkpoint and
its measured utilization will decrease rapidly. When the measured
utilization at node B falls below the threshold ρ_lower, B realizes
that its peer has a lower checking intensity and follows the decrease
by doubling its current base checking period. Although we
have not explicitly indicated it in the figure, it is assumed that
user data has been exchanged at each checkpoint where both nodes
were present.
Figure 3: Checking intensity decrease (node A doubles its base checking period when its measured utilization falls below ρ_lower; node B observes the resulting low utilization and doubles its base period as well)
Recall from the utilization measurement procedure that there is a
minimum number of checkpoints N_sample,min that has to be sampled
before the measured utilization is considered to be confident
and can be used to decide about a checking intensity decrease. The
parameter N_sample,min together with the parameter of the moving
average method q_uti determines the time scale over which the
utilization of checkpoints has to stay above the threshold ρ_lower,
otherwise the node decreases checking intensity. It might also be
reasonable to allow the parameter N_sample,min and the moving
average parameter q_uti to be changed after each decrease or
increase, taking into account, for example, the current checking intensity,
the available resources of the node or the amount of user
data to be transmitted. However, in the current implementation
we apply fixed parameter values.
After a checkpoint where user data has been exchanged (not only a
POLL/NULL packet pair) the checking intensity can be increased, provided
that the measured utilization of checkpoints exceeds the upper
threshold ρ_upper and the node has available capacity. Formally,
a checking intensity increase is performed on link i if the following
two conditions are satisfied:

    ρ^(i) > ρ_upper   and   ρ^(node) < ρ^(node)_upper,

where ρ^(node)_upper is the upper threshold of the total utilization of the
node. This last condition ensures that the intensity of checkpoints
will not increase without bound. The intensity of checkpoints is doubled
at each increase by dividing the current length of the base
checking period T^(i)_check by 2. For typical values of ρ_upper we recommend
0.8 <= ρ_upper <= 0.9, in which case the respective ρ_lower
value should be ρ_lower <= 0.4 in order to avoid oscillation of increases
and decreases.
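The two adjustment rules can be summarized in a small decision function. The period bounds, the convention of returning the new period, and leaving the reset of the measured utilization to the caller are assumptions of this sketch, not the paper's code:

    #include <stdbool.h>

    #define T_MIN_FRAMES 8      /* assumed T_min */
    #define T_MAX_FRAMES 256    /* assumed T_max */

    /* Returns the new base checking period in frames; the caller is assumed
       to reset the measured utilization whenever the period changes. */
    unsigned adjust_check_period(unsigned T_check, double rho_link, double rho_node,
                                 bool user_data_at_checkpoint,
                                 double rho_lower, double rho_upper,
                                 double rho_node_upper)
    {
        if (rho_link < rho_lower && T_check < T_MAX_FRAMES)
            return T_check * 2;          /* decrease checking intensity */

        if (user_data_at_checkpoint && rho_link > rho_upper &&
            rho_node < rho_node_upper && T_check > T_MIN_FRAMES)
            return T_check / 2;          /* increase checking intensity */

        return T_check;                  /* no change */
    }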
Figure 4 shows an example where nodes A and B communicate and,
after exchanging user data at the second checkpoint, both nodes
double the checking intensity. In the figure we have explicitly indicated
whether user data has been exchanged at a checkpoint
or not.
Figure 4: Checking intensity increase (after a checkpoint with user data exchange and measured utilization above ρ_upper, both node A and node B double their checking intensity)
REFERENCE ALGORITHMS
In this section we define the Ideal Coordinated Scatternet Scheduler
(ICSS) and the Uncoordinated Greedy Scatternet Scheduler
(UGSS) reference algorithms.
The ICSS algorithm represents
the "ideal" case where nodes exploit extra information when
scheduling packet transmissions that would not be available in a
realistic scenario. The UGSS algorithm represents the greedy case
where nodes continuously switch among their Bluetooth links in a
random order.
6.1
The ICSS Algorithm
The ICSS algorithm is a hypothetical, ideal scheduling algorithm
that we use as a reference case in the evaluation of the PCSS
scheme. In the ICSS algorithm a node has the following extra
information about its neighbours, which represents the idealized
property of the algorithm:
a node is aware of the already pre-scheduled transmissions
of its neighbours; and
a node is aware of the content of the transmission buffers of
its neighbours.
According to the ICSS algorithm each node maintains a scheduling
list, which contains the already pre-scheduled tasks of the node. A
task always corresponds to one packet pair exchange with a given
peer of the node. Knowing the scheduling list of the neighbours
allows the node to schedule communication with its neighbours
without overlapping their other communication, such that the capacity
of the nodes is utilized as much as possible. Furthermore
being aware of the content of the transmission buffers of neighbours
eliminates the inefficiencies of the polling based scheme,
since there will be no unnecessary polls and the system will be
work-conserving.
In the scheduling list of a node there is at most one packet pair
exchange scheduled in relation to each of its peers, provided that
there is a Bluetooth packet carrying user data either in the transmission
buffer of the node or in the transmission buffer of the peer
or in both. After completing a packet exchange on a given link the
two nodes schedule the next packet exchange, provided that there
is user data to be transmitted in at least one of the directions. If
there is user data in only one of the directions, a POLL or NULL
packet is assumed for the reverse direction depending on whether
it is the master-to-slave or slave-to-master direction, respectively.
The new task is fitted into the scheduling lists of the nodes using
a first fit strategy. According to this strategy the task is fitted into
the first time interval that is available in both of the scheduling lists
and that is long enough to accommodate the new task. Note that the
algorithm strives for maximal utilization of node capacity by trying
to fill in the unused gaps in the scheduling lists.
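One possible reading of the first-fit insertion is sketched below, using a busy/free slot bitmap over a finite horizon as a stand-in for the scheduling lists; the representation and the horizon length are our simplifications, not the paper's data structure:

    #include <stdbool.h>

    #define HORIZON 256   /* assumed scheduling horizon in slots */

    /* Places a task of task_len slots in the first interval, at or after 'now',
       that is free in both nodes' lists; returns the start slot or -1. */
    int icss_first_fit(bool busy_a[HORIZON], bool busy_b[HORIZON],
                       int now, int task_len)
    {
        for (int start = now; start + task_len <= HORIZON; start++) {
            int len = 0;
            while (len < task_len && !busy_a[start + len] && !busy_b[start + len])
                len++;
            if (len == task_len) {
                for (int k = 0; k < task_len; k++)   /* reserve in both lists */
                    busy_a[start + k] = busy_b[start + k] = true;
                return start;
            }
            start += len;   /* skip past the free prefix that was too short */
        }
        return -1;
    }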
If there is no more user data to be transmitted on a previously busy
link, the link goes idle, in which case no tasks corresponding to
the given link will be scheduled until there is user data again in at
least one of the directions.
An example of the scheduling lists of a node pair A and B is shown
in Figure 5. The tasks are labeled with the name of the corresponding
peer each task belongs to. Each node has as many
pre-scheduled tasks in its scheduling list as the number of its active
Bluetooth links.
Figure 5: Example of the scheduling lists of a node pair in case
of the ICSS algorithm (one pre-scheduled task per active peer; the next packet pair exchange for nodes A and B is fitted after the current time)
A link is considered to be active if there is a
user data packet in at least one of the directions. Node A has active
peers B and C, while node B has active peers A, D and E. After
nodes A and B have finished the transmission of a packet pair they
schedule the next task for the nearest time slots that are available
in both of their scheduling lists and where the number of consecutive free
time slots is greater than or equal to the length of the task.
6.2
The UGSS Algorithm
In the UGSS algorithm Bluetooth nodes do not attempt to coordinate
their meeting points, instead each node visits its neighbours
in a random order. Nodes switch continuously among their Bluetooth
links in a greedy manner. If the node has n links, it
chooses each of them with probability 1/n. The greedy nature
of the algorithm results in high power consumption of Bluetooth
devices.
If the node is the master on the visited link it polls the slave by
sending a packet on the given link. The type of Bluetooth packet
sent can be a 1, 3 or 5 slot packet carrying useful data or an empty
POLL packet depending on whether there is user data to be transmitted
or not. After the packet has been sent the master remains
active on the link in order to receive any response from the slave.
If the slave has not been active on the given link at the time when
the master has sent the packet it could not have received the packet
and consequently it will not send a response to the master. After
the master has received the response of the slave or if it has sensed
the link to be idle, indicating that no response from the slave can be
expected, it selects the next link to visit randomly.
A similar procedure is followed when the node is the slave on the
visited link. The slave tunes its receiver to the master and listens
for a packet transmission from the master in the current
master-to-slave slot. If the slave has not been addressed by the master
in that master-to-slave slot it immediately goes to the next
link. However, if the slave has been addressed it remains active on
the current link and receives the packet. After having received the
packet of the master the slave responds with its own packet in the
following slave-to-master slot. After the slave has sent its response
it selects the next link to visit randomly.
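A sketch of a single UGSS decision step is given below; the helper functions are placeholders for simulator actions rather than a real API:

    #include <stdlib.h>

    /* Placeholder hooks into the simulator / Baseband; purely illustrative. */
    int  is_master_on(int link);
    void send_poll_or_data(int link);        /* 1, 3 or 5 slot packet, or POLL  */
    void wait_for_slave_response(int link);  /* move on if the link stays idle  */
    int  listen_for_poll(int link);          /* addressed in this M->S slot?    */
    void receive_and_respond(int link);      /* receive packet, answer in S->M  */

    void ugss_step(int n_links)
    {
        int link = rand() % n_links;         /* each link chosen with prob. 1/n */

        if (is_master_on(link)) {
            send_poll_or_data(link);
            wait_for_slave_response(link);
        } else if (listen_for_poll(link)) {
            receive_and_respond(link);
        }
        /* otherwise immediately move on to the next randomly chosen link */
    }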
SIMULATION RESULTS
First, we evaluate the algorithm in a realistic usage scenario, which
is the Network Access Point (NAP) scenario. Next we investigate
theoretical configurations and obtain asymptotic results that reveal
the scaling properties of the algorithm. For instance, we investigate
the carried traffic as a function of the number of forwarding
hops along the path and as a function of the bridging degree. Both in the
realistic and theoretical configurations we relate the performance of
the PCSS scheme to the performance of the ICSS and UGSS reference
algorithms. Before presenting the scenarios and simulation
results we briefly describe the simulation environment and define
the performance metrics that are going to be measured during the
simulations.
7.1
Simulation Environment
We have developed a Bluetooth packet level simulator, which is
based on the Plasma simulation environment [4]. The simulator
has a detailed model of all the packet transmission and reception procedures
in the Bluetooth Baseband, including packet buffering, upper
layer packet segmentation/reassembly, the ARQ mechanism,
etc. The simulator supports all Bluetooth packet types and follows
the same master-slave slot structure as in Bluetooth. For the physical
layer we employ a simplified analytical model that captures the
frequency collision effect of interfering piconets.
In the current simulations the connection establishment procedures,
e.g., the inquiry and page procedures are not simulated in detail and
we do not consider dynamic scatternet formation either. Instead we
perform simulations in static scatternet configurations where the
scatternet topology is kept constant during one particular run of
simulation.
In the current simulations we run IP directly on top of the Bluetooth
link layer and we apply AODV as the routing protocol in the
IP layer. The simulator also includes various implementations of
the TCP protocol (we employed RenoPlus) and supports different
TCP/IP applications, from which we used TCP bulk data transfer
in the current simulations.
One of the most important user perceived performance measures is
the achieved throughput. We are going to investigate the throughput
in case of bulk TCP data transfer and in case of Constant Bit Rate
(CBR) sources.
In order to take into account the power consumption of nodes we
define the activity ratio of a node, r_act, as the fraction of time when
the node has been active over the total elapsed time; and the power
efficiency, p_eff, as the ratio of the number of user bytes successfully
communicated (transmitted and received) to the total time
the node has been active. The power efficiency shows the number
of user bytes that can be communicated by the node during an
active period of length 1 sec. Power efficiency can be measured
in [kbit/sec], or, assuming that being active for 1 sec consumes 1
unit of energy, we can use the more instructive dimension of
[kbit/energy unit], which is interpreted as the number of bits that
can be transmitted while consuming one unit of energy.
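The two metrics reduce to simple ratios; a minimal sketch with illustrative names, under the stated convention that one second of activity costs one energy unit:

    /* r_act = t_active / t_total (both in seconds). */
    double activity_ratio(double t_active, double t_total)
    {
        return t_active / t_total;
    }

    /* p_eff = user kbits communicated per second of activity,
       i.e. [kbit/energy unit] under the 1 s = 1 unit convention. */
    double power_efficiency(double user_kbits, double t_active)
    {
        return user_kbits / t_active;
    }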
7.2
Network Access Point Scenario
In this scenario we have a NAP that is assumed to be connected to
a wired network infrastructure and it provides network access via
its Bluetooth radio interface. The NAP acts as a master and up to 7
laptops, all acting as slaves, can connect to the NAP. Furthermore
we assume that each laptop has a Bluetooth enabled mouse and
each laptop connects to its mouse by forming a new piconet, as
shown in Figure 6.
We simulate a bulk TCP data transfer from the NAP towards each
laptop separately. Regarding the traffic generated by the mouse we
assume that the mouse produces a 16-byte-long packet every 50 ms,
periodically.
Figure 6: Network Access Point Scenario
In the NAP-laptop communication we are interested
in the achieved throughput while in the laptop-mouse communication
we are concerned with the delay perceived by the mouse.
In the current scenario we switched off the dynamic checking period
adjustment capability of the PCSS algorithm and we set the base
checking period to 32 frames (40 ms), which is in accordance with
the delay requirement of a mouse. Note that this same base checking
period is applied also on the NAP-laptop links, although, there
is no delay requirement for the TCP traffic. However, the current
implementation in the simulator does not yet support the setting of
the base checking periods for each link separately. The dynamic
checking period adjustment would definitely improve the throughput
of NAP-laptop communication, as we will see later for
other configurations.
The simulation results are shown in Figure 7. In plot (a) the average
throughput of the NAP-laptop communications is shown as a
function of the number of laptops for the different algorithms.
Graph (b) plots the sum of the throughputs between the
NAP and all laptops. As we expect, the individual laptop throughput
decreases as the number of laptops increases. However, it is
important to notice that the sum of laptop throughputs does not decrease
with increasing number of laptops in case of the PCSS and
ICSS algorithms. As the number of laptops increases the efficient
coordination becomes more important and the total carried traffic
will decrease with the uncoordinated UGSS scheme. The increase
of the total throughput in case of the PCSS algorithm is the consequence
of the fixed checking intensities, which allocate one half
of a laptop's capacity to the mouse and the other half to the NAP. In
case of a small number of laptops this prevents the laptops from fully
utilizing the NAP capacity, which improves as the number of laptops
increases.
The 99th percentile of the delay seen by mouse packets is shown in
plot (c). The delay that can be provided with the PCSS algorithm
is determined by the base checking period that we use. Recall that
in the current setup the base checking period of the PCSS scheme
was set to 32 frames, which implies that the delay has to be on the
order of 32 frames, as shown in the figure. The low delay with the
UGSS algorithm is due to the continuous switching among the links
of a node, which ensures high polling intensity within a piconet
and frequent switching between piconets. The UGSS algorithm
provides an unnecessarily low delay, below the delay
requirement, at the expense of higher power consumption.
Plots (d) and (e) show the averaged activity ratio over all laptops
and mice, respectively. The considerably higher throughput
achieved for a small number of laptops by the ICSS scheme explains
its higher activity ratio. On graph (f) the average power efficiency
of laptops is shown, which relates the number of bytes transmitted
to the total time of activity. The power efficiency of the PCSS
scheme decreases with increasing number of laptops, which is a
consequence of the fixed checking intensities. Since the NAP has
to share its capacity among the laptops, with an increasing number
of laptops there will be an increasing number of checkpoints where
the NAP cannot show up. In such cases the dynamic checking intensity
adjustment procedure could help by decreasing checking
intensity on the NAP-laptop links. Recall that in the current scenario
we employed fixed checking intensities in order to satisfy the
mouse delay requirement. It is also important to notice that with the
uncoordinated UGSS scheme the activity ratio of a mouse is relatively
high, which is an important drawback considering the low
power capabilities of such devices.
7.3
Impact of Number of Forwarding Hops
In what follows, we investigate the performance impact of the number
of forwarding hops along the communication path in the scatternet
configuration shown in Figure 8. The configuration consists
of a chain of S/M forwarding nodes (F_i) and a certain number of
additional node pairs connected to each forwarding node in order to
generate background traffic. The number of S/M forwarding nodes
is denoted by N_F. There are N_B background node pairs
connected to each forwarding node as masters. The background
traffic flows from each source node B^(S)_ij to its destination pair
B^(D)_ij through the corresponding forwarding node F_i. The traffic
that we are interested in is a bulk TCP data transfer between nodes
S and D. The background traffic is a CBR source, which generates
512-byte-long IP packets with a period of 0.05 sec.
Figure 8: Impact of number of forwarding nodes (a chain of forwarding nodes F_1, ..., F_NF between S and D, with background node pairs B^(S)_ij and B^(D)_ij attached to each forwarding node)
During the simulations we vary the number of forwarding hops N_F
and the number of background node pairs N_B connected to each
forwarding node. As one would expect, with an increasing number of
forwarding hops and background node pairs the coordinated algorithms
perform significantly better than the one without any
coordination (UGSS).
The throughput of the S-D traffic as a function of the number of
forwarding nodes (N_F) without background traffic (N_B = 0) and
with two pairs of background nodes (N_B = 2) is shown in Figure
9 (a) and (b), respectively. The throughput in case of no cross
traffic drops roughly by half when we introduce the first forwarding
node. Adding additional forwarding hops continuously reduces
the throughput; however, the decrease at each step is less drastic.
We note that in case of the ICSS scheme one would expect
that for N_F > 1 the throughput should not decrease when adding
additional forwarding hops. However, there are a number of other
additional forwarding hops. However, there are a number of other
effects besides the number of forwarding hops that decrease the
throughput. For instance, with an increasing number of forwarding
hops the number of piconets in the same area increases, which,
in turn, causes an increasing number of lost packets over the radio
interface due to frequency collisions. Furthermore, with an increasing
number of hops the end-to-end delay suffered by the TCP flow increases,
which makes the TCP connection less able to recover quickly
from packet losses.
In the no background traffic case the PCSS scheme performs close
to the UGSS algorithm in terms of throughput. However, as we
introduce two pairs of background nodes the UGSS algorithm fails
completely, while the PCSS scheme still achieves approximately 20
kbit/sec throughput. Furthermore, the power efficiency of the PCSS
scheme is an order of magnitude higher than that of the UGSS algorithm
in both cases, which indicates that the PCSS algorithm consumes
significantly less power to transmit the same amount of data
than the UGSS scheme.
7.4
Impact of Bridging Degree
Next we investigate the performance of the scheduling algorithms as
the number of piconets that a bridging node participates in is increased.
The scatternet setup that we consider is shown in Figure 10,
where we are interested in the performance of the bridging node
C. Node C is an all-slave bridging node and it is connected to master
nodes P_i, where the number of these master nodes is denoted
by N_P. To each master node P_i we connect N_L leaf
nodes as slaves in order to generate additional background load in
the piconets. We introduce bulk TCP data transfer from node C
towards each of its master nodes P_i and CBR background traffic
on each L_ij - P_i link. The packet generation interval for background
sources was set to 0.25 sec, which corresponds to a 16
kbit/sec stream. During the simulation we vary the number of piconets
N_P that node C participates in and investigate the performance
of the PCSS algorithm with and without dynamic checkpoint intensity
changes. The number of background nodes N_L connected to
each master node P_i was set to N_L = 3 and it was kept constant in
the simulations.
Figure 10: Impact of the number of participated piconets (node C is connected as a slave to master nodes P_1, ..., P_NP, each of which has N_L leaf slaves L_ij)
The throughputs of the TCP flows between node C and each P_i are
averaged and shown in Figure 11 (a). The sum of the TCP throughputs
is plotted in graph (b) and the power efficiency of the central
node is shown in graph (c). The PCSS algorithm has been tested
both with a fixed base checking period equal to 32 frames ("PCSS-32")
and with dynamic checking intensity changes ("PCSS-dyn").
The parameter settings of the dynamic case are shown in Table 1.
q_uti = 0.7          N_sample,min = 4     ρ_lower = 0.3     ρ_upper = 0.7
q^(node)_uti = 0.7   N_uti,win = 10       ρ^(node)_max = 0.8
T_min = 8            T_max = 256
Table 1: Parameter setting of the dynamic PCSS scheme
Figure 7: Throughput, delay and power measures as a function of the number of laptops connected to the NAP. Panels: (a) TCP throughput per laptop [kbit/s]; (b) sum TCP throughput of laptops [kbit/s]; (c) 0.99 percentile of mouse delay [sec]; (d) activity ratio of laptops; (e) activity ratio of mice; (f) power efficiency of laptops [kbit/energy unit]. Curves: PCSS, UGSS, ICSS.
Figure 9: Throughput and power efficiency as a function of the number of forwarding hops. Panels: (a) TCP throughput without background nodes (N_B=0) [kbit/s]; (b) TCP throughput with 2 pairs of background nodes (N_B=2) [kbit/s]; (c) power efficiency of forwarding nodes (N_B=2) [kbit/energy unit]; all versus the number of forwarding nodes (N_F). Curves: PCSS, UGSS, ICSS.
It is important to notice that the per-flow TCP throughputs in case
of the dynamic PCSS scheme match quite closely the throughput
achieved by the ICSS algorithm and significantly exceed the
throughput achieved by the fixed PCSS. This large
difference is due to the relatively low background traffic in the
neighbouring piconets of node C, in which case the dynamic PCSS
automatically reduces checkpoint intensity on the lightly loaded
links and allocates more bandwidth to the highly loaded ones by
increasing checking intensity.
CONCLUSIONS
We have presented Pseudo Random Coordinated Scatternet
Scheduling, an algorithm that can efficiently control communication
in Bluetooth scatternets without exchange of control information
between Bluetooth devices. The algorithm relies on two key
components, namely the use of pseudo random sequences of meeting
points that eliminate systematic collisions, and a set of rules
that govern the increase and decrease of meeting point intensity
without explicit coordination.
We have evaluated the performance of PCSS in a number of simulation
scenarios, where we have compared throughput and power
measures achieved by PCSS to those achieved by two reference
schedulers.
The first reference scheduler is an uncoordinated
greedy algorithm, while the other is a hypothetical "ideal" scheduler.
In all the scenarios investigated we have found that PCSS achieves
higher throughput than the uncoordinated reference algorithm.
Moreover, with the traffic dependent meeting point intensity adjustments
the throughput and power measures of PCSS quite closely
match the results of the "ideal" reference algorithm. At the same
time PCSS consumes approximately the same amount of power as
the ideal scheduler to achieve the same throughput, which is significantly
less than the power consumption of the uncoordinated
reference scheduler.
REFERENCES
[1] Bluetooth Special Interest Group. Bluetooth Baseband
Specification Version 1.0 B. http://www.bluetooth.com/.
Figure 11: Throughput and power efficiency as a function of the bridging degree of node C. Panels: (a) averaged TCP throughput between the central node and the master nodes [kbit/s]; (b) sum of TCP throughputs at the central node [kbit/s]; (c) power efficiency of the central node [kbit/energy unit]; all versus the number of piconets participated in by the central node (N_P). Curves: PCSS-32, PCSS-dyn, UGSS, ICSS.
[2] Bluetooth Special Interest Group.
http://www.bluetooth.com/.
[3] J. Haartsen. BLUETOOTH- the universal radio interface for
ad-hoc, wireless connectivity. Ericsson Review, (3), 1998.
[4] Z. Haraszti, I. Dahlquist, A. Farago, and T. Henk. Plasma an
integrated tool for ATM network operation. In Proc.
International Switching Symposium, 1995.
[5] N. Johansson, U. Korner, and P. Johansson. Performance
evaluation of scheduling algorithms for Bluetooth. In IFIP
TC6 WG6.2 Fifth International Conference on Broadband
Communications (BC'99), Hong Kong, November 1999.
[6] N. Johansson, U. Korner, and L. Tassiulas. A distributed
scheduling algorithm for a Bluetooth scatternet. In Proc. of
The Seventeenth International Teletraffic Congress, ITC'17,
Salvador da Bahia, Brazil, September 2001.
[7] P. Johansson, N. Johansson, U. Korner, J. Elgg, and
G. Svennarp. Short range radio based ad hoc networking:
Performance and properties. In Proc. of ICC'99, Vancouver,
1999.
[8] M. Kalia, D. Bansal, and R. Shorey. MAC scheduling and
SAR policies for Bluetooth: A master driven TDD
pico-cellular wireless system. In IEEE Mobile Multimedia
Communications Conference MOMUC'99, San Diego,
November 1999.
[9] M. Kalia, D. Bansal, and R. Shorey. MAC scheduling
policies for power optimization in Bluetooth: A master
driven TDD wireless system. In IEEE Vehicular Technology
Conference 2000, Tokyo, 2000.
[10] M. Kalia, S. Garg, and R. Shorey. Efficient policies for
increasing capacity in Bluetooth: An indoor pico-cellular
wireless system. In IEEE Vehicular Technology Conference
2000, Tokyo, 2000.
APPENDIX
Here, we present the procedure for generating the pseudo random
sequence of checkpoints, where we reuse the elements of
the pseudo random frequency hop generation procedure available
in Bluetooth. The inputs to the checkpoint generation procedure
PseudoChkGen are the current checking period T^(i)_check, the Bluetooth
MAC address of the slave A_slave and the current value of the
master's clock t^(i). A node can perform checkpoint generation using
the PseudoChkGen procedure at any point in time; it is always
guaranteed that the position of the checkpoint generated by the
two nodes will be the same, as has been pointed out in Section
5.1. Nevertheless, the typical case is that whenever a node arrives
at a checkpoint it generates the position of the next checkpoint
on the given link. The variable t^(i)_check always stores the master's
clock at the next checkpoint, thus it needs to be updated every time
a checkpoint is passed. Here we note that the Bluetooth clock of a
device is a 28-bit counter, whose LSB changes at every half slot.
Let us assume that the base period of checkpoints on the i-th link of
the node is T^(i)_check = 2^(j-2) frames, j > 2, which means
that there is one pseudo randomly positioned checkpoint in each
consecutive time interval of length T^(i)_check and that the j-th bit of the
Bluetooth clock changes once every T^(i)_check. Upon arrival at a checkpoint
the variable t^(i)_check equals the current value of the master's
clock on that link. After the checkpoint generation procedure has
been executed the variable t^(i)_check will store the master's clock at
the time of the next checkpoint on that link.
Before starting the procedure the variable t^(i)_check is set to the current
value of the master's clock t^(i) in order to cover the general
case when, at the time of generating the next checkpoint, the value
of t^(i)_check does not necessarily equal the current value of the
master's clock t^(i). The position of the next checkpoint is obtained
such that the node first adds the current value of T^(i)_check to the variable
t^(i)_check, clears the bits [j-1, ..., 0] of t^(i)_check and
then generates the bits [j-1, ..., 2] one by one using the procedure
PseudoBitGen(X, W_ctrl). When generating the k-th bit
(j-1 >= k >= 2) the clock bits X = t^(i)_check[k+1, ..., k+5] are fed
as inputs to the PseudoBitGen procedure, while the control word
W_ctrl is derived from t^(i)_check, including the bits already generated,
and from the MAC address of the slave A_slave. The schematic view
of generating the clock bits of the next checkpoint is illustrated in
Figure 12.
Figure 12: Generating the clock bits of the next checkpoint (clock bits k+1, ..., k+5 form the input X of PseudoBitGen, which produces bit k under the control word W_ctrl)
The PseudoBitGen procedure is based on the pseudo random
scheme used for frequency hop selection in Bluetooth. However,
before presenting the PseudoBitGen procedure we give the
pseudo-code of the PseudoChkGen procedure.
PseudoChkGen procedure:
    t^(i): the current value of the master's clock;
    T^(i)_check = 2^(j-2), j > 2: current length of the base checking period
        in number of frames.

    t^(i)_check = t^(i);
    t^(i)_check[j-1, ..., 0] = 0;
    t^(i)_check = t^(i)_check + T^(i)_check;
    k = j - 1;
    while (k >= 2)
        X[0, ..., 4] = t^(i)_check[k+1, ..., k+5];
        t^(i)_check[k] = PseudoBitGen(X, W_ctrl);
        k = k - 1;
    end
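A compilable rendering of this pseudo-code is sketched below. PseudoBitGen is only declared, and both the recovery of j from T^(i)_check and the frames-to-clock-ticks conversion (one frame = two slots = four half-slot ticks, so 2^(j-2) frames correspond to 2^j clock ticks) are our reading of the text, not code from the paper:

    #include <stdint.h>

    /* Returns one pseudo random bit; assumed to derive its control words from
       t_check, A_slave and the bit index k (see Figure 13). */
    unsigned PseudoBitGen(unsigned X, uint32_t t_check, uint32_t A_slave, int k);

    uint32_t PseudoChkGen(uint32_t T_check_frames, uint32_t A_slave, uint32_t t_master)
    {
        /* T_check = 2^(j-2) frames; recover j, the clock bit that toggles
           once per base checking period. */
        unsigned j = 2;
        while ((1u << (j - 2)) < T_check_frames)
            j++;

        uint32_t t_check = t_master & 0x0FFFFFFFu;   /* 28-bit Bluetooth clock */
        t_check &= ~((1u << j) - 1u);                /* clear bits [j-1 .. 0]  */
        t_check += 1u << j;                          /* advance by one period  */

        for (int k = (int)j - 1; k >= 2; k--) {
            unsigned X = (t_check >> (k + 1)) & 0x1Fu;        /* bits k+1..k+5 */
            unsigned bit = PseudoBitGen(X, t_check, A_slave, k) & 1u;
            t_check = (t_check & ~(1u << k)) | ((uint32_t)bit << k);
        }
        return t_check & 0x0FFFFFFFu;
    }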
Finally, we discuss the PseudoBitGen procedure, which is illustrated
in Figure 13.
Figure 13: The PseudoBitGen procedure (the 5-bit input X is combined with the control words A, B, C and D through XOR, the PERM5 butterfly permutation and an add-mod-32 stage; the bit selector picks bit V[k mod 5] of the 5-bit result V)
The control words of the PseudoBitGen procedure,
W_ctrl = {A, B, C, D}, are the same as the control words of
the frequency hop selection scheme in Bluetooth and they are
shown in Table 2. However, the input X and the additional
bit selection operator at the end are different. As discussed
above, the input X changes depending on which
bit of the checkpoint is going to be generated. When generating
the k-th clock bit of the next checkpoint the clock bits
X = t^(i)_check[k+1, ..., k+5] are fed as inputs and the bit
selection operator at the end selects the (k mod 5)-th bit of the
5-bit-long output V.
A    A_slave[27-23] XOR t^(i)_check[25-21]
B    B[0-3] = A_slave[22-19], B[4] = 0
C    A_slave[8, 6, 4, 2, 0] XOR t^(i)_check[20-16]
D    A_slave[18-10] XOR t^(i)_check[15-7]
Table 2: Control words
The operation PERM5 is a butterfly permutation, which is the
same as in the frequency hop selection scheme of Bluetooth and
it is described in Figure 14. Each bit of the control word P is
associated with a given bit exchange in the input word. If the
given bit of the control word equals 1 the corresponding bit exchange
is performed, otherwise it is skipped. The control word P is
obtained from C and D such that P[i] = D[i], i = 0, ..., 8, and
P[j+9] = C[j], j = 0, ..., 4.
Figure 14: Butterfly permutation (bit exchanges on Z[0], ..., Z[4] controlled by the bit pairs P[13,12], P[11,10], P[9,8], P[7,6], P[5,4], P[3,2], P[1,0])
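For illustration only, a generic controlled butterfly network of this form is sketched below; the pairing table is a placeholder and does not reproduce the actual exchange wiring of Figure 14 or of the Bluetooth hop-selection kernel:

    /* Butterfly permutation of the 5-bit word Z under the 14-bit control word P:
       each control bit, when set, swaps one fixed pair of bit positions in Z. */
    static const unsigned char perm5_pairs[14][2] = {
        /* placeholder wiring, NOT the real Bluetooth exchange order */
        {0,1},{2,3},{1,2},{3,4},{0,4},{1,3},{0,2},
        {3,4},{1,2},{2,4},{0,3},{1,4},{0,1},{2,3}
    };

    unsigned perm5(unsigned Z, unsigned P)       /* Z: 5 bits, P: 14 bits */
    {
        for (int i = 13; i >= 0; i--) {          /* apply exchanges in a fixed order */
            if ((P >> i) & 1u) {
                unsigned a = perm5_pairs[i][0], b = perm5_pairs[i][1];
                unsigned za = (Z >> a) & 1u, zb = (Z >> b) & 1u;
                if (za != zb)
                    Z ^= (1u << a) | (1u << b);  /* swap the two bits */
            }
        }
        return Z;
    }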
203 | checkpoint;total utilization;piconets;threshold;scatternet;PCSS algorithm;Bluetooth;slaves;inter-piconet communication;scheduling;intensity;Network Access Point;bridging unit |
170 | Run-Time Dynamic Linking for Reprogramming Wireless Sensor Networks | From experience with wireless sensor networks it has become apparent that dynamic reprogramming of the sensor nodes is a useful feature. The resource constraints in terms of energy, memory, and processing power make sensor network reprogramming a challenging task. Many different mechanisms for reprogramming sensor nodes have been developed ranging from full image replacement to virtual machines. We have implemented an in-situ run-time dynamic linker and loader that use the standard ELF object file format. We show that run-time dynamic linking is an effective method for reprogramming even resource constrained wireless sensor nodes. To evaluate our dynamic linking mechanism we have implemented an application-specific virtual machine and a Java virtual machine and compare the energy cost of the different linking and execution models. We measure the energy consumption and execution time overhead on real hardware to quantify the energy costs for dynamic linking. Our results suggest that while in general the overhead of a virtual machine is high, a combination of native code and virtual machine code provide good energy efficiency. Dynamic run-time linking can be used to update the native code, even in heterogeneous networks. | Introduction
Wireless sensor networks consist of a collection of programmable
radio-equipped embedded systems. The behavior
of a wireless sensor network is encoded in software running
on the wireless sensor network nodes. The software in
deployed wireless sensor network systems often needs to be
changed, both to update the system with new functionality
and to correct software bugs. For this reason dynamic
reprogramming of wireless sensor networks is an important
feature. Furthermore, when developing software for wireless
sensor networks, being able to update the software of a
running sensor network greatly helps to shorten the development
time.
The limitations of communication bandwidth, the limited
energy of the sensor nodes, the limited sensor node memory,
which typically is on the order of a few thousand bytes, the absence
of memory mapping hardware, and the limited
processing power make reprogramming of sensor network
nodes challenging.
Many different methods for reprogramming sensor nodes
have been developed, including full system image replacement
[14, 16], approaches based on binary differences [15,
17, 31], virtual machines [18, 19, 20], and loadable native
code modules in the first versions of Contiki [5] and
SOS [12]. These methods are either inefficient in terms of
energy or require non-standard data formats and tools.
The primary contribution of this paper is that we investigate
the use of standard mechanisms and file formats for
reprogramming sensor network nodes. We show that in-situ
dynamic run-time linking and loading of native code using
the ELF file format, which is a standard feature on many operating
systems for PC computers and workstations, is feasible
even for resource-constrained sensor nodes. Our secondary
contribution is that we measure and quantify the energy
costs of dynamic linking and execution of native code
and compare it to the energy cost of transmission and execution
of code for two virtual machines: an application-specific
virtual machine and the Java virtual machine.
We have implemented a dynamic linker in the Contiki operating
system that can link, relocate, and load standard ELF
object code files. Our mechanism is independent of the particular
microprocessor architecture on the sensor nodes and
we have ported the linker to two different sensor node platforms
with only minor modifications to the architecture dependent
module of the code.
To evaluate the energy costs of the dynamic linker we implement
an application specific virtual machine for Contiki
together with a compiler for a subset of Java. We also adapt
the Java virtual machine from the lejOS system [8] to run under
Contiki. We measure the energy cost of reprogramming
and executing a set of programs using dynamic linking of native
code and the two virtual machines. Using the measurements
and a simple energy consumption model we calculate
break-even points for the energy consumption of the different
mechanisms. Our results suggest that while the execution
time overhead of a virtual machine is high, a combination of
native code and virtual machine code may give good energy
efficiency.
The remainder of this paper is structured as follows. In
Section 2 we discuss different scenarios in which reprogramming
is useful. Section 3 presents a set of mechanisms for
executing code inside a sensor node and in Section 4 we discuss
loadable modules and the process of linking, relocating
, and loading native code. Section 5 describes our implementation
of dynamic linking and our virtual machines. Our
experiments and results are presented in Section 6 and
the results are discussed in Section 7. Related work is reviewed in
Section 8. Finally, we conclude the paper in Section 9.
Scenarios for Software Updates
Software updates for sensor networks are necessary for a
variety of reasons ranging from implementation and testing
of new features of an existing program to complete reprogramming
of sensor nodes when installing new applications.
In this section we review a set of typical reprogramming scenarios
and compare their qualitative properties.
2.1
Software Development
Software development is an iterative process where code
is written, installed, tested, and debugged in a cyclic fashion
. Being able to dynamically reprogram parts of the sensor
network system helps shorten the time of the development
cycle. During the development cycle developers typically
change only one part of the system, possibly only a single
algorithm or a function. A sensor network used for software
development may therefore see large amounts of small
changes to its code.
2.2
Sensor Network Testbeds
Sensor network testbeds are an important tool for development
and experimentation with sensor network applications
. New applications can be tested in a realistic setting
and important measurements can be obtained [36]. When a
new application is to be tested in a testbed the application
typically is installed in the entire network. The application
is then run for a specified time, while measurements are collected
both from the sensors on the sensor nodes, and from
network traffic.
For testbeds that are powered from a continuous energy
source, the energy consumption of software updates is only
of secondary importance. Instead, qualitative properties such
as ease of use and flexibility of the software update mechanism
are more important. Since the time required to make an
update is important, the throughput of a network-wide software
update is of importance. As the size of the transmitted
binaries impacts the throughput, the binary size can still be
used as an evaluation metric for systems where throughput is
more important than energy consumption.
Scenario              Update frequency   Update fraction   Update level   Program longevity
Development           Often              Small             All            Short
Testbeds              Seldom             Large             All            Long
Bug fixes             Seldom             Small             All            Long
Reconfig.             Seldom             Small             App            Long
Dynamic Application   Often              Small             App            Long
Table 1. Qualitative comparison between different reprogramming
scenarios.
2.3
Correction of Software Bugs
The need for correcting software bugs in sensor networks
was identified early on [7]. Even after careful testing, new bugs
can occur in deployed sensor networks, caused by, for example,
an unexpected combination of inputs or variable link
connectivity that stimulates untested control paths in the communication
software [30].
Software bugs can occur at any level of the system. To
correct bugs it must therefore be possible to reprogram all
parts of the system.
2.4
Application Reconfiguration
In an already installed sensor network, the application
may need to be reconfigured. This includes changes of parameters,
or small changes in the application such as changing
from absolute temperature readings to notification when
thresholds are exceeded [26]. Even though reconfiguration does
not necessarily include software updates [25], application reconfiguration
can be done by reprogramming the application
software. Hence software updates can be used in an application
reconfiguration scenario.
2.5
Dynamic Applications
There are many situations where it is useful to replace the
application software of an already deployed sensor network.
One example is the forest fire detection scenario presented by
Fok et al. [9] where a sensor network is used to detect a fire.
When the fire detection application has detected a fire, the
fire fighters might want to run a search and rescue application
as well as a fire tracking application. While it may be possible
to host these particular applications on each node despite
the limited memory of the sensor nodes, this approach is not
scalable [9]. In this scenario, replacing the application on the
sensor nodes leads to a more scalable system.
2.6
Summary
Table 1 compares the different scenarios and their properties.
Update fraction refers to how much of the system
needs to be updated for every update, update level to
the levels of the system at which updates are likely to occur, and
program longevity to how long an installed program is
expected to reside on the sensor node.
Code Execution Models and Reprogramming
Many different execution models and environments have
been developed or adapted to run on wireless sensor nodes.
Some were developed with the aim of facilitating programming [1], others
were motivated by the potential of saving energy costs for reprogramming,
enabled by the compact code representation of
virtual machines [19]. The choice of the execution model
directly impacts the data format and size of the data that
needs to be transported to a node. In this section we discuss
three different mechanisms for executing program code
inside each sensor node: script languages, virtual machines,
and native code.
3.1
Script Languages
There are many examples of script languages for embedded
systems, including BASIC variants, Python interpreters
[22], and TCL machines [1]. However, most script
interpreters target platforms with much more resources than
our target platforms and we have therefore not included them
in our comparison.
3.2
Virtual Machines
Virtual machines are a common approach to reduce the
cost of transmitting program code in situations where the
cost of distributing a program is high. Typically, program
code for a virtual machine can be made more compact than
the program code for the physical machine. For this reason
virtual machines are often used for programming sensor networks
[18, 19, 20, 23].
While many virtual machines such as the Java virtual machine
are generic enough to perform well for a variety of
different types of programs, most virtual machines for sensor
networks are designed to be highly configurable in order
to allow the virtual machine to be tailored for specific applications
. In effect, this means that parts of the application
code is implemented as virtual machine code running on the
virtual machine, and other parts of the application code is implemented
in native code that can be used from the programs
running on the virtual machine.
3.3
Native Code
The most straightforward way to execute code on sensor
nodes is by running native code that is executed directly by
the microcontroller of the sensor node. Installing new native
code on a sensor node is more complex than installing code
for a virtual machine because the native code uses physical
addresses which typically need to be updated before the program
can be executed. In this section we discuss two widely
used mechanisms for reprogramming sensor nodes that execute
native code: full image replacement and approaches
based on binary differences.
3.3.1
Full Image Replacement
The most common way to update software in embedded
systems and sensor networks is to compile a complete new
binary image of the software together with the operating system
and overwrite the existing system image of the sensor
node. This is the default method used by the XNP and Deluge
network reprogramming software in TinyOS [13].
The full image replacement does not require any additional
processing of the loaded system image before it is
loaded into the system, since the loaded image resides at the
same, known, physical memory address as the previous system
image. For some systems, such as the Scatterweb system
code [33], the system contains both an operating system image
and a small set of functions that provide functionality
for loading new operating system images. A new operating
system image can overwrite the existing image without overwriting
the loading functions. The addresses of the loading
functions are hard-coded in the operating system image.
3.3.2
Diff-based Approaches
Often a small update in the code of the system, such as
a bugfix, will cause only minor differences between the
new and old system image. Instead of distributing a new
full system image the binary differences, deltas, between the
modified and original binary can be distributed. This reduces
the amount of data that needs to be transferred. Several types
of diff-based approaches have been developed [15, 17, 31]
and it has been shown that the size of the deltas produced by
the diff-based approaches is very small compared to the full
binary image.
4
Loadable Modules
A less common alternative to full image replacement and
diff-based approaches is to use loadable modules to perform
reprogramming. With loadable modules, only parts of
the system need to be modified when a single program is
changed. Typically, loadable modules require support from
the operating system. Contiki and SOS are examples of systems
that support loadable modules and TinyOS is an example
of an operating system without loadable module support.
A loadable module contains the native machine code of
the program that is to be loaded into the system. The machine
code in the module usually contains references to functions
or variables in the system. These references must be
resolved to the physical address of the functions or variables
before the machine code can be executed. The process of
resolving those references is called linking. Linking can be
done either when the module is compiled or when the module
is loaded. We call the former approach pre-linking and
the latter dynamic linking. A pre-linked module contains
the absolute physical addresses of the referenced functions
or variables whereas a dynamically linked module contains
the symbolic names of all system core functions or variables
that are referenced in the module. This information increases
the size of the dynamically linked module compared to the
pre-linked module. The difference is shown in Figure 1. Dynamic
linking has not previously been considered for wireless
sensor networks because of the perceived run-time overhead
in terms of execution time, energy consumption,
and memory requirements.
The machine code in the module usually contains references
not only to functions or variables in the system, but
also to functions or variables within the module itself. The
physical address of those functions will change depending
on the memory address at which the module is loaded in the
system. The addresses of the references must therefore be
updated to the physical address that the function or variable
will have when the module is loaded. The process of updating
these references is known as relocation. Like linking,
relocation can be done either at compile-time or at run-time.
When a module has been linked and relocated the program
loader loads the module into the system by copying the
linked and relocated native code into a place in memory from
where the program can be executed.

[Figure 1: the core provides memcpy() at address 0x0237 and radio_send()
at 0x1720; the pre-linked module contains call 0x0237 and call 0x1720,
while the module with dynamic linking information contains call 0x0000
placeholders together with the symbolic names memcpy and radio_send.]

Figure 1. The difference between a pre-linked module
and a module with dynamic linking information: the pre-linked
module contains physical addresses whereas the
dynamically linked module contains symbolic names.
4.1
Pre-linked Modules
The machine code of a pre-linked module contains absolute
addresses of all functions and variables in the system
code that are referenced by the module. Linking of the module
is done at compile time and only relocation is performed
at run-time. To link a pre-linked module, information about
the physical addresses of all functions and variables in the
system into which the module is to be loaded must be available
at compile time.
There are two benefits of pre-linked modules over dynamically
linked modules. First, pre-linked modules are smaller
than dynamically linked modules which results in less information
to be transmitted. Second, the process of loading a
pre-linked module into the system is less complex than the
process of linking a dynamically linked module. However,
the fact that all physical addresses of the system core are
hard-coded in the pre-linked module is a severe drawback as
a pre-linked module can only be loaded into a system with
the exact same physical addresses as the system that was used to
generate the list of addresses used for linking the
module.
In the original Contiki system [5] we used pre-linked binary
modules for dynamic loading. When compiling the
Contiki system core, the compiler generated a map file containing
the mapping between all globally visible functions
and variables in the system core and their addresses. This
list of addresses was used to pre-link Contiki modules.
We quickly noticed that while pre-linked binary modules
worked well for small projects with a homogeneous set
of sensor nodes, the system quickly became unmanageable
when the number of sensor nodes grew. Even a small change
to the system core of one of the sensor nodes would make it
impossible to load a binary module into the system because
the addresses of variables and functions in the core were different
from when the program was linked. We used version
numbers to guard against this situation. Version numbers did
help against system crashes, but did not solve the general
problem: new modules could not be loaded into the system.
4.2
Dynamic Linking
With dynamic linking, the object files do not only contain
code and data, but also the names of functions and variables
of the system core that are referenced by the module. The
code in the object file cannot be executed before the physical
addresses of the referenced variables and functions have
been filled in. This process is done at run time by a dynamic
linker.
In the Contiki dynamic linker we use two file formats for
the dynamically linked modules, ELF and Compact ELF.
4.2.1
ELF - Executable and Linkable Format
One of the most common object code formats for dynamic
linking is the Executable and Linkable Format (ELF) [3]. It
is a standard format for object files and executables that is
used for most modern Unix-like systems. An ELF object
file includes both program code and data as well as additional information
such as a symbol table, the names of all external
unresolved symbols, and relocation tables. The relocation
tables are used to locate the program code and data at other
places in memory than for which the object code originally
was assembled. Additionally, ELF files can hold debugging
information such as the line numbers corresponding to specific
machine code instructions, and file names of the source
files used when producing the ELF object.
ELF is also the default object file format produced by the
GCC utilities and for this reason there are a number of standard
software utilities for manipulating ELF files available.
Examples include debuggers, linkers, converters, and programs
for calculating program code and data memory sizes.
These utilities exist for a wide variety of platforms, including
MS Windows, Linux, Solaris, and FreeBSD. This is a clear
advantage over other solutions such as FlexCup [27], which
require specialized utilities and tools.
Our dynamic linker in Contiki understands the ELF format
and is able to perform dynamic linking, relocation, and
loading of ELF object code files. The debugging features of
the ELF format are not used.
4.2.2
CELF - Compact ELF
One problem with the ELF format is the overhead in terms
of bytes to be transmitted across the network, compared to
pre-linked modules. There are a number of reasons for the
extra overhead. First, ELF, as any dynamically relocatable
file format, includes the symbolic names of all referenced
functions or variables that need to be linked at run-time. Second
, and more important, the ELF format is designed to work
on 32-bit and 64-bit architectures. This causes all ELF data
structures to be defined with 32-bit data types. For 8-bit or
16-bit targets the high 16 bits of these fields are unused.
To quantify the overhead of the ELF format we devise an
alternative to the ELF object code format that we call CELF
- Compact ELF. A CELF file contains the same information
as an ELF file, but represented with 8 and 16-bit datatypes.
CELF files typically are half the size of the corresponding
ELF file. The Contiki dynamic loader is able to load CELF
files and a utility program is used to convert ELF files to
CELF files.
It is possible to further compress CELF files using lossless
data compression. However, we leave the investigation of the
energy-efficiency of this approach to future work.
The drawback of the CELF format is that it requires a
special utility for creating the CELF files. This
makes the CELF format less attractive for use in many real-world
situations.
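To make the size reduction concrete, the sketch below contrasts the standard 32-bit ELF symbol and relocation entries with hypothetical 16-bit counterparts of the kind a CELF-style format could use. The CELF structures shown here are an illustrative assumption, not the actual on-disk layout.

#include <stdint.h>

/* Standard ELF32 symbol and relocation entries (from the ELF
   specification): fields are sized for 32-bit targets. */
typedef struct {
  uint32_t st_name;   /* offset of the symbol name in the string table */
  uint32_t st_value;  /* symbol address */
  uint32_t st_size;
  uint8_t  st_info;
  uint8_t  st_other;
  uint16_t st_shndx;  /* section index */
} elf32_sym;          /* 16 bytes */

typedef struct {
  uint32_t r_offset;  /* place in code or data to patch */
  uint32_t r_info;    /* symbol index and relocation type */
} elf32_rel;          /* 8 bytes */

/* Hypothetical CELF-style entries for an 8/16-bit target: the same
   information represented with 8- and 16-bit types. */
typedef struct {
  uint16_t st_name;
  uint16_t st_value;
  uint16_t st_size;
  uint8_t  st_info;
  uint8_t  st_shndx;
} celf_sym;           /* 8 bytes */

typedef struct {
  uint16_t r_offset;
  uint16_t r_info;
} celf_rel;           /* 4 bytes */

Halving the width of every field in the symbol and relocation tables is what makes a CELF file roughly half the size of the corresponding ELF file.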
4.3
Position Independent Code
To avoid performing the relocation step when loading a
module, it is in some cases possible to compile the module
into position independent code. Position independent code is
a type of machine code which does not contain any absolute
addresses to itself, but only relative references. This is the
approach taken by the SOS system.
To generate position independent code compiler support
is needed. Furthermore, not all CPU architectures support
position independent code and even when supported, programs
compiled to position independent code typically are
subject to size restrictions. For example, the AVR microcontroller
supports position independent code but restricts the
size of programs to 4 kilobytes. For the MSP430 no compiler
is known to fully support position independent code.
5
Implementation
We have implemented run-time dynamic linking of ELF
and CELF files in the Contiki operating system [5]. To evaluate
dynamic linking we have implemented an application
specific virtual machine for Contiki together with a compiler
for a subset of Java, and have ported a Java virtual machine
to Contiki.
5.1
The Contiki Operating System
The Contiki operating system was the first operating system
for memory-constrained sensor nodes to support dynamic
run-time loading of native code modules. Contiki is
built around an event-driven kernel and has very low memory
requirements.
Contiki applications run as extremely
lightweight protothreads [6] that provide blocking operations
on top of the event-driven kernel at a very small memory
cost. Contiki is designed to be highly portable and has been
ported to over ten different platforms with different CPU architectures
and using different C compilers.
[Figure 2: RAM and ROM are each divided into the core (Contiki kernel,
device drivers, dynamic linker, symbol table, language run-time) and the
loaded program.]
Figure 2. Partitioning in Contiki: the core and loadable
programs in RAM and ROM.
A Contiki system is divided into two parts: the core and
the loadable programs as shown in Figure 2. The core consists
of the Contiki kernel, device drivers, a set of standard
applications, parts of the C language library, and a symbol
table. Loadable programs are loaded on top of the core and
do not modify the core.
The core has no information about the loadable programs,
except for information that the loadable programs explicitly
register with the core. Loadable programs, on the other hand,
have full knowledge of the core and may freely call functions
and access variables that reside in the core. Loadable
programs can call each other by going through the kernel.
The kernel dispatches calls from one loaded program to another
by looking up the target program in an in-kernel list of
active processes. This one-way dependency makes it possible
to load and unload programs at run-time without needing
to patch the core and without the need for a reboot when a
module has been loaded or unloaded.
While it is possible to replace the core at run-time by running
a special loadable program that overwrites the current
core and reboots the system, experience has shown that this
feature is not often used in practice.
5.2
The Symbol Table
The Contiki core contains a table with the symbolic names
of all externally visible variables and functions in the
Contiki core and their corresponding addresses. The table
includes not only the Contiki system, but also the C language
run-time library. The symbol table is used by the dynamic
linker when linking loaded programs.
The symbol table is created when the Contiki core binary
image is compiled. Since the core must contain a correct
symbol table, and a correct symbol table cannot be created
before the core exists, a three-step process is required to
compile a core with a correct symbol table. First, an intermediary
core image with an empty symbol table is compiled.
From the intermediary core image an intermediary symbol
table is created. The intermediary symbol table contains the
correct symbols of the final core image, but the addresses
of the symbols are incorrect. Second, a second intermediary
core image that includes the intermediary symbol table
is created. This core image now contains a symbol table of
the same size as the one in the final core image so the addresses
of all symbols in the core are now as they will be
in the final core image. The final symbol table is then created
from the second intermediary core image. This symbol
table contains both the correct symbols and their correct addresses
. Third, the final core image with the correct symbol
table is compiled.
The process of creating a core image is automated through
a simple make script. The symbol table is created using a
combination of standard ELF tools.
For a typical Contiki system the symbol table contains
around 300 entries which amounts to approximately 4 kilobytes
of data stored in flash ROM.
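A minimal sketch of how such a core symbol table can be represented and searched is shown below; the structure and function names are illustrative assumptions rather than the actual Contiki data structures.

#include <string.h>
#include <stddef.h>

/* Illustrative core symbol table: symbolic name -> address in the core.
   In a real system the table is generated at build time and placed in ROM. */
struct core_symbol {
  const char *name;   /* e.g. "memcpy", "radio_send" */
  void *address;      /* address of the function or variable in the core */
};

extern const struct core_symbol core_symbols[];
extern const int core_symbols_count;

/* Linear search over the roughly 300 entries; returns NULL if the
   symbol is not part of the core. */
void *core_symbol_lookup(const char *name)
{
  int i;
  for(i = 0; i < core_symbols_count; i++) {
    if(strcmp(core_symbols[i].name, name) == 0) {
      return core_symbols[i].address;
    }
  }
  return NULL;
}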
5.3
The Dynamic Linker
We implemented a dynamic linker for Contiki that is designed
to link, relocate, and load both standard ELF files [3]
and CELF (Compact ELF) files. The dynamic linker reads
ELF/CELF files through the Contiki virtual filesystem interface
, CFS, which makes the dynamic linker unaware of the
physical location of the ELF/CELF file. Thus the linker can
operate on files stored either in RAM, on-chip flash ROM,
external EEPROM, or external ROM without modification.
Since all file access to the ELF/CELF file is made through
the CFS, the dynamic linker does not need to concern itself
with low-level filesystem details such as wear-leveling
or fragmentation [4] as this is better handled by the CFS.
The dynamic linker performs four steps to link, relocate
and load an ELF/CELF file. The dynamic linker first parses
the ELF/CELF file and extracts relevant information about
where in the ELF/CELF file the code, data, symbol table,
and relocation entries are stored. Second, memory for the
code and data is allocated from flash ROM and RAM, respectively
. Third, the code and data segments are linked and
relocated to their respective memory locations, and fourth,
the code is written to flash ROM and the data to RAM.
Currently, memory allocation for the loaded program is
done using a simple block allocation scheme. More sophisticated
allocation schemes will be investigated in the future.
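As an illustration of a simple block allocation scheme of this kind, the sketch below hands out runs of fixed-size blocks from a statically reserved RAM area; the block size, bookkeeping, and names are assumptions made for the example, and the flash ROM area for code can be managed in the same way.

#include <stdint.h>
#include <stddef.h>

/* Illustrative block allocator: the area is divided into fixed-size
   blocks and a small array records which blocks are in use. */
#define BLOCK_SIZE   64
#define NUM_BLOCKS   32

static uint8_t ram_area[NUM_BLOCKS * BLOCK_SIZE];
static uint8_t block_used[NUM_BLOCKS];

void *block_alloc(size_t size)
{
  int blocks_needed = (size + BLOCK_SIZE - 1) / BLOCK_SIZE;
  int i, j;

  for(i = 0; i + blocks_needed <= NUM_BLOCKS; i++) {
    for(j = 0; j < blocks_needed && !block_used[i + j]; j++);
    if(j == blocks_needed) {            /* found a free run of blocks */
      for(j = 0; j < blocks_needed; j++) {
        block_used[i + j] = 1;
      }
      return &ram_area[i * BLOCK_SIZE];
    }
  }
  return NULL;                          /* not enough contiguous blocks */
}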
5.3.1
Linking and Relocating
The relocation information in an ELF/CELF file consists
of a list of relocation entries. Each relocation entry corresponds
to an instruction or address in the code or data in the
module that needs to be updated with a new address. A relocation
entry contains a pointer to a symbol, such as a variable
name or a function name, a pointer to a place in the code or
data contained in the ELF/CELF file that needs to be updated
with the address of the symbol, and a relocation type
which specifies how the data or code should be updated. The
relocation types are different depending on the CPU architecture
. For the MSP430 there is only one single relocation
type, whereas the AVR has 19 different relocation types.
The dynamic linker processes one relocation entry at a time.
For each relocation entry, its symbol is looked up in the symbol
table in the core. If the symbol is found in the core's symbol
table, the address of the symbol is used to patch the code
or data to which the relocation entry points. The code or data
is patched in different ways depending on the relocation type
and on the CPU architecture.
If the symbol in the relocation entry was not found in the
symbol table of the core, the symbol table of the ELF/CELF
file itself is searched. If the symbol is found, the address that
the symbol will have when the program has been loaded is
calculated, and the code or data is patched in the same way
as if the symbol was found in the core symbol table.
Relocation entries may also be relative to the data, BSS,
or code segment in the ELF/CELF file. In that case no symbol
is associated with the relocation entry. For such entries
the dynamic linker calculates the address that the segment
will have when the program has been loaded, and uses that
address to patch the code or data.
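The per-entry logic described above can be summarized as in the following sketch; the data structures and helper functions are assumptions for illustration and do not reproduce the exact Contiki implementation.

#include <stddef.h>

/* Illustrative relocation pass: resolve the symbol address (core symbol
   table first, then the module's own table, then segment-relative) and
   patch the code or data accordingly. */
struct relocation {
  unsigned int offset;      /* place in the module's code/data to patch */
  const char  *symbol;      /* symbol name, or NULL for segment-relative */
  int          type;        /* CPU-specific relocation type */
  int          segment;     /* segment the entry is relative to */
};

extern void *core_symbol_lookup(const char *name);
extern void *module_symbol_address(const char *name);
extern void *segment_load_address(int segment);
extern void  arch_patch(unsigned int offset, void *address, int type);

int apply_relocation(const struct relocation *rel)
{
  void *address = NULL;

  if(rel->symbol != NULL) {
    address = core_symbol_lookup(rel->symbol);      /* core first */
    if(address == NULL) {
      address = module_symbol_address(rel->symbol); /* then the module */
    }
    if(address == NULL) {
      return -1;                                    /* unresolved symbol */
    }
  } else {
    /* No symbol: the entry is relative to the data, BSS, or code segment. */
    address = segment_load_address(rel->segment);
  }
  /* The actual patching is CPU-specific: one relocation type on the
     MSP430, 19 on the AVR. */
  arch_patch(rel->offset, address, rel->type);
  return 0;
}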
5.3.2
Loading
When the linking and relocating is completed, the text and
data have been relocated to their final memory position. The
text segment is then written to flash ROM, at the location
that was previously allocated. The memory allocated for the
data and BSS segments is used as intermediate storage
for transferring text segment data from the ELF/CELF file
before it is written to flash ROM. Finally, the memory allocated
for the BSS segment is cleared, and the contents of the
data segment are copied from the ELF/CELF file.
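The loading step can be sketched as follows; the file access and flash driver functions, as well as the assumption that the data segment precedes the BSS segment in RAM, are illustrative and not the actual Contiki interfaces.

#include <string.h>
#include <stddef.h>

extern int  file_read(void *buf, size_t offset, size_t len); /* read from the ELF/CELF file */
extern void flash_write(void *rom_addr, const void *buf, size_t len);

/* Illustrative load step: the RAM reserved for data/BSS is reused as a
   staging buffer while the text segment is written to flash ROM. */
void load_segments(unsigned char *rom_addr, size_t text_offset, size_t text_size,
                   unsigned char *ram_addr, size_t data_offset,
                   size_t data_size, size_t bss_size)
{
  size_t buf_size = data_size + bss_size;   /* assumed to be non-zero */
  size_t pos, chunk;

  /* Copy the text segment to flash ROM, one RAM buffer at a time. */
  for(pos = 0; pos < text_size; pos += chunk) {
    chunk = text_size - pos;
    if(chunk > buf_size) {
      chunk = buf_size;
    }
    file_read(ram_addr, text_offset + pos, chunk);
    flash_write(rom_addr + pos, ram_addr, chunk);
  }

  /* Clear the BSS segment and copy the initialized data segment into RAM. */
  memset(ram_addr + data_size, 0, bss_size);
  file_read(ram_addr, data_offset, data_size);
}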
5.3.3
Executing the Loaded Program
When the dynamic linker has successfully loaded the code
and data segments, Contiki starts executing the program.
The loaded program may replace an already running Contiki
service. If the service that is to be replaced needs to pass
state to the newly loaded service, Contiki supports the allocation
of an external memory buffer for this purpose. However
, experience has shown that this mechanism has been
very scarcely used in practice and the mechanism is likely
to be removed in future versions of Contiki.
5.3.4
Portability
Since the ELF/CELF format is the same across different
platforms, we designed the Contiki dynamic linker to be easily
portable to new platforms. The loader is split into one
architecture specific part and one generic part. The generic
part parses the ELF/CELF file, finds the relevant sections of
the file, looks up symbols from the symbol table, and performs
the generic relocation logic. The architecture specific
part does only three things: allocates ROM and RAM, writes
the linked and relocated binary to flash ROM, and understands
the relocation types in order to modify machine code
instructions that need adjustment because of relocation.
5.3.5
Alternative Designs
The Contiki core symbol table contains all externally visible
symbols in the Contiki core. Many of the symbols may
never need to be accessed by loadable programs, thus causing
ROM overhead. An alternative design would be to let the
symbol table include only a handful of symbols, entry points,
that define the only ways for an application program to interact
with the core. This would lead to a smaller symbol table,
but would also require a detailed specification of which entry
points should be included in the symbol table. The main
reason why we did not choose this design, however, is that
we wish to be able to replace modules at any level of the system.
For this reason, we chose to provide the same set of
symbols to an application program as it would have had if
it had been compiled directly into the core. However, we
are continuing to investigate this alternative design for future
versions of the system.
5.4
The Java Virtual Machine
We ported the Java virtual machine (JVM) from lejOS [8],
a small operating system originally developed for the Lego
Mindstorms. The Lego Mindstorms are equipped with a
Hitachi H8 microcontroller with 32 kilobytes of RAM available
for user programs such as the JVM. The lejOS JVM
works within this constrained memory while featuring pre-emptive
threads, recursion, synchronization and exceptions.
The Contiki port required changes to the RAM-only model
of the lejOS JVM. To be able to run Java programs within the
2 kilobytes of RAM available on our hardware platform, Java
classes need to be stored in flash ROM rather than in RAM.
The Contiki port stores the class descriptions including bytecode
in flash ROM memory. Static class data and class flags
that denote whether classes have been initialized are stored in RAM,
as are object instances and execution stacks. The RAM
requirements for the Java part of typical sensor applications
are a few hundred bytes.
Java programs can call native code methods by declaring
native Java methods. The Java virtual machine dispatches
calls to native methods to native code. Any native function
in Contiki may be called, including services that are part of
a loaded Contiki program.
5.5
CVM - the Contiki Virtual Machine
We designed the Contiki Virtual Machine, CVM, to be a
compromise between an application-specific and a generic
virtual machine. CVM can be configured for the application
running on top of the machine by allowing functions to be
either implemented as native code or as CVM code. To be
able to run the same programs for the Java VM and for CVM,
we developed a compiler that compiles a subset of the Java
language to CVM bytecode.
The design of CVM is intentionally similar to other virtual
machines, including Mate [19], VM [18], and the Java
virtual machine. CVM is a stack-based machine with separated
code and data areas. The CVM instruction set contains
integer arithmetic, unconditional and conditional branches,
and method invocation instructions. Method invocation can
be done in two ways, either by invocation of CVM bytecode
functions, or by invocation of functions implemented in native
code. Invocation of native functions is done through a
special instruction for calling native code. This instruction
takes one parameter, which identifies the native function that
is to be called. The native function identifiers are defined at
compile time by the user, who compiles a list of native functions
that the CVM program should be able to call. With the
native function interface, it is possible for a CVM program to
call any native functions provided by the underlying system,
including services provided by loadable programs.
Native functions in a CVM program are invoked like any
other function. The CVM compiler uses the list of native
functions to translate calls to such functions into the special
instruction for calling native code. Parameters are passed to
native functions through the CVM stack.
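A minimal sketch of how the native-call instruction could be dispatched inside a stack-based interpreter loop is shown below; the instruction encoding, opcode values, and function-pointer table are assumptions made for the illustration, not the actual CVM implementation.

#include <stdint.h>

/* Illustrative bytecode interpreter fragment: OP_CALL_NATIVE is followed
   by a one-byte identifier that indexes a table of native functions.
   Arguments are passed on the CVM operand stack. */
#define OP_ADD          0x01
#define OP_CALL_NATIVE  0x20

typedef int16_t (*native_fn)(int16_t *stack, int *sp);

extern const native_fn native_functions[];  /* built from the user's list */

void cvm_run(const uint8_t *code, int len, int16_t *stack)
{
  int pc = 0, sp = 0;

  while(pc < len) {
    uint8_t op = code[pc++];
    switch(op) {
    case OP_ADD:
      sp--;
      stack[sp - 1] = stack[sp - 1] + stack[sp];
      break;
    case OP_CALL_NATIVE: {
      uint8_t id = code[pc++];                        /* native function id */
      int16_t result = native_functions[id](stack, &sp);
      stack[sp++] = result;                           /* push return value */
      break;
    }
    /* ... other CVM instructions ... */
    default:
      return;
    }
  }
}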
6
Evaluation
To evaluate dynamic linking of native code we compare
the energy costs of transferring, linking, relocating, loading,
and executing a native code module in ELF format using dynamic
linking with the energy costs of transferring, loading,
and executing the same program compiled for the CVM and
the Java virtual machine. We devise a simple model of the
energy consumption of the reprogramming process. Thereafter
we experimentally quantify the energy and memory
consumption as well as the execution overhead for the reprogramming
, the execution methods and the applications. We
use the results of the measurements as input into the model
which enables us to perform a quantitative comparison of the
energy-efficiency of the reprogramming methods.
We use the ESB board [33] and the Telos Sky board [29]
as our experimental platforms. The ESB is equipped with an
MSP430 microcontroller with 2 kilobytes of RAM and 60
kilobytes of flash ROM, an external 64 kilobyte EEPROM,
as well as a set of sensors and a TR1001 radio transceiver.
PROCESS_THREAD(test_blink, ev, data)
{
  static struct etimer t;

  PROCESS_BEGIN();

  etimer_set(&t, CLOCK_SECOND);
  while(1) {
    leds_on(LEDS_GREEN);
    PROCESS_WAIT_UNTIL(etimer_expired(&t));
    etimer_reset(&t);

    leds_off(LEDS_GREEN);
    PROCESS_WAIT_UNTIL(etimer_expired(&t));
    etimer_reset(&t);
  }

  PROCESS_END();
}
Figure 3.
Example Contiki program that toggles the
LEDs every second.
The Telos Sky is equipped with an MSP430 microcontroller
with 10 kilobytes of RAM and 48 kilobytes of flash ROM
together with a CC2420 radio transceiver. We use the ESB to
measure the energy of receiving, storing, linking, relocating,
loading and executing loadable modules and the Telos Sky
to measure the energy of receiving loadable modules.
We use three Contiki programs to measure the energy efficiency
and execution overhead of our different approaches.
Blinker, the first of the three programs, is shown in Figure 3.
It is a simple program that toggles the LEDs every second.
The second program, Object Tracker, is an object tracking
application based on abstract regions [35]. To allow running
the programs both as native code, as CVM code, and
as Java code we have implemented these programs both in C
and Java. A schematic illustration of the C implementation
is in Figure 4. To support the object tracker program, we
implemented a subset of the abstract regions mechanism in
Contiki. The Java and CVM versions of the program call native
code versions of the abstract regions functions. The third
program is a simple 8 by 8 vector convolution calculation.
6.1
Energy Consumption
We model the energy consumption E of the reprogramming
process with

  E = E_p + E_s + E_l + E_f

where E_p is the energy spent in transferring the object over
the network, E_s the energy cost of storing the object on the
device, E_l the energy consumed by linking and relocating the
object, and E_f the energy required for storing the linked
program in flash ROM. We use a simplified model of the
network propagation energy where we assume a propagation
protocol in which the energy consumption E_p is proportional to
the size of the object to be transferred. Formally,

  E_p = P_p * s_o

where s_o is the size of the object file to be transferred and P_p
is a constant scale factor that depends on the network protocol
used to transfer the object. We use similar equations for
E_s (energy for storing the binary) and E_l (energy for linking
and relocating).
PROCESS_THREAD(use_regions_process, ev, data)
{
  PROCESS_BEGIN();

  while(1) {
    value = pir_sensor.value();
    region_put(reading_key, value);
    region_put(reg_x_key, value * loc_x());
    region_put(reg_y_key, value * loc_y());

    if(value > threshold) {
      max = region_max(reading_key);

      if(max == value) {
        sum = region_sum(reading_key);
        sum_x = region_sum(reg_x_key);
        sum_y = region_sum(reg_y_key);
        centroid_x = sum_x / sum;
        centroid_y = sum_y / sum;
        send(centroid_x, centroid_y);
      }
    }

    etimer_set(&t, PERIODIC_DELAY);
    PROCESS_WAIT_UNTIL(etimer_expired(&t));
  }

  PROCESS_END();
}
Figure 4. Schematic implementation of an object tracker
based on abstract regions.
The equation for E_f (the energy for loading the binary into flash ROM)
contains the compiled code size of the program instead of the size of the object file.
This model is intentionally simple and we consider it good
enough for our purpose of comparing the energy-efficiency
of different reprogramming schemes.
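As a concrete illustration of the model, the sketch below instantiates it for an ELF object using per-byte constants derived from the measurements later in this section (Tables 2, 3, and 4); treating the storage, linking, and flash-writing costs as linear per-byte factors is a simplifying assumption of this example.

/* Back-of-the-envelope instantiation of E = E_p + E_s + E_l + E_f for an
   ELF object. The constants are derived from the CC2420 reception
   measurements, the Deluge overhead, and the Blinker load in Table 3;
   the per-byte linearization is an assumption of this sketch. */
#define P_P   (3.35 * 0.0048)   /* mJ per byte received (Deluge x CC2420) */
#define P_S   (1.1 / 1056.0)    /* mJ per byte written to EEPROM */
#define P_L   (1.2 / 1056.0)    /* mJ per byte linked and relocated */
#define P_F   (0.62 / 130.0)    /* mJ per byte written to flash ROM */

double reprogramming_energy(double object_size, double code_size)
{
  double e_p = P_P * object_size;  /* radio reception */
  double e_s = P_S * object_size;  /* storing the object file */
  double e_l = P_L * object_size;  /* linking and relocating */
  double e_f = P_F * code_size;    /* writing the linked code to flash */
  return e_p + e_s + e_l + e_f;
}

For example, the 1056-byte Blinker ELF file with 130 bytes of code gives roughly 17 + 1.1 + 1.2 + 0.62, or about 20 mJ, which is consistent with the total for dynamic linking shown later in Table 8.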
6.1.1
Lower Bounds on Radio Reception Energy
We measured the energy consumption of receiving data
over the radio for two different radio transceivers: the
TR1001 [32], that is used on the ESB board, and the
CC2420 [2], that conforms to the IEEE 802.15.4 standard
[11] and is used on the Telos Sky board. The TR1001
provides a very low-level interface to the radio medium. The
transceiver decodes data at the bit level and transmits the
bits in real-time to the CPU. Start bit detection, framing,
MAC layer, checksums, and all protocol processing must be
done in software running on the CPU. In contrast, the interface
provided by the CC2420 is at a higher level. Start bits,
framing, and parts of the MAC protocol are handled by the
transceiver. The software driver handles incoming and outgoing
data on the packet level.
Since the TR1001 operates at the bit-level, the communication
speed of the TR1001 is determined by the CPU. We
use a data rate of 9600 bits per second. The CC2420 has
a data rate of 250 kilobits per second, but also incurs some
protocol overhead as it provides a more high-level interface.
Figure 5 shows the current draw from receiving 1000
bytes of data with the TR1001 and CC2420 radio
transceivers. These measurements constitute a lower bound
on the energy consumption for receiving data over the radio,
as they do not include any control overhead caused by a code
propagation protocol. Nor do they include any packet headers
. An actual propagation protocol would incur overhead
because of both packet headers and control traffic. For example,
the Deluge protocol has a control packet overhead of
approximately 20% [14]. This overhead is derived from the
total number of control packets and the total number of data
packets in a sensor network. The average overhead in terms
of the number of excess data packets received is 3.35 [14]. In
addition to the actual code propagation protocol overhead,
there is also overhead from the MAC layer, both in terms of
packet headers and control traffic.

Transceiver   Time (s)   Energy (mJ)   Time per byte (s)   Energy per byte (mJ)
TR1001        0.83       21            0.0008              0.021
CC2420        0.060      4.8           0.00006             0.0048

Table 2. Lower bounds on the time and energy consumption
for receiving 1000 bytes with the TR1001 and
CC2420 transceivers. All values are rounded to two significant
digits.
The TR1001 provides a low-level interface to the CPU,
which enabled us to measure only the current draw of the
receiver. We first measured the time required for receiving
one byte of data from the radio. To produce the graph in the
figure, we measured the current draw of an ESB board which
we had programmed to turn on receive mode and busy-wait
for the time corresponding to the reception time of 1000
bytes.
When measuring the reception current draw of the
CC2420, we could not measure the time required for receiving
one byte because the CC2420 does not provide an
interface at the bit level. Instead, we used two Telos Sky
boards and programmed one to continuously send back-to-back
packets with 100 bytes of data. We programmed the
other board to turn on receive mode when the on-board button
was pressed. The receiver would receive 1000 bytes of
data, corresponding to 10 packets, before turning the receiver
off. We placed the two boards next to each other on a table
to avoid packet drops. We produced the graph in Figure 5 by
measuring the current draw of the receiver Telos Sky board.
To ensure that we did not get spurious packet drops, we repeated
the measurement five times without obtaining differing
results.
Table 2 shows the lower bounds on the time and energy
consumption for receiving data with the TR1001 and
CC2420 transceivers. The results show that while the current
draw of the CC2420 is higher than that of the TR1001, the
energy efficiency in terms of energy per byte of the CC2420
is better because of the shorter time required to receive the
data.
6.1.2
Energy Consumption of Dynamic Linking
To evaluate the energy consumption of dynamic linking,
we measure the energy required for the Contiki dynamic
linker to link and load two Contiki programs. Normally,
Contiki loads programs from the radio network but to avoid
measuring any unrelated radio or network effects, we stored
the loadable object files in flash ROM before running the
experiments. The loadable objects were stored as ELF files
from which all debugging information and symbols that were
not needed for run-time linking had been removed. At boot-up,
one ELF file was copied into an on-board EEPROM from
where the Contiki dynamic linker linked and relocated the
ELF file before it loaded the program into flash ROM.

Figure 5. Current draw for receiving 1000 bytes with the TR1001 and
CC2420, respectively.

Figure 6. Current draw for writing the Blinker ELF file
to EEPROM (0 - 0.166 s), linking and relocating the program
(0.166 - 0.418 s), writing the resulting code to flash
ROM (0.418 - 0.488 s), and executing the binary (0.488 s
and onward). The current spikes delimit the three steps
and are intentionally caused by blinking on-board LEDs.
The high energy consumption when executing the binary
is caused by the green LED.
Figure 6 shows the current draw when loading the Blinker
program, and Figure 7 shows the current draw when loading
the Object Tracker program. The current spikes seen
in both graphs are intentionally caused by blinking the on-board
LEDs. The spikes delimit the four different steps that
the loader is going through: copying the ELF object file
to EEPROM, linking and relocating the object code, copying
the linked code to flash ROM, and finally executing the
loaded program. The current draw of the green LED is
slightly above 8 mA, which causes the high current draw
when executing the blinker program (Figure 6). Similarly,
when the object tracking application starts, it turns on the
radio for neighbor discovery. This causes the current draw
to rise to around 6 mA in Figure 7, and matches the radio
current measurements in Figure 5.
Table 3 shows the energy consumption of loading and
linking the Blinker program. The energy was obtained from
integration of the curve from Figure 6 and multiplying it by
the voltage used in our experiments (4.5 V). We see that the
linking and relocation step is the most expensive in terms of
energy. It is also the longest step.

Figure 7. Current draw for writing the Object Tracker
ELF file to EEPROM (0 - 0.282 s), linking and relocating
the program (0.282 - 0.882 s), writing the resulting
code to flash ROM (0.882 - 0.988 s), and executing the
binary (0.988 s and onward). The current spikes delimit
the three steps and are intentionally caused by blinking
on-board LEDs. The high current draw when executing
the binary comes from the radio being turned on.
To evaluate the energy overhead of the ELF file format,
we compare the energy consumption for receiving four different
Contiki programs using the ELF and CELF formats.
In addition to the two programs from Figures 3 and 4 we include
the code for the Contiki code propagation mechanism
and a network publish/subscribe program that performs periodic
flooding and converging of information. The two latter
programs are significantly larger. We calculate an estimate of
the required energy for receiving the files by using the measured
energy consumption of the CC2420 radio transceiver
and multiply it by the average overhead of the Deluge code
propagation protocol, 3.35 [14]. The results are listed in Table
4 and show that radio reception is more energy consuming
than linking and loading a program, even for a small program
. Furthermore, the results show that the relative average
size and energy overhead for ELF files compared to the code
and data contained in the files is approximately 4 whereas
the relative CELF overhead is just under 2.
Program           Code   Data   ELF     ELF file   ELF radio      CELF    CELF file   CELF radio
                  size   size   file    size       reception      file    size        reception
                                size    overhead   energy (mJ)    size    overhead    energy (mJ)
Blinker           130    14     1056    7.3        17             361     2.5         5.9
Object tracker    344    22     1668    5.0        29             758     2.0         12
Code propagator   2184   10     5696    2.6        92             3686    1.7         59
Flood/converge    4298   42     8456    1.9        136            5399    1.2         87

Table 4. The overhead of the ELF and CELF file formats in terms of bytes and estimated reception energy for four Contiki
programs. The reception energy is the lower bound of the radio reception energy with the CC2420 chip, multiplied
by the average Deluge overhead (3.35).
Step             Blinker    Energy   Obj. Tr.   Energy
                 time (s)   (mJ)     time (s)   (mJ)
Wrt. EEPROM      0.164      1.1      0.282      1.9
Link & reloc     0.252      1.2      0.600      2.9
Wrt. flash ROM   0.070      0.62     0.106      0.76
Total            0.486      2.9      0.988      5.5

Table 3. Measured energy consumption of the storing,
linking and loading of the 1056 bytes large Blinker binary
and the 1824 bytes large Object Tracker binary. The size
of the Blinker code is 130 bytes and the size of the Object
Tracker code is 344 bytes.
Module                   ROM     RAM
Static loader            670     0
Dynamic linker, loader   5694    18
CVM                      1344    8
Java VM                  13284   59

Table 5. Memory requirements, in bytes. The ROM size
for the dynamic linker includes the symbol table. The
RAM figures do not include memory for programs running
on top of the virtual machines.
6.2
Memory Consumption
Memory consumption is an important metric for sensor
nodes since memory is a scarce resource on most sensor node
platforms. The ESB nodes feature only 2 KB RAM and 60
KB ROM while Mica2 motes provide 128 KB of program
memory and 4 KB of RAM. The less memory required for
reprogramming, the more is left for applications and support
for other important tasks such as security which may also
require a large part of the available memory [28].
Table 5 lists the memory requirements of the static linker,
the dynamic linker and loader, the CVM and the Java VM.
The dynamic linker needs to keep a table of all core symbols
in the system. For a complete Contiki system with process
management, networking, the dynamic loader, memory allocation
, Contiki libraries, and parts of the standard C library,
the symbol table requires about 4 kilobytes of ROM. This is
included in the ROM size for the dynamic linker.
6.3
Execution Overhead
To measure the execution overhead of the application
specific virtual machine and the Java virtual machine, we
implemented the object tracking program in Figure 4 in C
and Java. We compiled the Java code to CVM code and
Java bytecode. We ran the compiled code on the MSP430-equipped
ESB board.
Execution type   Execution time (ms)   Energy (mJ)
Native           0.479                 0.00054
CVM              0.845                 0.00095
Java VM          1.79                  0.0020

Table 6. Execution times and energy consumption of one
iteration of the tracking program.

Execution type   Execution time (ms)   Energy (mJ)
Native           0.67                  0.00075
CVM              58.52                 0.065
Java VM          65.6                  0.073

Table 7. Execution times and energy consumption of the
8 by 8 vector convolution.

The native C code was compiled
with the MSP430 port of GCC version 3.2.3. The MSP430
digitally-controlled oscillator was set to clock the CPU at a
speed of 2.4576 MHz. We measured the execution time of
the three implementations using the on-chip timer A1 that
was set to generate a timer interrupt 1000 times per second.
The execution times are averaged over 5000 iterations of the
object tracking program.
The results in Table 6 show the execution time of one run
of the object tracking application from Figure 4. The execution
time measurements are averaged over 5000 runs of
the object tracking program. The energy consumption is calculated
by multiplying the execution time with the average
energy consumption when a program is running with the radio
turned off. The table shows that the overhead of the Java
virtual machine is higher than that of the CVM, which in turn
is higher than the execution overhead of the native C code.
All three implementations of the tracker program use the
same abstract regions library which is compiled as native
code. Thus much of the execution time in the Java VM
and CVM implementations of the object tracking program is
spent executing the native code in the abstract regions library.
Essentially, the virtual machine simply acts as a dispatcher of
calls to various native functions. For programs that spend a
significant part of their time executing virtual machine code
the relative execution times are significantly higher for the
virtual machine programs. To illustrate this, Table 7 lists the
execution times of a convolution operation of two vectors
of length 8. Convolution is a common operation in digital
signal processing where it is used for algorithms such as filtering
or edge detection. We see that the execution time of
the program running on the virtual machines is close to ten
times that of the native program.
Step             Dynamic        Full image
                 linking (mJ)   replacement (mJ)
Receiving        17             330
Wrt. EEPROM      1.1            22
Link & reloc     1.4            -
Wrt. flash ROM   0.45           72
Total            20             424

Table 8. Comparison of energy-consumption of reprogramming
the blinker application using dynamic linking
with an ELF file and full image replacement methods.
Step             ELF    CELF   CVM   Java
Size (bytes)     1824   968    123   1356
Receiving        29     12     2.0   22
Wrt. EEPROM      1.9    0.80   -     -
Link & reloc     2.5    2.5    -     -
Wrt. flash ROM   1.2    1.2    -     4.7
Total            35     16.5   2.0   26.7

Table 9. Comparison of energy-consumption in mJ of reprogramming
for the object tracking application using
the four different methods.
6.4
Quantitative Comparison
Using our model from Section 6.1 and the results from
the above measurements, we can calculate approximations
of the energy consumption for distribution, reprogramming,
and execution of native and virtual machine programs in order
to compare the methods with each other. We set P_p, the
scale factor of the energy consumption for receiving an object
file, to the average Deluge overhead of 3.35.
6.4.1
Dynamic Linking vs Full Image Replacement
We first compare the energy costs for the two native code
reprogramming models: dynamic linking and full image replacement
. Table 8 shows the results for the energy consumption
of reprogramming the blinker application. The size
of the blinker application including the operating system is 20
KB, which is about 20 times the size of the blinker application
itself. Even though no linking needs to be performed
during the full image replacement, a whole image replacement
is about 20 times more expensive than a modular update
using the dynamic linker.
6.4.2
Dynamic Linking vs Virtual Machines
We use the tracking application to compare reprogramming
using the Contiki dynamic linker with code updates for
the CVM and the Java virtual machine. CVM programs are
typically very small and are not stored in EEPROM, nor are
they linked or written to flash. Uncompressed Java class files
are loaded into flash ROM before they are executed. Table 9
shows the sizes of the corresponding binaries and the energy
consumption of each reprogramming step.
As expected, the process of updating sensor nodes with
native code is less energy-efficient than updating with a virtual
machine. Also, as shown in Table 6, executing native
code is more energy-efficient than executing code for the virtual
machines.
By combining the results in Table 6 and Table 9, we can
compute break-even points for how often we can execute native
code as opposed to virtual machine code for the same
energy consumption. That is, after how many program iterations
do the cheaper execution costs outweigh the more
expensive code updates.

Figure 8. Break-even points for the object tracking program
implemented with four different linking and execution
methods.

Figure 9. Break-even points for the vector convolution
implemented with four different linking and execution
methods.
Figure 8 shows the modeled energy consumption for executing
the Object Tracking program using native code loaded
with an ELF object file, native code loaded with an CELF
object file, CVM code, and Java code. We see that the Java
virtual machine is expensive in terms of energy and will always
require more energy than native code loaded with a
CELF file. For native code loaded with an ELF file the energy
overhead due to receiving the file makes the Java virtual
machine more energy efficient until the program is repeated
a few thousand times. Due to the small size of the CVM code
it is very energy efficient for small numbers of program iterations
. It takes about 40000 iterations of the program before
the interpretation overhead outweigh the linking and loading
overhead of same program running as native code and
loaded as a CELF file. If the native program was loaded with
an ELF file, however, the CVM program needs to be run approximately
80000 iterations before the energy costs are the
same. At the break-even point, the energy consumption is
only about one fifth of the energy consumption for loading
the blinker program using full image replacement as shown in
Table 8.
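A rough calculation from Tables 6 and 9 illustrates these break-even points: two methods consume the same total energy after

  N = (E_update,A - E_update,B) / (E_exec,B - E_exec,A)

iterations, where method A has the more expensive update and method B the more expensive execution. For the object tracker this gives N = (16.5 - 2.0) / (0.00095 - 0.00054), roughly 35000 iterations, for CELF-loaded native code versus CVM, and N = (35 - 2.0) / (0.00095 - 0.00054), roughly 80000 iterations, for ELF-loaded native code versus CVM, roughly matching the figures above; the small differences stem from rounding in the tables.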
In contrast with Figure 8, Figure 9 contains the break-even
points from the vector convolution in Table 7. We assume
that the convolution algorithm is part of a program with
the same size as in Figure 8 so that the energy consumption
for reprogramming is the same. In this case the break-even
points are drastically lower than in Figure 8. Here the native
code loaded with an ELF file outperforms the Java implementation
already at 100 iterations. The CVM implementation
has spent as much energy as the native ELF implementation
after 500 iterations.
6.5
Scenario Suitability
We can now apply our results to the software update scenarios
discussed in Section 2. In a scenario with frequent
code updates, such as the dynamic application scenario or
during software development, a low loading overhead is preferable.
From Figure 8 we see that both an application-specific
virtual machine and a Java machine may be good
choices. Depending on the type of application it may be beneficial
to decide to run the program on top of a more flexible
virtual machine such as the Java machine. The price for such
a decision is higher energy overhead.
In scenarios where the update frequency is low, e.g. when
fixing bugs in installed software or when reconfiguring an
installed application, the higher price for dynamic linking
may be worth paying. If the program is continuously run for
a long time, the energy savings of being able to use native
code outweigh the energy cost of the linking process. Furthermore
, with a virtual machine it may not be possible to
make changes to all levels of the system. For example, a bug
in a low-level driver can usually only be fixed by installing
new native code. Moreover, programs that are computationally
heavy benefit from being implemented as native code, since
native code has lower energy consumption than virtual machine
code.
The results from Figures 8 and 9 suggest that a combination
of virtual machine code and native code can be energy
efficient. For many situations this may be a viable alternative
to running only native code or only virtual machine code.
6.6
Portability
Because of the diversity of sensor network platforms, the
Contiki dynamic linker is designed to be portable between
different microcontrollers. The dynamic linker is divided
into two modules: a generic part that parses and analyzes
the ELF/CELF that is to be loaded, and a microcontroller-specific
part that allocates memory for the program to be
loaded, performs code and data relocation, and writes the
linked program into memory.
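The split described above can be captured in a small interface between the generic linker and a CPU port; the function names and signatures below are illustrative assumptions, not the actual Contiki port API.

#include <stddef.h>

/* Illustrative interface between the generic linker and a CPU port.
   A new port only has to provide these three responsibilities:
   memory allocation, flash writing, and relocation patching. */

/* Allocate ROM for the text segment and RAM for data/BSS. */
void *port_alloc_rom(size_t size);
void *port_alloc_ram(size_t size);

/* Write the linked and relocated text segment into flash ROM. */
int port_write_rom(void *rom_address, const unsigned char *text, size_t size);

/* Apply one CPU-specific relocation type to the code or data buffer. */
int port_apply_relocation(unsigned char *buf, unsigned int offset,
                          void *symbol_address, int relocation_type);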
To evaluate the portability of our design we have ported
the dynamic linker to two different microcontrollers: the TI
MSP430 and the Atmel AVR. The TI MSP430 is used in
several sensor network platforms, including the Telos Sky
and the ESB. The Atmel AVR is used in the Mica2 motes.
Table 10 shows the number of lines of code needed to
implement each module. The dramatic difference between
the MSP430-specific module and the AVR-specific module
is due to the different addressing modes used by the machine
code of the two microcontrollers. While the MSP430
has only one addressing mode, the AVR has 19 different addressing
modes. Each addressing mode must be handled differently
by the relocation function, which leads to a larger
amount of code for the AVR-specific module.

Module            Lines of code,   Lines of code,
                  total            relocation function
Generic linker    292              -
MSP430-specific   45               8
AVR-specific      143              104

Table 10. Number of lines of code for the dynamic linker
and the microcontroller-specific parts.

7
Discussion
Standard file formats. Our main motivation behind
choosing the ELF format for dynamic linking in Contiki was
that the ELF format is a standard file format. Many compilers
and utilities, including all GCC utilities, are able to
produce and handle ELF files. Hence no special software is
needed to compile and upload new programs into a network
of Contiki nodes. In contrast, FlexCup [27] or diff-based
approaches require the usage of specially crafted utilities to
produce meta data or diff scripts required for uploading software
. These special utilities also need to be maintained and
ported to the full range of development platforms used for
software development for the system.
Operating system support. Dynamic linking of ELF
files requires support from the underlying operating system
and cannot be done on monolithic operating systems such as
TinyOS. This is a disadvantage of our approach. For monolithic
operating systems, an approach such as FlexCup is better
suited.
Heterogeneity. With diff-based approaches a binary diff
is created either at a base station or by an outside server. The
server must have knowledge of the exact software configuration
of the sensor nodes on which the diff script is to be
run. If sensor nodes are running different versions of their
software, diff-based approaches do not scale.
Specifically, in many of our development networks we
have witnessed a form of micro heterogeneity in the software
configuration. Many sensor nodes, which have been
running the exact same version of the Contiki operating system
, have had small differences in the address of functions
and variables in the core. This micro heterogeneity comes
from the different core images being compiled by different
developers, each having slightly different versions of the C
compiler, the C library and the linker utilities. This results
in small variations of the operating system image depending
on which developer compiled the operating system image.
With diff-based approaches micro heterogeneity poses a big
problem, as the base station would have to be aware of all
the small differences between each node.
Combination of native and virtual machine code. Our
results suggest that a combination of native and virtual machine
code is an energy efficient alternative to pure native
code or pure virtual machine code approaches. The dynamic
linking mechanism can be used to load the native code that
the virtual machine code accesses through the native code interfaces
of the virtual machines.
8
Related Work
Because of the importance of dynamic reprogramming of
wireless sensor networks there has been a lot of effort in the
area of software updates for sensor nodes both in the form
of system support for software updates and execution environments
that directly impact the type and size of updates as
well as distribution protocols for software updates.
Mainwaring et al. [26] also identified the trade-off between
using virtual machine code that is more expensive to
run but enables more energy-efficient updates and running
native code that executes more efficiently but requires more
costly updates. This trade-off has been further discussed by
Levis and Culler [19] who implemented the Mate virtual machine
designed to both simplify programming and to leverage
energy-efficient large-scale software updates in sensor
networks. Mate is implemented on top of TinyOS.
Levis and Culler later enhanced Mate by application specific
virtual machines (ASVMs) [20]. They address the main
limitations of Mate: flexibility, concurrency and propagation
. Whereas Mate was designed for a single application
domain only, ASVM supports a wide range of application
domains. Further, instead of relying on broadcasts for code
propagation as Mate, ASVM uses the trickle algorithm [21].
The MagnetOS [23] system uses the Java virtual machine
to distribute applications across an ad hoc network
of laptops. In MagnetOS, Java applications are partitioned
into distributed components. The components transparently
communicate by raising events. Unlike Mate and Contiki,
MagnetOS targets larger platforms than sensor nodes such
as PocketPC devices.
SensorWare [1] is another script-based
proposal for programming nodes that targets larger
platforms. VM* is a framework for runtime environments
for sensor networks [18]. Using this framework Koshy and
Pandey have implemented a subset of the Java Virtual Machine
that enables programmers to write applications in Java,
and access sensing devices and I/O through native interfaces.
Mobile agent-based approaches extend the notion of injected
scripts by deploying dynamic, localized and intelligent
mobile agents. Using mobile agents, Fok et al. have
built the Agilla platform that enables continuous reprogramming
by injecting new agents into the network [9].
TinyOS uses a special description language for composing
a system of smaller components [10] which are statically
linked with the kernel to a complete image of the system.
After linking, modifying the system is not possible [19] and
hence TinyOS requires the whole image to be updated even
for small code changes.
Systems that offer loadable modules besides Contiki include
SOS [12] and Impala [24]. Impala features an application
updater that enables software updates to be performed
by linking in updated modules. Updates in Impala
are coarse-grained since cross-references between different
modules are not possible. Also, the software updater in
Impala was only implemented for much more resource-rich
hardware than our target devices. The design of SOS [12]
is very similar to the Contiki system: SOS consists of a
small kernel and dynamically-loaded modules. However,
SOS uses position independent code to achieve relocation
and jump tables for application programs to access the operating
system kernel. Application programs can register
function pointers with the operating system for performing
inter-process communication. Position independent code is
not available for all platforms, however, which limits the applicability
of this approach.
FlexCup [27] enables run-time installation of software
components in TinyOS and thus solves the problem that
a full image replacement is required for reprogramming
TinyOS applications. In contrast to our ELF-based solution,
FlexCup uses a non-standard format and is less portable.
Further, FlexCup requires a reboot after a program has been
installed, requiring an external mechanism to save and restore
the state of all other applications as well as the state of
running network protocols across the reboot. Contiki does
not need to be rebooted after a program has been installed.
FlexCup also requires a complete duplicate image of the
binary image of the system to be stored in external flash
ROM. The copy of the system image is used for constructing
a new system image when a new program has been loaded.
In contrast, the Contiki dynamic linker does not alter the core
image when programs are loaded and therefore no external
copy of the core image is needed.
Since the energy consumption of distributing code in sensor
networks increases with the size of the code to be distributed
several attempts have been made to reduce the size
of the code to be distributed. Reijers and Langendoen [31]
produce an edit script based on the difference between the
modified and original executable. After various optimizations,
including architecture-dependent ones, the script is distributed.
A similar approach has been developed by Jeong
and Culler [15] who use the rsync algorithm to generate the
difference between modified and original executable. Koshy
and Pandey's diff-based approach [17] reduces the amount
of flash rewriting by modifying the linking procedure so that
functions that are not changed are not shifted.
XNP [16] was the previous default reprogramming mechanism
in TinyOS which is used by the multi-hop reprogramming
scheme MOAP (Multihop Over-the-Air Programming)
developed to distribute node images in the sensor network.
MOAP distributes data to a selective number of nodes on
a neighbourhood-by-neighbourhood basis that avoids flooding
[34]. In Trickle [21] virtual machine code is distributed
to a network of nodes. While Trickle is restricted to single
packet dissemination, Deluge adds support for the dissemination
of large data objects [14].
Conclusions
We have presented a highly portable dynamic linker and
loader that uses the standard ELF file format and compared
the energy-efficiency of run-time dynamic linking with an
application specific virtual machine and a Java virtual machine
. We show that dynamic linking is feasible even for
constrained sensor nodes.
Our results also suggest that a combination of native and
virtual machine code provides an energy efficient alternative
to pure native code or pure virtual machine approaches. The
native code that is called from the virtual machine code can
be updated using the dynamic linker, even in heterogeneous
systems.
Acknowledgments
This work was partly financed by VINNOVA, the
Swedish Agency for Innovation Systems, and the European
Commission under contract IST-004536-RUNES. Thanks to
our paper shepherd Feng Zhao for reading and commenting
on the paper.
References
[1] A. Boulis, C. Han, and M. B. Srivastava. Design and implementation
of a framework for efficient and programmable sensor networks. In
Proceedings of The First International Conference on Mobile Systems,
Applications, and Services (MOBISYS `03), May 2003.
[2] Chipcon AS. CC2420 Datasheet (rev. 1.3), 2005. http://www.chipcon.com/
[3] TIS Committee. Tool Interface Standard (TIS) Executable and Linking
Format (ELF) Specification Version 1.2, May 1995.
[4] H. Dai, M. Neufeld, and R. Han. Elf: an efficient log-structured flash
file system for micro sensor nodes. In SenSys, pages 176-187, 2004.
[5] A. Dunkels, B. Gronvall, and T. Voigt. Contiki - a lightweight and
flexible operating system for tiny networked sensors. In Proceedings
of the First IEEE Workshop on Embedded Networked Sensors, Tampa,
Florida, USA, November 2004.
[6] A. Dunkels, O. Schmidt, T. Voigt, and M. Ali. Protothreads: Simplifying
event-driven programming of memory-constrained embedded
systems. In Proceedings of the 4th International Conference on Embedded
Networked Sensor Systems, SenSys 2006, Boulder, Colorado,
USA, 2006.
[7] D. Estrin (editor). Embedded everywhere: A research agenda for networked
systems of embedded computers. National Academy Press, 1st
edition, October 2001. ISBN: 0309075688
[8] G. Ferrari, J. Stuber, A. Gombos, and D. Laverde, editors. Programming
Lego Mindstorms with Java with CD-ROM. Syngress Publishing,
2002. ISBN: 1928994555
[9] C. Fok, G. Roman, and C. Lu. Rapid development and flexible deploy-ment
of adaptive wireless sensor network applications. In Proceedings
of the 24th International Conference on Distributed Computing Systems
, June 2005.
[10] D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer, and D. Culler.
The nesC language: A holistic approach to networked embedded systems
. In Proceedings of the ACM SIGPLAN 2003 conference on Programming
language design and implementation, pages 1-11, 2003.
[11] J. A. Gutierrez, M. Naeve, E. Callaway, M. Bourgeois, V. Mitter, and
B. Heile. IEEE 802.15.4: A developing standard for low-power low-cost
wireless personal area networks. IEEE Network, 15(5):12-19,
September/October 2001.
[12] C. Han, R. K. Rengaswamy, R. Shea, E. Kohler, and M. Srivastava.
Sos: A dynamic operating system for sensor networks. In MobiSYS
'05: Proceedings of the 3rd international conference on Mobile systems
, applications, and services. ACM Press, 2005.
[13] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister. System
architecture directions for networked sensors. In Proceedings of
the 9th International Conference on Architectural Support for Programming
Languages and Operating Systems, November 2000.
[14] J. W. Hui and D. Culler. The dynamic behavior of a data dissemination
protocol for network programming at scale. In Proc. SenSys'04,
Baltimore, Maryland, USA, November 2004.
[15] J. Jeong and D. Culler. Incremental network programming for wireless
sensors. In Proceedings of the First IEEE Communications Society
Conference on Sensor and Ad Hoc Communications and Networks
IEEE SECON (2004), October 2004.
[16] J. Jeong, S. Kim, and A. Broad. Network reprogramming. TinyOS documentation, 2003. Visited 2006-04-06.
http://www.tinyos.net/tinyos-1.x/doc/NetworkReprogramming.pdf
[17] J. Koshy and R. Pandey.
Remote incremental linking for energy-efficient
reprogramming of sensor networks. In Proceedings of the
second European Workshop on Wireless Sensor Networks, 2005.
[18] J. Koshy and R. Pandey. Vm*: Synthesizing scalable runtime environments
for sensor networks. In Proc. SenSys'05, San Diego, CA,
USA, November 2005.
[19] P. Levis and D. Culler. Mate: A tiny virtual machine for sensor networks
. In Proceedings of ASPLOS-X, San Jose, CA, USA, October
2002.
[20] P. Levis, D. Gay, and D. Culler. Active sensor networks. In Proc.
USENIX/ACM NSDI'05, Boston, MA, USA, May 2005.
[21] P. Levis, N. Patel, D. Culler, and S. Shenker. Trickle: A self-regulating
algorithm for code propagation and maintenance in wireless sensor
networks. In Proc. NSDI'04, March 2004.
[22] J. Lilius and I. Paltor. Deeply embedded python, a virtual machine for embedded systems. Web page. Visited 2006-04-06.
http://www.tucs.fi/magazin/output.php?ID=2000.N2.LilDeEmPy
[23] H. Liu, T. Roeder, K. Walsh, R. Barr, and E. Gun Sirer. Design and
implementation of a single system image operating system for ad hoc
networks. In MobiSys, pages 149-162, 2005.
[24] T. Liu, C. Sadler, P. Zhang, and M. Martonosi. Implementing software
on resource-constrained mobile sensors: Experiences with Impala and
ZebraNet. In Proc. Second Intl. Conference on Mobile Systems, Applications
and Services (MOBISYS 2004), June 2004.
[25] G. Mainland, L. Kang, S. Lahaie, D. C. Parkes, and M. Welsh. Using
virtual markets to program global behavior in sensor networks. In Proceedings
of the 2004 SIGOPS European Workshop, Leuven, Belgium,
September 2004.
[26] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson.
Wireless sensor networks for habitat monitoring. In First ACM Workshop
on Wireless Sensor Networks and Applications (WSNA 2002),
Atlanta, GA, USA, September 2002.
[27] P. Jose Marron, M. Gauger, A. Lachenmann, D. Minder, O. Saukh, and
K. Rothermel. Flexcup: A flexible and efficient code update mechanism
for sensor networks. In European Workshop on Wireless Sensor
Networks, 2006.
[28] A. Perrig, R. Szewczyk, V. Wen, D. E. Culler, and J. D. Tygar. SPINS:
security protocols for sensor networks. In Mobile Computing and Networking,
pages 189-199, 2001.
[29] J. Polastre, R. Szewczyk, and D. Culler. Telos: Enabling ultra-low
power wireless research. In Proc. IPSN/SPOTS'05, Los Angeles, CA,
USA, April 2005.
[30] N. Ramanathan, E. Kohler, and D. Estrin. Towards a debugging system
for sensor networks. International Journal for Network Management,
3(5), 2005.
[31] N. Reijers and K. Langendoen. Efficient code distribution in wireless
sensor networks. In Proceedings of the 2nd ACM international conference
on Wireless sensor networks and applications, pages 60-67,
2003.
[32] RF Monolithics.
868.35 MHz Hybrid Transceiver TR1001, 1999.
http://www.rfm.com
[33] J. Schiller, H. Ritter, A. Liers, and T. Voigt. Scatterweb - low power
nodes and energy aware routing. In Proceedings of Hawaii International
Conference on System Sciences, Hawaii, USA, 2005.
[34] T. Stathopoulos, J. Heidemann, and D. Estrin. A remote code update
mechanism for wireless sensor networks. Technical Report CENS-TR
-30, University of California, Los Angeles, Center for Embedded
Networked Computing, November 2003.
[35] M. Welsh and G. Mainland. Programming sensor networks using abstract
regions. In Proc. USENIX/ACM NSDI'04, San Francisco, CA,
March 2004.
[36] G. Werner-Allen, P. Swieskowski, and M. Welsh. Motelab: A wireless
sensor network testbed. In Proc. IPSN/SPOTS'05, Los Angeles, CA,
USA, April 2005.
Keywords: Wireless sensor networks; Embedded systems; Dynamic linking; Operating systems; Virtual machines
S2DB: A Novel Simulation-Based Debugger for Sensor Network Applications
Abstract: Sensor network computing can be characterized as resource-constrained distributed computing using unreliable, low bandwidth communication. This combination of characteristics poses significant software development and maintenance challenges. Effective and efficient debugging tools for sensor networks are thus critical. Existing development tools, such as TOSSIM, EmStar, ATEMU and Avrora, provide useful debugging support, but not with the fidelity, scale and functionality that we believe are sufficient to meet the needs of the next generation of applications. In this paper, we propose a debugger, called S2DB, based on a distributed full system sensor network simulator with high fidelity and scalable performance, DiSenS. By exploiting the potential of DiSenS as a scalable full system simulator, S2DB extends conventional debugging methods by adding novel device level, program source level, group level, and network level debugging abstractions. The performance evaluation shows that all these debugging features introduce overhead that is generally less than 10% into the simulator, thus making S2DB an efficient and effective debugging tool for sensor networks.
INTRODUCTION
Sensor networks, comprised of tiny resource-constrained devices
connected by short range radios and powered by batteries, provide
an innovative way to implement pervasive and non-intrusive envi-ronmental
instrumentation and (potentially) actuation. The resource-constrained
nature of sensor network devices poses significant software
development and maintenance challenges. To prolong battery
life and promote miniaturization, most devices have little memory,
use low-power and unreliable radios, and run long duty cycles. In
addition to these per-device constraints, by definition sensor networks
are also distributed systems, with all of the concomitant synchronization
and consistency concerns that distributed coordination
implies.
For these reasons, effective debugging support is critical. A number
of sensor network development systems [2, 18, 3, 17, 13, 6]
provide debugging support for individual devices and/or the complete
network. However, they all have their limitations. Some rely
on hardware support and are subject to the same resource constraints
as the programs on which they operate. Some only monitor the network
radio traffic. And most importantly, as networks scale, these
tools become difficult to apply to the details of collections of interacting
sensor nodes.
In this paper, we present a new approach that is based on scalable
full system sensor network simulation with enhanced debugging
features. Our debugging tool is called S2DB (where S2 stands for
Simulation and Sensor network). The goal of S2DB is to adapt
conventional debugging methods to sensor network applications so
that we can have better control of hardware details and debug the
complete sensor network in a coordinated way. Our approach relies
upon four principal innovations in the area of debugging resource-constrained
devices.
- At the single device level, we introduce the concept of a debugging
point, a generalized notion of break point, watch point,
and state interrogation that permits state display from all
sensor device subsystems (flash pages, buffers, etc.);
- Also at the device level, we introduce virtual registers within
the simulator to support source level instrumentation and tracing.
The access to these registers does not affect the correct
functioning of other components;
- At the multi-device level, we introduce a coordinated break
condition, which enables the coordinated execution control
of multiple devices;
- Finally, at the network level, we provide a "time traveling"
facility to use with network level trace analysis, so that the error
site can be rapidly restored for detailed inspection.
S2DB is built upon DiSenS [25], a scalable distributed full system
sensor network simulator. DiSenS has a distributed simulation
framework. Individual sensor devices are emulated in separate operating
system threads. DiSenS then partitions and schedules these
device emulations to the computer nodes of a cluster, and simulates
inter-device communication at the radio level (i.e. below the communication
protocol stack and radio hardware device interfaces).
Sensor device emulations in DiSenS are cycle-accurate. Moreover,
a plugin mechanism allows the insertion of power models and radio
models with different fidelity levels. Thus DiSenS is capable of accurate,
large-scale sensor network simulation where the application
and operating system code can be executed, unmodified, on native
hardware.
DiSenS benefits our design and implementation in many aspects.
Its simulator infrastructure gives us full control over device states,
which enables the design of debugging points. Its high performance
makes our debugger execute efficiently. Its scalability enables us
to debug large-scale sensor networks. While the availability of a
high-fidelity radio model for sensor network radio remains elusive
(making many sensor network implementors reluctant to embrace
simulation and/or emulation), we believe the ability to debug sensor
network programs at scale as a precursor to actual deployment will
cut development time and reduce the amount of in situ debugging
that will be required in an actual deployment.
We also wish to emphasize that in this paper we do not claim
S2DB adequately addresses many of the thorny difficulties associated
with all debugging tools (e.g. the ability to debug optimized
code). Rather, our focus is on innovations that we believe are important
to the development of large-scale sensor network deployments
and that also improve the current state-of-the-practice in sensor
network debugging. In Section 2, we first give the background
of sensor network debugging. In Section 3, we briefly introduce
the features and details of DiSenS that are relevant to our debugging
purpose. In Section 4, we introduce the debugging point and
its use with break conditions. We also present the design of virtual
hardware based source level instrumentation. In Section 5, we
discuss how to control the execution of multiple devices in a coordinated
way. We focus on the implementation detail in DiSenS
infrastructure. In Section 6, we talk about the checkpoint implementations
for fast time traveling. We evaluate the performance of
our enhancing techniques in Section 7. And we conclude our work
in Section 8.
RELATED WORK
Like most embedded devices, sensor network devices can be debugged
with special hardware support. For motes (e.g. Mica2 and
MicaZ), Atmel's AVR JTAG ICE (In-Circuit Emulator) [2] is one
of the popular hardware-based debuggers. Atmel's AVR family
of microcontrollers (that are currently used as the processing elements
in many mote implementations) has built-in debugging support,
called On-Chip Debugging (OCD). Developers can access the
OCD functions via JTAG [10] hardware interface. With JTAG ICE,
developers can set break points, step-execute program and query
hardware resources. JTAG ICE can also be used with GUI interfaces
or a GDB debugging console. Hardware-based approaches
such as JTAG ICE typically have their limitations. For example, it
is not possible to synchronize the states of program execution with
I/O systems in debugging. This is because when the program execution
is stopped in JTAG ICE, the I/O system continues to run at
full speed [1]. Also since the debugging support is only provided
with the processing unit (i.e. the microcontroller), it is not easy to
interrogate the state of other on-board systems, like flash memory.
In contrast, by working with the full system DiSenS simulations,
S2DB does not suffer from these limitations.
At network level, many monitoring and visualization tools like
Sympathy [18, 19], SpyGlass [3], Surge Network Viewer [22] and
Mote-VIEW [16] provide a way to trace, display and analyze network
activities for a physical sensor network. These tools usually
use a software data collecting module running on sensor nodes in
the network. The collected data is transferred using flooding or
multihop routing to the gateway node. The gateway node then forwards
the data to a PC class machine for analysis or visualization.
These tools are useful for displaying the network topology and
analyzing the dynamics of data flow, particularly with respect to
specific inter-node communication events. Tools like Sympathy
even specialize in detecting and localizing sensor network failures
in data collection applications. However, these monitoring tools may be
intrusive in that they share many of the scarce device resources
with the applications they are intended to instrument. These
tools may complement what we have with S2DB. When a communication
anomaly is detected, for example, a program-level
debugger may often still be necessary to pinpoint the exact location of
the error in code.
More generally, while debugging on real hardware is the ultimate
way to verify the correctness of sensor network applications,
simulation based debuggers provide complementary advantages
that have been successfully demonstrated by other projects.
Many sensor network simulators, like TOSSIM [13], ATEMU [17],
Avrora [23] and EmStar [6], provide significant debugging capabilities.
TOSSIM is a discrete event simulator for TinyOS applications.
It translates the TinyOS code into emulation code and links
with the emulator itself. So debugging with TOSSIM is actually
debugging the emulator. Developers have to keep in their mind
the internal representation of device states. While discrete event
simulators are useful for verifying functional correctness, they typically
do not capture the precise timing characteristics of device
hardware, and thus have limited capability in exposing errors in
program logic. In contrast, full system simulators, such as ATEMU
and Avrora, have much higher fidelity. ATEMU features a source
level debugger XATDB, which has a graphic frontend for easy use.
XATDB can debug multiple sensor devices, but can only focus on
one at a time. Avrora provides rich built-in support for profiling
and instrumentation. User code can be inserted at any program address
, watches can be attached to memory locations, and specific
events can be monitored. These facilities can be quite useful for
debugging purposes. Indeed, we extend Avrora's probe and watch
concepts in the development of S2DB's debugging points (cf. Section
4). In addition to this support for simulator instrumentation,
S2DB also provides a source code level instrumentation facility,
via virtual debugging registers, since it is easier to use for some
debugging problems.
Time traveling for debugging is currently the subject of much
research [11, 20] in the field of software system development and
virtualization. Flashback [20] is a lightweight extension for rollback
and replay for software debugging. Flashback uses shadow
processes to take snapshots of the in-memory states of a running
process and logs the process' I/O events with the underlying system
to support deterministic rollback or replay. VMM (virtual machine
monitor) level logging is used in [11] for replaying the system executing
in a virtual machine. Checkpointing the state of a full system
simulator is easier than that in a real OS or virtual machine monitor
since all the hardware is simulated in software. Our results show
that time traveling support in DiSenS has very low overhead due to
the simplicity of the sensor hardware it emulates.
THE DiSenS SIMULATOR
S2DB is built upon DiSenS [25], a distributed sensor network
simulator designed for high fidelity and scalable performance. DiSenS
provides sensor network applications an execution environment
as "close" to real deployment as possible. DiSenS is also able
to simulate a sensor network with hundreds of nodes at real time
speed using computer clusters. In this section, we briefly introduce
the design aspects of DiSenS that are relevant to the implementation
of S2DB. The complete discussion and evaluation of DiSenS
are in papers [25, 24].
3.1 Full System Device Simulation
The building blocks of DiSenS are full system device simulators,
supporting popular sensor network devices, including iPAQ [9],
Stargate [21] and Mica2/MicaZ motes [15]. In this paper, we confine
our description to the functionality necessary for debugging
mote applications. However, the same functionality is implemented
for more complex devices such as the iPAQ and Stargate. A fuller
examination of debugging for heterogeneous sensor devices is
the subject of our future work.
The mote device simulator in DiSenS supports most of the Mica2
and MicaZ hardware features, including the AVR instruction set,
the ATmega128L microcontroller (memories, UARTs, timers, SPI
and ADC, etc.), the on-board Flash memory, CC1000 (Mica2) and
CC2420 (MicaZ) radio chips and other miscellaneous components
(like sensor board, LEDs, etc.).
The core of the device simulator is a cycle-accurate AVR instruction
emulator. The instruction emulator interacts with other hardware
simulation components via memory mapped I/O. When an
application binary is executed in the simulator, each machine instruction
is fed into the instruction emulator, shifting the internal
representation of hardware states accordingly and faithfully. Asynchronous
state changes are modelled as events. Events are scheduled
by hardware components and kept in an event queue. The instruction
emulator checks the event queue for each instruction execution
, triggering timed events. The collection of simulated hardware
features is rich enough to boot and execute unmodified binaries of
TinyOS [8] and most sensor network applications, including Surge,
TinyDB [14] and Deluge [4]. By correctly simulating hardware
components, the device simulator ensures the cycle accuracy, providing
the basis of faithful simulation of a complete sensor network.
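The following sketch illustrates the emulation loop described above: one AVR instruction is executed per iteration, the virtual clock advances by the instruction's cycle count, and any hardware events that have come due are fired. The types and helper functions are illustrative only and are not the actual DiSenS interfaces.

/* Sketch of a cycle-accurate emulation loop with an event queue.
 * avr_state, decode_execute, event_queue_* are placeholders. */

#include <stdint.h>

typedef struct avr_state avr_state;          /* registers, SRAM, I/O, ...   */
typedef struct event     event;              /* timestamped hardware event  */

uint16_t fetch_opcode(avr_state *s);
unsigned decode_execute(avr_state *s, uint16_t op);  /* returns cycles used */
event   *event_queue_peek(avr_state *s);
void     event_queue_pop_and_fire(avr_state *s);
uint64_t event_time(const event *e);

void emulate(avr_state *s, uint64_t clock_limit)
{
  uint64_t clock = 0;
  while (clock < clock_limit) {
    /* Execute one machine instruction and advance the virtual clock. */
    clock += decode_execute(s, fetch_opcode(s));

    /* Fire all asynchronous hardware events that are now due
     * (timer compares, radio byte completions, ADC conversions, ...). */
    for (event *e = event_queue_peek(s);
         e != 0 && event_time(e) <= clock;
         e = event_queue_peek(s)) {
      event_queue_pop_and_fire(s);
    }
  }
}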
The full system device simulator in DiSenS also presents extension
points or "hooks" for integrating power and radio models.
This extensible architecture provides a way to support the development
of new models and to trade simulation speed for level of
accuracy. For debugging, this extensibility enables developers to
test applications with different settings. For example, radio models
representing different environments (like outdoor, indoor, etc.) can
be plugged in to test applications under different circumstances.
In its default configuration, DiSenS incorporates an accurate power
model from [12], a simple linear battery model, a basic lossless radio
model, and a simple parameterized statistical model. The structure
of the system, however, incorporates these models as modules
that can be replaced with more sophisticated counterparts.
3.2 Scalable Distributed Simulation
DiSenS's ability to simulate hundreds of mote devices using distributed
cluster computing resources is its most distinctive feature.
This level of scalability makes it possible to experiment with large
sensor network applications before they are actually deployed and
to explore reconfiguration options "virtually" so that only the most
promising need to be investigated in situ. As a debugging tool,
DiSenS's scalability allows developers to identify and correct problems
associated with scale. For example, a data sink application
may work well in a network of dozens of nodes, but fails when the
network size increases to hundreds, due to the problems such as
insufficient queue or buffer size. Even for small scale network, the
scalability is useful because it translates into simulation speed, and
thus debugging efficiency.
DiSenS achieves its scalability by using a simple yet effective
synchronization protocol for radio simulation and applying automatic
node partition algorithms to spread the simulation/emulation
workload across machines in a computer cluster. In DiSenS, sensor
nodes are simulated in parallel, each running in its own operating
system thread and keeping its own virtual clock. Sensor nodes interact
with each other only in the radio transmission, during which
radio packets are exchanged. The radio interaction of sensor nodes
can be abstracted into two operations: read radio channel and write
radio channel. The analysis [25] shows that only when a node reads
the radio channel does it need to synchronize its clock with its neighbors
(i.e., potential radio transmitters in its radio range). This ensures
that each receiving node receives all the packets it is supposed to
receive. A primitive called wait on sync is introduced to perform
this synchronization, which forces the caller to wait for neighbor
nodes to catch up with its current clock time. To implement this
protocol, each node also has to keep its neighbors updated about its
clock advance by periodically sending out its current clock time. A
more detailed description and analysis of this protocol is in [25].
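A minimal sketch of the receive-side rule follows, under the assumption that the synchronization primitive behaves as described in the text; the surrounding function names are illustrative, not the actual DiSenS code.

/* Before a node reads (or samples) the radio channel it publishes its
 * own clock and then blocks until every neighbour has caught up, so no
 * in-flight byte can be missed.  All names here are placeholders. */

#include <stdint.h>

void broadcast_clock_update(uint32_t node_id, uint64_t clock);
void wait_on_sync(uint32_t node_id, uint64_t clock);  /* block until all
                                                         neighbours reach 'clock' */
uint8_t radio_channel_read(uint32_t node_id, uint64_t clock);

uint8_t synchronized_radio_read(uint32_t node_id, uint64_t local_clock)
{
  /* Send our clock update first, to avoid wait-for-each-other loops. */
  broadcast_clock_update(node_id, local_clock);

  /* Wait for all potential transmitters in radio range to catch up. */
  wait_on_sync(node_id, local_clock);

  /* Every byte sent up to local_clock has now been delivered. */
  return radio_channel_read(node_id, local_clock);
}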
To utilize distributed computing resources, DiSenS partitions nodes
into groups, each simulated on one machine within a cluster. Communication
between sensor nodes assigned to the same machine
is via a shared-memory communication channel. However, when
motes assigned to distinct machines communicate, that communication
and synchronization must be implemented via a message
pass between machines. Due to the relatively large overhead of
remote synchronization via message passing (caused by network
latency), partitioning of simulated nodes to cluster machines plays
an important role in making the ensemble simulation efficient.
To address this problem, graph-partitioning algorithms, originally
developed for tightly-coupled data-parallel high-performance
computing applications, are employed. DiSenS uses a popular partitioning
package [7] to partition nodes nearly optimally.
Our S2DB debugging tool is built upon DiSenS, whose design
has a strong impact on the debugging facilities that we have implemented,
bringing both advantages and limitations. In the next three
sections, we discuss how DiSenS interacts with S2DB to support
both conventional and novel debugging techniques.
DEBUGGING INDIVIDUAL DEVICES
S2DB was first built as a conventional distributed debugger on
the DiSenS simulator. Each group of sensor nodes has a standalone
debugging proxy waiting for incoming debugging commands. A
debugger console can thus attach to each individual sensor node
via this group proxy and perform debugging operations. The basic
S2DB includes most functions of a conventional debugger, like
state (register and memory) checking, break points and step execution,
etc.
In this section, we discuss how we exploit the potential of a simulation
environment to devise novel techniques for debugging single
sensor devices.
4.1 Debugging Point
Debugging is essentially a process of exposing a program's internal
states relevant to its abnormal behavior and pinpointing the
cause. Visibility of execution states is a determining factor of how
difficult the debugging task is. Building upon a full system simulator
for each device gives S2DB great potential to expose time-synchronized
state.
Conventional debuggers essentially manipulate three states of a
program: register, memory and program counter (PC). Simulators
Debugging Point (notation)   | Component                  | Parameters       | Value   | Interrupt | Watchable | Overhead
PC (pc)                      | microcontroller            | none             | Int     | No        | Yes       | Large
Register (reg)               | microcontroller            | address          | Int     | No        | Yes       | Large
Memory Read (mem rd)         | SRAM                       | address          | Boolean | No        | Yes       | Small
Memory Write (mem wr)        | SRAM                       | address          | Boolean | No        | Yes       | Small
Memory (mem)                 | SRAM                       | address          | Int     | No        | Yes       | Small
Flash Access (flash access)  | Flash                      | command, address | Boolean | No        | Yes       | Small
Flash (flash)                | Flash                      | address          | Int     | No        | Yes       | Small
Power Change (power)         | Power Model                | none             | Float   | No        | Yes       | Small
Timer Match (timer)          | Timers                     | none             | Boolean | Yes       | No        | Small
Radio Data Ready (spi)       | SPI (radio)                | none             | Boolean | Yes       | No        | Small
ADC Data Ready (adc)         | ADC (radio/sensor)         | none             | Boolean | Yes       | No        | Small
Serial Data Received (uart)  | UART                       | none             | Boolean | Yes       | No        | Small
Clock (clock)                | Virtual                    | none             | Int     | No        | Yes       | Minimal
Radio Packet Ready (packet)  | Radio Chip                 | none             | Packet  | No        | Yes       | Small
Program Defined (custom)     | Virtual Debugging Hardware | ID               | Int     | No        | Yes       | Program defined
Table 1: The current set of debugging points in S2DB.
can provide much more abundant state information, which may
enable or ease certain debugging tasks. For example, to debug a
TinyOS module that manages on-board flash memory, it is important
for the internal buffers and flash pages to be displayed directly.
It is straightforward for DiSenS but rather difficult in a conventional
debugger, which has to invoke complex code sequence to access the
flash indirectly.
We carefully studied the device states in DiSenS and defined a
series of debugging points. A debugging point is the access point
to one of the internal states of the simulated device. The device
state that is exposed by a debugging point can then be used by the
debugger for displaying program status and controlling program
execution, e.g., break and watch, as in a conventional debugger.
In this sense, debugging points have extended our debugger's
capability of program manipulation.
Table 1 lists the current set of debugging points defined in S2DB.
It is not a complete list since we are still improving our implementation
and discovering more meaningful debugging points. In the
table, the first column shows the debugging point name and the
abbreviated notation (in parentheses) used by the debugger console
. The corresponding hardware component that a debugging
point belongs to is listed in the second column. The third and fourth
columns specify the parameters and return value of a debugging
point. For example, the "memory" point returns the byte content
by the given memory address. The fifth column tells whether a debugging
point has an interrupt associated. And the sixth column
specifies whether a watch can be added to the point. The last column
estimates the theoretical performance overhead of monitoring
a particular debugging point.
As we see in the table, the common program states interrogated
by conventional debuggers, i.e. register, memory and program counter,
are also generalized as debugging points in S2DB, listed as reg,
mem and pc. For memory, we also introduced two extra debugging
points, mem rd and mem wr, to monitor the access to memory
in terms of direction. Notice that debugging points have different
time properties: some are persistent while others are transient. In
the memory case, the memory content, mem, is persistent, while
memory accesses, mem rd and mem wr, are transient. They are
valid only when memory is read or written.
Similarly, the on-board flash has two defined debugging points:
one for the page content (flash) and the other (flash access) for the
flash access, including read, program and erase. The power debugging
point is used to access the simulated power state of the device,
which may be useful for debugging power-aware algorithms.
Four important hardware events are defined as debugging points:
timer match event (timer), radio (SPI) data ready (spi), ADC data
ready (adc) and serial data ready (uart). They are all transient and
all related to an interrupt. These debugging points provide a natural
and convenient way to debug sensor network programs since
many of these programs are event-driven, such as TinyOS and its
application suite. As an example, if we want to break the program
execution at the occurrence of a timer match event, we can simply
invoke the command:
> break when timer() == true
In a more conventional debugger, a breakpoint is typically set in
the interrupt handling code, the name of which must be known to
the programmer. Furthermore, breaking on these event-based debugging
points is much more efficient than breaking on a source
code line (i.e., a specific program address). This is because matching
program addresses requires a comparison after the execution
of each instruction while matching event-based debugging points
only happens when the corresponding hardware events are triggered
, which occur much less frequently. We will discuss how to
use debugging points to set break conditions, and their overhead, later
in this subsection.
The clock debugging point provides a way for accurate timing
control over program execution. It can be used to fast forward the
execution to a certain point if we know that the bug of our interest
will not occur until after a period of time. It would be rather difficult
to implement this in a conventional debugger since there is no
easy way to obtain accurate clock timing across device subsystems.
It is also possible to analyze the states and data in the simulator to
extract useful high-level semantics and use them to build advanced
debugging points. An example is the recognition of radio packet.
The Mica2 sensor device uses the CC1000 radio chip, which operates
at the byte level. Thus an emulator can only see the byte stream
transmitted from/to neighbor nodes and not packet boundaries. For
application debugging, however, it is often necessary to break program
execution when a complete packet has been transmitted or
received. A typical debugging strategy is to set a breakpoint in the
radio software stack at the line of code that finishes a packet
reception. However, this process can be both tedious and unreliable
(e.g. software stack may change when a new image is installed),
especially during development or maintenance of the radio stack
itself. Fortunately, in the current TinyOS radio stack implementation
, the radio packet has a fixed format. We implemented a tiny
radio packet recognizer in the radio chip simulation code. A "radio
packet ready" (packet) debugging point is defined to signal the state
when a complete packet is received. These extracted high-level semantics
are useful because we can debug applications without relying
on the source code, especially when the application binary is
optimized code and it is hard to associate exact program addresses
with specific source code lines. However, discovering these semantics
using low-level data/states is challenging and non-obvious (at
least, to us) and as such continues to be a focus of our on-going
research in this area.
4.1.1 Break Conditions Using Debugging Points
Debugging points are used in a functional form. For example, if
we want to print a variable X, we can use:
> print mem(X)
To implement conditional break or watch points, they can be included
in imperatives such as:
> break when flash_access(erase, 0x1)
which breaks the execution when the first page of the flash is erased.
It is also possible to compose them:
> break when timer() && mem(Y) > 1
which breaks when a timer match event occurs and a state variable
Y , like a counter, is larger than 1.
The basic algorithm for monitoring and evaluating break conditions
is as follows. Each debugging point maintains a monitor
queue. Whenever a break point is set, its condition is added to the
queue of every debugging point that is used by the condition. Every
time the state changes at a debugging point, the conditions in
its queue are re-evaluated to check whether any of them is satisfied.
If so, one of the break points is reached and the execution is suspended
. Otherwise, the execution continues.
Note that the monitoring overhead varies for different debugging
points, revealing the possibility of optimizing the basic condition
evaluating algorithm. The monitoring overhead is determined
by the frequency of state change at a debugging point. Obviously,
pc has the largest overhead because it changes at each instruction
execution. Event related debugging points have very low overhead
since hardware events occur less frequently. For example, the timer
event may be triggered only once every few hundred cycles. The clock logically
has a large overhead since it changes every clock cycle. However,
in simulation, clock time is checked anyway for event triggering.
By implementing the clock monitoring itself as an event, we
introduce no extra overhead for monitoring the clock debugging point.
Thus we are able to optimize the implementation of condition
evaluation. For example, consider the following break condition:
> break when pc() == foo && mem(Y) > 1
Using the basic algorithm, the overhead of monitoring the condition
is the sum of pc's overhead and mem's overhead. However, since
the condition is satisfied when both debugging points match their
expressions, we can track only mem, since it has smaller overhead
than pc. When mem is satisfied, we then continue to check pc. In
this way, the overall overhead is reduced.
Now we present the general condition evaluation algorithm. Given
a condition as a logic expression, C, it is first converted into canonical
form using a product of maxterms:
C = t_1 ∧ t_2 ∧ ... ∧ t_n    (1)
where t_i is a maxterm. The overhead function f_ov is defined as the
total overhead to monitor all the debugging points in a maxterm.
Then we sort the maxterms by the value of f_ov(t_i) in incremental
order, say, t_k1, ..., t_kn. We start the monitoring of C first using
maxterm t_k1 by adding C to all the debugging points that belong
to t_k1. When t_k1 is satisfied, we re-evaluate C and stop if it is
true. Otherwise, we remove C from t_k1's debugging points and
start monitoring t_k2. If t_kn is monitored and C is still not satisfied,
we loop back to t_k1. We repeat this process until C is satisfied. If
C is unsatisfiable, this process never ends.
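The following sketch illustrates one possible implementation of this maxterm-rotation scheme; the data structures and helper functions are illustrative and are not taken from the S2DB source.

/* Sketch: a break condition C is a conjunction of maxterms sorted by
 * monitoring overhead; only the cheapest not-yet-satisfied maxterm has
 * C attached to its debugging points at any time. */

#include <stdbool.h>
#include <stddef.h>

typedef struct condition condition;
typedef struct maxterm {
  double overhead;                       /* f_ov: summed monitoring cost      */
  void (*attach)(condition *c);          /* add c to this maxterm's dbg points */
  void (*detach)(condition *c);
  bool (*satisfied)(void);
} maxterm;

struct condition {
  maxterm **terms;                       /* sorted by ascending overhead      */
  size_t    nterms;
  size_t    active;                      /* index of currently monitored term */
  bool    (*evaluate)(void);             /* full re-evaluation of C           */
};

/* Called by the simulator whenever the active maxterm becomes satisfied. */
bool condition_on_maxterm_hit(condition *c)
{
  if (c->evaluate())
    return true;                         /* break: suspend execution          */

  /* Not satisfied as a whole: rotate to the next maxterm (wrapping around). */
  c->terms[c->active]->detach(c);
  c->active = (c->active + 1) % c->nterms;
  c->terms[c->active]->attach(c);
  return false;                          /* keep running                      */
}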
Debugging points give us powerful capability to debug sensor
network programs at a level between the hardware level and the
source-code level. However, a direct instrumentation of the source
code is sometimes the easiest and most straightforward debugging method.
The typical methodology for implementing source-level instrumentation
is to use print statements to dump states. Printing, however,
can introduce considerable overhead that can mask the problem being
tracked.
In S2DB we include an instrumentation facility based on virtual
registers that serves the same purpose with reduced overhead. We
introduce our instrumentation facility in the next subsection.
4.2 Virtual Hardware Based Source Code Instrumentation
Sensor devices are usually resource-constrained, lacking the necessary
facility for debugging in both hardware and software. On
a Mica2 sensor device, the only I/O method that a program can use to
display its internal status is to flash the three LEDs,
which is tedious and error-prone to decode. DiSenS faithfully simulates
the sensor hardware, thus inheriting this limitation. Because
we insist that DiSenS maintain binary transparency with the native
hardware it emulates, the simulated sensor network program is not
able to perform a simple "printf".
To solve this problem, we introduce three virtual registers as an
I/O channel for the communication between application and simulator.
Their I/O addresses are allocated in the reserved memory
space of ATmega128L. Thus the access of these virtual registers
will not affect the correct functioning of other components. Table 2
lists the three registers and their functions.
Address | Name   | Functionality
0x75    | VDBCMD | Command Register
0x76    | VDBIN  | Input Register
0x77    | VDBOUT | Output Register
Table 2: Virtual registers for communication between application
and simulator.
The operation of virtual registers is as follows: an application
first issues a command in the command register, VDBCMD; then
the output data is transferred via the VDBOUT register and the input
data is read from the VDBIN register. The simplest application
of virtual registers is to print debugging messages by first sending
a "PRINT" command and then continuously writing the ASCII
characters in a string to the VDBOUT register until a new line is
reached. On the simulator side, whenever a command is issued, it
either reads from the VDBOUT register or sends data to VDBIN.
In the print case, when the simulator gets all the characters (ended
by a new line), it will print out on the host console of the simulating
machine.
A more advanced use of virtual registers is to control a debugging
point. We term this combination of virtual registers and debugging
points a program defined debugging point (custom, as listed
in the last line of Table 1). The state of a custom debugging point is
generated by the instrumentation code in the program. To do so, the
instrumentation code first sends a "DEBUG" command to the VDBCMD
register, then outputs the debugging data on the VDBOUT
register, in the form of a tuple, < id, value >. The id is used to
identify the instrumentation point in the source code and the value
is any value generated by the instrumented code. If there is a break
condition registered at this point, it will be checked against the tuple
and execution will stop when it is matched. As an example, if
we want to break at the
10th entry of a function, we can instrument
the function and keep a counter of entries. Every time the
counter changes, we output the counter value via virtual registers.
The break condition will be satisfied when the value equals
10.
To make it easy to use, we developed a small C library for accessing
the virtual registers transparently. Developers can invoke
accessing functions on these registers by simply calling the C APIs,
for example, in a TinyOS program.
Instrumentation via the virtual registers has minimal intrusiveness
on application execution. When generating a debugging
point event by sending an <id, value> tuple, only three register
accesses are needed if both values in the tuple are 8 bits each (one
for the command and two for the data).
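The following sketch shows what such a small instrumentation library could look like, using the register addresses from Table 2. The numeric command encodings are placeholders, since the paper does not specify them.

/* Sketch of the instrumentation library described above.  The three
 * virtual registers are memory mapped at the reserved ATmega128L
 * addresses from Table 2; VDB_CMD_PRINT and VDB_CMD_DEBUG are assumed
 * encodings, not values given in the paper. */

#include <stdint.h>

#define VDBCMD (*(volatile uint8_t *)0x75)   /* command register */
#define VDBIN  (*(volatile uint8_t *)0x76)   /* input register   */
#define VDBOUT (*(volatile uint8_t *)0x77)   /* output register  */

#define VDB_CMD_PRINT 0x01                   /* assumed encoding */
#define VDB_CMD_DEBUG 0x02                   /* assumed encoding */

/* Print a message on the host console of the simulating machine. */
void vdb_print(const char *msg)
{
  VDBCMD = VDB_CMD_PRINT;
  while (*msg)
    VDBOUT = (uint8_t)*msg++;
  VDBOUT = '\n';                             /* newline terminates the string */
}

/* Emit an <id, value> tuple for a program defined (custom) debugging
 * point: three register accesses in total for 8-bit id and value. */
void vdb_debug_point(uint8_t id, uint8_t value)
{
  VDBCMD = VDB_CMD_DEBUG;
  VDBOUT = id;
  VDBOUT = value;
}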
COORDINATED PARALLEL DEBUGGING OF MULTIPLE DEVICES
DiSenS's scalability and performance enable S2DB to debug
large cooperating ensembles of sensors as a simulated sensor network
deployment. Like other debuggers, S2DB permits its user to
attach to and "focus" on a specific sensor while the other sensors
in the ensemble execute independently. However, often, more systematic
errors emerge from the interactions among sensor nodes
even when individual devices and/or applications are functioning
correctly. To reveal these kinds of errors, developers must be able
to interrogate and control multiple sensor devices in a coordinated
way.
Debugging a program normally involves displaying program status,
breaking program execution at arbitrary points, step-executing,
etc. By extending this concept to parallel debugging, we want to be
able to:
1. Display the status of multiple devices in parallel;
2. Break the execution of multiple devices at a certain common
point;
3. Step-execute multiple devices at the same pace.
The first and third items in the above "wish list" are easy to implement
in a simulation context. S2DB can simply "multicast" its debugging
commands to a batch of sensor nodes once their execution
stops at a certain common point. As for the second item, since DiSenS
is, in effect, executing multiple parallel simulations without
a centralized clock, implementing a time-correlated common
breakpoint shares the same coordination challenges as its parallel
debugging counterpart.
The simplest form of coordinated break is to pause the execution
of a set of involved nodes at a specific virtual time, T :
> :break when clock() == T
where the colon before "break" indicates that it is a batch command
and will be sent to all the nodes in a global batch list (maintained
by other commands).
It is necessary to review DiSenS's synchronization mechanism
first. We summarize the major rules as follows:
1. A node that receives or samples radio channel must wait for
all its neighbors to catch up with its current clock time;
Figure 1: Illustration of synchronization between sensor nodes
in DiSenS . Dashed arrows indicate the update and transmission
messages. A < B < C < D.
2. All nodes must periodically broadcast their clock updates to
neighbors;
3. Before any wait, a node must first send its clock update (to
avoid loop waiting);
4. Radio byte is always sent with a clock update at the end of
its last bit transmission.
Figure 1 illustrates the process. At time point A, node X receives.
It first sends an update of its clock and waits for its neighbor Y
(rules 3 & 1). Y runs to B and sends its clock update (rule 2), which
wakes up X. X proceeds to C and receives again. Y starts a byte
transmission at B. At D, the last bit is transmitted and so the byte,
along with a clock update, is sent to X (rule 4). X receives the
byte, knowing Y passes its current time, and proceeds.
Figure 2: Break at a certain point of time. Dashed arrows indicate
the update and transmission messages. B < A < C.
Now, let's see what happens when we ask multiple nodes to stop
at the same time. Figure 2 shows one case of the situation. X
receives at time A and sends an update and waits for Y . Y sends
an update at B. Its next update time is C. But we want to break at
a point before C but after A. Since Y breaks (thus waits), it sends
an update (rule 3). X receives the update, wakes up, proceeds to
the break point and stops. Now both X and Y are stopped at the
same time point.
In Figure 3, the situation is similar to the case in Figure 2. The
difference is that now the break point is in the middle of a byte
transmission for Y . Y cannot just send an update to X and let
X proceed to the break point as in Figure 2, because if X gets the
update from Y , it believes Y has no byte to send up to the break
point and will continue its radio receiving logic. Thus the partial
byte from Y is lost. This problem is caused by rule 4. We solve
it by relaxing the rule: Whenever a node is stopped (thus it waits)
in the middle of a byte transmission, the byte is pre-transmitted
with the clock update. We can do this because mote radio always
transmits in byte unit. Once a byte transmission starts, we already
Figure 3: Extension to the synchronization protocol: pre-transmission.
Dashed arrows indicate the update and transmission
messages. B < A < C.
know its content. Also, in DiSenS, each byte received by a node is
buffered with a timestamp. It will be processed only when the time
matches the local clock. With this relaxed rule, we are now able to
stop multiple sensor nodes at the same virtual time.
The next question is how to perform a conditional break on multiple
nodes. Notice that we cannot simply implement:
> :break when mem(X) > 3
because it asks the nodes to break independently. Whenever a node
breaks at some point, other nodes with direct or indirect neighborhood
relationship with it will wait at indeterminate points due to
the synchronization requirement. Whether they all satisfy the condition
is not clear. A reasonable version of this command is:
> :break when *.mem(X) > 3
or
> :break when node1.mem(X) > 3
&& ... && nodek.mem(X) > 3
which means "break when X > 3 for all the nodes". In the general
form, we define a coordinated break as a break with condition
cond_1 ∧ cond_2 ∧ ... ∧ cond_k, where cond_i is a logic expression
for node i.
Figure 4: Coordinated break. The shaded boxes represent the
time range during which a local condition is satisfied. Between
C and D, the global condition is satisfied. A < B < C < D.
Figure 4 illustrates the meaning of this form of breakpoint. The
shaded boxes are the time period during which the local condition
for a node is satisfied. In Figure 4, the global condition, i.e.
cond_x ∧ cond_y ∧ cond_z, is satisfied between time C and D. Time
C is the exact point where we want to break.
Before we present the algorithm that implements coordinated
break, we need to first introduce a new synchronization scheme.
We call it partially ordered synchronization. By default DiSenS
implements peer synchronization: all the nodes are running in arbitrary
order except synchronized during receiving or sampling. The
new scheme imposes a partial order. In this scheme, a node master
is first specified. Then all the other nodes proceed by following the
master node. That is, at any wall clock time t_wall (i.e., the real
world time), for any node i, clock_i <= clock_master.
Figure 5: TOP: peer synchronization in DiSenS . A < B <
D < C. BOTTOM: partially ordered synchronization for S2DB.
A < C < E, B = C, D = E. Dashed arrows indicate the
update and transmission messages. (Some update messages are
omitted)
Figure 5 illustrates the two synchronization schemes. The top
part shows DiSenS's peer synchronization scheme. Node X waits
at A. Y sends an update at B and wakes X. Then Y waits at D,
woken by X's update at C. X and Y proceed in parallel afterwards.
The bottom part shows S2DB's partially ordered synchronization
scheme. Here Y is the master. X first waits at A. Y sends its
update at B. X receives the update and runs to the updated point,
which is C (=B). Then X waits again. When Y runs to D and
sends an update, X can proceed to E (=D). If Y needs to wait to
receive, X will wake it up when X reaches E according to rule 3.
Obviously, in this scheme, X always follows Y .
Now we can give our algorithm for coordinated break. Using
Figure 4 as the example, we first designate X as the master. At
point A, X's condition is satisfied. X stops at A. Since Y and
Z follow X, they all stop at A. Then we choose the next node as
the new master, whose condition is not satisfied yet. It is Y . X
and Z follow Y until Y reaches B. Next, similarly, we choose Z
as the new master. At time C, we find cond_x ∧ cond_y ∧ cond_z =
true. We break the execution and C is exactly our break point. In
this algorithm, the aforementioned pre-transmission also plays an
important role in that it enables us to stop all nodes at the same time
point precisely.
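A compact sketch of this master-rotation procedure is given below; all helper functions are illustrative and not the actual DiSenS/S2DB interfaces.

/* Sketch of the coordinated-break algorithm above: the group is driven
 * by one master at a time; whenever the master's local condition becomes
 * satisfied, mastership rotates to a node whose condition is not yet
 * satisfied, until all local conditions hold at the same virtual time. */

#include <stdbool.h>
#include <stddef.h>

typedef struct node node;

bool local_condition(node *n);                 /* cond_i at current time     */
void set_master(node *group[], size_t n, node *master);
void run_until_master_condition(node *master); /* followers trail the master */

void coordinated_break(node *group[], size_t n)
{
  for (;;) {
    /* Pick any node whose local condition is not yet satisfied. */
    node *master = NULL;
    for (size_t i = 0; i < n; i++) {
      if (!local_condition(group[i])) { master = group[i]; break; }
    }
    if (master == NULL)
      return;                                  /* all conditions hold: break here */

    /* Everyone follows the new master until its condition becomes true;
     * pre-transmission keeps all nodes stoppable at the same instant. */
    set_master(group, n, master);
    run_until_master_condition(master);
  }
}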
Coordinated break, however, does not work with arbitrary conditions.
Consider the case where the local conditions in Figure 4
are connected by disjunction instead of conjunction. The break
point now should be at time A. Since we are not able to predict
which node will first satisfy its condition, it is not possible for us
to stop all the nodes together at time A unless we synchronize all
the nodes cycle by cycle, which would limit the scalability and the
performance significantly. For the same reason, we cannot set up
multiple coordinated break points. We reiterate that these limitations
are a direct result of our desire to scale DiSenS and to use
S2DB on large-scale simulated networks. That is, we have sacrificed
generality in favor of the performance gained through parallel
and distributed-memory implementation.
Although the generality of coordinated break is limited, it is still
useful in many situations. For example, for a data sink application,
we may want to determine why data is lost when a surge of data
flows to the sink node. In this case, we would break the execution
of the sink node based on the condition that its neighbor nodes have
sent data to it. Then we step-execute the program running on the
sink node to determine why the data is being lost. To implement
the condition of data sent on neighbor nodes, we can simply use
source code instrumentation exporting a custom debugging point.
Thus this example also illustrates how the single-device debugging
features discussed in the previous section can be integrated with the
group debugging features.
FAST TIME TRAVELING FOR REPLAYABLE DEBUGGING
Even with the ability to perform coordinated breakpoints, the
normal debugging cycle of break/step/print is still cumbersome when
the complete sensor network is debugged, especially if the size of
the network is large. The high level nature of some systematic errors
requires a global view of the interactions among sensor nodes. An
alternative model for debugging sensor networks is:
1. A simulation is conducted with tracing. The trace log is analyzed
to pinpoint the anomaly.
2. Quickly return to the point when the anomaly occurs to perform
detailed source code level debugging.
To achieve this, we need to trace the simulation and restore the
state of the network at any point in the trace. The debugging points
and virtual hardware based instrumentation discussed in Section 4
can be used to trace the simulation in a way similar to [23]. In this
section, we present S2DB's design of fast time traveling, which
enables the restoration of network states.
The basic mechanism required to implement time traveling is
a periodic checkpoint. A checkpoint of a simulation is a complete
copy of the state of the simulated sensor network. DiSenS is
an object oriented framework for representing device components.
When a checkpoint is initiated, the state saving function is invoked
first at the highest level "machine" object. Recursively, the sub-components
in the "machine" invoke their own state saving functions
. The saved state is comprised of registers, memories (SRAM,
EEPROM, etc.) and auxiliary state variables in each component.
It also includes some simulation related states. For example, we
need to save the event queue content, the received radio byte queue
in the radio model and the status of the power model, etc. The
complete binary of the state is saved into a timestamped file. The
resulting checkpoint file for DiSenS has a size of 4948 bytes, mostly
comprised of SRAM (4KB) content.
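The following sketch illustrates the recursive state-saving scheme described above; the component interface shown is illustrative rather than the actual DiSenS class hierarchy.

/* Sketch: the top-level "machine" object asks each sub-component to
 * append its own state to a timestamped checkpoint file. */

#include <stdio.h>
#include <stdint.h>

typedef struct component component;
struct component {
  const char *name;
  void      (*save_state)(component *self, FILE *out);
  component **children;                 /* sub-components, NULL-terminated   */
};

static void checkpoint_component(component *c, FILE *out)
{
  c->save_state(c, out);                /* registers, memories, aux state    */
  for (component **child = c->children; child && *child; child++)
    checkpoint_component(*child, out);  /* recurse into sub-components       */
}

void checkpoint_machine(component *machine, uint64_t virtual_clock)
{
  char fname[64];
  snprintf(fname, sizeof fname, "checkpoint-%llu.state",
           (unsigned long long)virtual_clock);
  FILE *out = fopen(fname, "wb");
  if (!out) return;
  checkpoint_component(machine, out);   /* roughly 5 KB per mote, mostly SRAM */
  fclose(out);
}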
Checkpoint for the on-board flash has to be handled differently.
Motes have a 512KB flash chip used for sensor data logging and
in-network programming. If the flash content is saved like the other
components, the checkpoint file will be over half a megabyte,
which is 128 times larger than one without flash. So if flash is
also saved in a snapshot way, it is both extremely space and time
inefficient for a large scale sensor network. We solve this problem
by saving flash operations in a log file. Since most sensor network
applications use flash infrequently and flash content is updated in
page unit, the overhead of saving log is much smaller than saving
flash snapshots. Notice that the flash buffers have to be saved in the
snapshot checkpoint file.
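A minimal sketch of such a flash-operation log follows. The record layout and the page size constant are assumptions for illustration, not the DiSenS on-disk format.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FLASH_PAGE_SIZE 256   /* assumed page size, for illustration only */

struct flash_log_rec {
    uint64_t sim_time;               /* virtual time of the flash operation */
    uint32_t page;                   /* page number being written           */
    uint8_t  data[FLASH_PAGE_SIZE];  /* new page content                    */
};

/* Append one page-write record; the log stays small because flash is
   written infrequently and always in page units. */
static void log_flash_write(FILE *log, uint64_t t, uint32_t page,
                            const uint8_t *buf)
{
    struct flash_log_rec r;
    r.sim_time = t;
    r.page = page;
    memcpy(r.data, buf, FLASH_PAGE_SIZE);
    fwrite(&r, sizeof r, 1, log);
}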
Once a simulation is finished, we have a set of snapshot checkpoints
and a continuous flash log. Given an arbitrary time point T,
restoring the state of the system involves the following steps (sketched in code below):
1. Restore: find the latest checkpoint CP that is prior to T and
load the snapshot checkpoint file;
2. Replay: if the flash is used, replay the flash operation log up to
CP's time;
3. Re-run: starting from CP, re-run the simulation until time T.
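The three steps can be put together as in the following sketch; the helper functions are placeholders for the corresponding simulator operations, not actual DiSenS/S2DB APIs.

#include <stdint.h>

/* Placeholders for the corresponding simulator operations. */
uint64_t find_latest_checkpoint_before(uint64_t t);
void load_checkpoint(uint64_t cp_time);
void replay_flash_log_until(uint64_t cp_time);
void run_simulation_until(uint64_t t);

static void time_travel_to(uint64_t T)
{
    uint64_t cp = find_latest_checkpoint_before(T); /* 1. Restore: latest CP prior to T      */
    load_checkpoint(cp);
    replay_flash_log_until(cp);                     /* 2. Replay: flash log up to CP's time  */
    run_simulation_until(T);                        /* 3. Re-run: from CP until time T       */
}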
Checkpoints can also be initiated by methods other than the need
to take a periodic snapshot. For example, under S2DB a break
point can be associated with a checkpoint so that once the execution
breaks, a checkpoint is generated. Thus, a developer can move
between the checkpoints to find the exact point at which an error occurs
during a replayed simulation. A checkpoint can also be initiated by a
debugging point, especially a custom debugging point. By allowing
checkpoints to be triggered in conjunction with debugging points,
S2DB integrates the replay and state-saving capabilities needed to
efficiently re-examine an error condition with execution control
over state changes.
EVALUATION
Since S2DB is built upon DiSenS, its performance is highly dependent
on DiSenS itself. We begin this section by focusing on the
performance of DiSenS simulation/emulation and then show the
overhead introduced by various S2DB debugging facilities. All experiments
described in this section are conducted using a 16-node
cluster in which each host has dual 3.2GHz Intel Xeon processors
with 1GB of memory. The hosts are connected via switched gigabit
Ethernet. To make a fair comparison, we use the same sensor network
application, CntToRfm, for evaluation.
7.1 Performance of DiSenS
For brevity, we present only the typical simulation speed of DiSenS
on the cluster. A more thorough examination of scalability
and performance under different configurations can be found in [25].
Figure 6 shows the performance achieved by DiSenS when simulating
various numbers of nodes on the cluster in both 1-D and 2-D
topologies. In the figure, the X axis shows the total number of nodes
simulated. The Y axis is the normalized simulation speed (compared
to real time speed on hardware). For the 1-D topology, all nodes are
oriented on a straight line, 50 meters apart (assuming the maximal
radio range is 60 meters). For the 2-D topology, nodes are arranged
in a square grid. Again the distance between two nodes is 50 meters.
Both performance curves are very close except in the middle part,
where the 2-D topology has slightly worse performance.
The simulation speed drops noticeably from 1 to 4 nodes but
then the speed curve stays flat until 128 nodes are simulated. After
that, the speed decreases linearly. The transition from flat to linear
decrease occurs because there are not enough computing resources
within the cluster (16 hosts).
To summarize the results from [25], DiSenS is able to simulate
one mote 9 times faster than real time speed, or 160 nodes at near
real time speed, or 2048 nodes at nearly a tenth of real time speed.
Figure 6: DiSenS simulation performance in 1-D and 2-D topologies. X-axis is total number of nodes simulated. Y-axis is normalized simulation speed (compared to execution speed on real device).
7.2 Performance of a Break Condition on a Single Device
We first evaluate the cost of monitoring debugging points in single-device
debugging. Not all of the debugging points listed in Table 1 are
evaluated, since the overhead for some of them is application dependent.
Figure 7: Relative simulation speed for various debugging points (pc, memrd, memwr, power, timer, spi). X-axis shows the name of debugging points. Y-axis is the ratio to original simulation speed (without monitoring debugging points).
Figure 7 gives the relative simulation speed of evaluating various
debugging points. For each one, we set a break condition using
the debugging point and run the simulation. The result shows that
pc has the largest overhead since the PC change occurs for every
instruction execution. Memory-related debugging points have less
overhead. Power and event-based debugging points have the least
overhead since their states change infrequently.
7.3 Performance of a Coordinated Break Condition with Multiple Devices
We evaluate the overhead of monitoring the coordinated break
condition in this subsection. We run our experiments with a
2-D 4 × 4 grid of sensor nodes, distributed in 4 groups (hosts).
Figure 8: Relative simulation speed of monitoring a coordinated break condition for multiple devices. X-axis is the number of groups (hosts) involved. Y-axis is the ratio to original simulation speed (without condition monitoring).
Figure 8 shows the speed ratio between the simulation with monitoring
and without. When the group number is 1, only nodes in
one group are involved in the break condition. For group number
2, nodes in both groups are used in the break condition, and so on.
The speed ratio curve drops as the number of groups increases.
The overhead of monitoring the coordinated break condition is mostly
due to the extra synchronization cost introduced by the new partially
ordered synchronization scheme. Obviously, when more nodes
(especially remote nodes) are involved, the simulation overhead is higher.
7.4 Performance of Checkpointing for Time Traveling
We evaluate the overhead of checkpointing in four configurations:
1 × 1, 4 × 1, 16 × 1 and 4 × 4, where x × y means x nodes
per group and y groups. For each one, we vary the checkpoint interval
from 1/8 up to 4 virtual seconds.
Figure 9 shows the relative simulation speed when checkpointing
the system periodically. Naturally, the overhead increases when
checkpointing more frequently. It is hard to distinguish the single-group
curves since their differences are so small. In general, checkpointing
in multi-group simulation seems to have larger overhead
than single-group. However, the checkpoint overhead is relatively
small. All four curves lie above 96% of the original simulation speed,
which translates to less than 4% overhead. This result encourages
us to use time traveling extensively in debugging. Developers
thus can always return to the last break point or a previous trace
point with little cost.
To summarize, we find that most of the new debugging facilities
we have introduced with S2DB have small overhead (less than 10%).
As a result, we are able to debug sensor network applications
using tools that operate at different levels of abstraction while
preserving the high performance and scalability provided by DiSenS.
Figure 9: Relative simulation speed for checkpointing (configurations 1x1, 4x1, 16x1, and 4x4). X-axis is the interval between two checkpoints (in terms of virtual clock time of the mote device). Y-axis is the ratio to original simulation speed (without checkpointing).
CONCLUSION
S2DB is an efficient and effective sensor network debugger based
on DiSenS, a scalable distributed sensor network simulator. S2DB
makes four innovations to the conventional debugging scheme at
different levels of abstraction. For effective debugging of single
sensor devices, debugging points are introduced for the interrogation
of all subsystem states of interest in a sensor device. To facilitate
source level tracing and instrumentation, we extend the simulated
sensor device hardware with a set of virtual registers, providing
a way for communication between the simulator and the simulated
program. At the multi-device level, we discuss the implementation
of coordinated break conditions in the distributed framework.
This new type of break condition enables coordinated parallel execution
control of multiple sensor devices. A time traveling facility
is introduced for network level debugging, used for rapid error
site restoration when working with sensor network trace analysis.
Overall, these debugging features impose an overhead of less than
10% (generally) on DiSenS, and thus enable efficient debugging of
large scale sensor networks.
S2DB is still an ongoing project; to make it a comprehensive
debugging tool for sensor networks, there is still a lot of
work to do. The most imperative task is to design and implement a
graphical user interface for intuitive and productive debugging. We
are planning to build a plugin for the popular Eclipse [5] development
environment, which will control the debugging and simulation
functions in S2DB and DiSenS. We are also interested in incorporating
debugging needs according to people's experiences in
sensor network development and in discovering new debugging techniques,
especially at the network level.
REFERENCES
[1] Atmel. AVR JTAG ICE User Guide. 2001. http://www.atmel.com/dyn/resources/prod documents/DOC2475.PDF.
[2] Atmel's AVR JTAG ICE. http://www.atmel.com/dyn/products/tools card.asp?tool id=2737.
[3] C. Buschmann, D. Pfisterer, S. Fischer, S. P. Fekete, and A. Kröller.
SpyGlass: taking a closer look at sensor networks. In the Proceedings
of the 2nd International Conference on Embedded Networked Sensor
Systems, pages 301-302, New York, NY, USA, 2004.
[4] A. Chlipala, J. W. Hui, and G. Tolle. Deluge: Dissemination
Protocols for Network Reprogramming at Scale. Fall 2003 UC
Berkeley class project paper, 2003.
[5] Eclipse: an extensible development platform and application
frameworks for building software. http://www.eclipse.org.
[6] L. Girod, J. Elson, A. Cerpa, T. Stathopoulos, N. Ramanathan, and
D. Estrin. EmStar: a Software Environment for Developing and
Deploying Wireless Sensor Networks. USENIX Technical
Conference, 2004.
[7] B. Hendrickson and R. Leland. The Chaco User's Guide: Version
2.0. Technical Report SAND94-2692, Sandia National Lab, 1994.
[8] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister.
System architecture directions for network sensors. International
Conference on Architectural Support for Programming Languages
and Operating Systems, Oct. 2000.
[9] iPAQ devices. http://welcome.hp.com/country/us/en/prodserv/handheld.html.
[10] Boundary-Scan (JTAG) test and in-system programming solutions
(IEEE 1149.1). http://www.jtag.com/main.php.
[11] S. T. King, G. W. Dunlap, and P. M. Chen. Debugging Operating
Systems with Time-Traveling Virtual Machines. In the Proceedings
of USENIX Annual Technical Conference 2005, Apr. 2005. Anaheim,
CA.
[12] O. Landsiedel, K. Wehrle, and S. Götz. Accurate Prediction of Power
Consumption in Sensor Networks. In Proceedings of The Second
IEEE Workshop on Embedded Networked Sensors (EmNetS-II), May
2005. Sydney, Australia.
[13] P. Levis, N. Lee, M. Welsh, and D. Culler. TOSSIM: Accurate and
Scalable Simulation of Entire TinyOS Applications. ACM
Conference on Embedded Networked Sensor Systems, Nov. 2003.
[14] S. R. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. The
Design of an Acquisitional Query Processor for Sensor Networks. In
Proceedings of SIGMOD 2003, June 2003.
[15] Mote hardware platform. http://www.tinyos.net/scoop/special/hardware.
[16] MOTE-VIEW Monitoring Software. http://www.xbow.com/Products/productsdetails.aspx?sid=88.
[17] J. Polley, D. Blazakis, J. McGee, D. Rusk, and J. S. Baras. ATEMU:
A Fine-grained Sensor Network Simulator. IEEE Communications
Society Conference on Sensor and Ad Hoc Communications and
Networks, 2004.
[18] N. Ramanathan, K. Chang, R. Kapur, L. Girod, E. Kohler, and
D. Estrin. Sympathy for the Sensor Network Debugger. In the
Proceedings of 3rd ACM Conference on Embedded Networked
Sensor Systems (SenSys '05), Nov. 2005. San Diego, California.
[19] N. Ramanathan, E. Kohler, and D. Estrin. Towards a debugging
system for sensor networks. International Journal of Network
Management, 15(4):223-234, 2005.
[20] S. M. Srinivasan, S. Kandula, C. R. Andrews, and Y. Zhou.
Flashback: A Lightweight Extension for Rollback and Deterministic
Replay for Software Debugging. In the Proceedings of USENIX
Annual Technical Conference 2004, June 2004. Boston, MA.
[21] Stargate: a platform X project. http://platformx.sourceforge.net/.
[22] Surge Network Viewer. http://xbow.com/Products/productsdetails.aspx?sid=86.
[23] B. Titzer and J. Palsberg. Nonintrusive Precision Instrumentation of
Microcontroller Software. In the Proceedings of ACM
SIGPLAN/SIGBED 2005 Conference on Languages, Compilers, and
Tools for Embedded Systems (LCTES'05), June 2005. Chicago,
Illinois.
[24] Y. Wen, S. Gurun, N. Chohan, R. Wolski, and C. Krintz. SimGate:
Full-System, Cycle-Close Simulation of the Stargate Sensor Network
Intermediate Node. In Proceedings of International Conference on
Embedded Computer Systems: Architectures, MOdeling, and
Simulation (IC-SAMOS), 2006. Samos, Greece.
[25] Y. Wen, R. Wolski, and G. Moore. DiSenS: Scalable Distributed
Sensor Network Simulation. Technical Report CS2005-30,
University of California, Santa Barbara, 2005.
| Sensor Network;Simulation;Debugging |
172 | Scalable Data Aggregation for Dynamic Events in Sensor Networks | Computing and maintaining network structures for efficient data aggregation incurs high overhead for dynamic events where the set of nodes sensing an event changes with time. Moreover, structured approaches are sensitive to the waiting-time which is used by nodes to wait for packets from their children before forwarding the packet to the sink. Although structure-less approaches can address these issues, the performance does not scale well with the network size. We propose a semi-structured approach that uses a structure-less technique locally followed by Dynamic Forwarding on an implicitly constructed packet forwarding structure to support network scalability. The structure, ToD, is composed of multiple shortest path trees. After performing local aggregation , nodes dynamically decide the forwarding tree based on the location of the sources. The key principle behind ToD is that adjacent nodes in a graph will have low stretch in one of these trees in ToD, thus resulting in early aggregation of packets. Based on simulations on a 2000 nodes network and real experiments on a 105 nodes Mica2-based network, we conclude that efficient aggregation in large scale networks can be achieved by our semi-structured approach. | Introduction
Data aggregation is an effective technique for conserving
communication energy in sensor networks. In sensor
networks, the communication cost is often several orders of
magnitude larger than the computation cost. Due to inherent
redundancy in raw data collected from sensors, in-network
data aggregation can often reduce the communication cost
by eliminating redundancy and forwarding only the extracted
information from the raw data. As reducing consumption
of communication energy extends the network lifetime, it is
critical for sensor networks to support in-network data aggregation
.
Various data aggregation approaches have been proposed
for data gathering applications and event-based applications.
These approaches make use of cluster based structures [1, 2]
or tree based structures [3-8]. In data gathering applications,
such as environment and habitat monitoring [9-12], nodes
periodically report the sensed data to the sink. As the traffic
pattern is unchanging, these structure-based approaches
incur low maintenance overhead and are therefore suitable
for such applications. However, in event-based applications,
such as intrusion detection [13, 14] and biological hazard detection
[15], the source nodes are not known in advance.
Therefore the approaches that use fixed structures can not
efficiently aggregate data, while the approaches that change
the structure dynamically incur high maintenance overhead
[4, 5]. The goal of this paper is to design a scalable and efficient
data aggregation protocol that incurs low maintenance
overhead and is suited for event-based applications.
Constructing an optimal structure for data aggregation for
various aggregation functions has been proven to be an NP-hard
problem [16, 17]. Although heuristics can be used to
construct structures for data aggregation, another problem
associated with the convergecast traffic pattern, where nodes
transmit their packets to the cluster-head or parent in cluster
or tree structures, results in low performance of structure
based data aggregation protocols. In [18] the simulation
results show that the packet dropping rate in Shortest Path
Tree (SPT) is higher because of heavy contention caused by
the convergecast traffic. This results in more packet drops
and increased delays. As a result, enforcing a fixed order
of packet transmissions becomes difficult, which impacts the
performance of data aggregation in structured approaches.
Typically, packets have to be transmitted in a fixed order
from leaves to the root in a tree-like structure to achieve maximum
aggregation. Dropped packets not only make the optimal
structure sub-optimal, but also waste energy on transmitting
packets that are unable to reach the sink.
In [19] it is shown that the performance gain from using heuristics
to create the Steiner Minimum Tree (SMT) for aggregation
is not significant compared with using only the Shortest
Path Tree (SPT), not to mention that the overhead of
constructing such a structure may negate the benefit resulting
from data aggregation. However, their conclusions were
based on the assumption of randomly located data sources,
which is different from the scenarios in event-based sensor
networks where a set of close-by nodes is expected to sense
an event.
Realizing the shortcomings of structured approaches, [20]
proposes an anycast based structure-less approach at the
MAC layer to aggregate packets. It involves mechanisms
to increase the chance of packets meeting at the same node
(Spatial Aggregation) at the same time (Temporal Aggregation
). As the approach does not guarantee aggregation of all
packets from a single event, the cost of forwarding unaggregated
packets increases with the scale of the network and the
distance of the event from the sink.
To benefit from the strengths of the structured and the
structure-less approaches, we propose a semi-structured approach
in this paper. The main challenge in designing such
a protocol is to determine the packet forwarding strategy in
absence of a pre-constructed global structure to achieve early
aggregation. Our approach uses a structure-less technique locally
followed by Dynamic Forwarding on an implicitly constructed
packet forwarding structure to support network scalability
. The structure, ToD (Tree on Directed acyclic graph),
is composed of multiple shortest path trees. After performing
local aggregation, nodes dynamically decide the forwarding
tree based on the location of the source nodes. The key principle
behind ToD is that adjacent nodes in a graph will have
low stretch in at least one of these trees in ToD, thus resulting
in early aggregation of packets. This paper makes the
following contributions:
We propose an efficient and scalable data aggregation
mechanism that can achieve early aggregation without
incurring any overhead of constructing a structure.
We implement the ToD approach on TinyOS and compare
its performance against other approaches on a 105-node
sensor network.
For studying the scalability aspects of our approach, we
implement ToD in the ns2 simulator and study its performance
in networks of up to 2000 nodes.
The organization of the rest of the paper is as follows.
Section 2 presents background and related work. Section
3 presents the structure-less approach. Section 4 analyzes
the performance of ToD in the worst case. The performance
evaluation of the protocols using simulations and experiments
is presented in Section 5. Finally Section 6 concludes
the paper.
Related Work
Data aggregation has been an active research area in sensor
networks for its ability to reduce energy consumption.
Some works focus on how to aggregate data from different
nodes [2124], some focus on how to construct and maintain
a structure to facilitate data aggregation [1-8, 17, 25-30],
and some focus on how to efficiently compress and aggregate
data by taking the correlation of data into consideration
[17, 31-34]. As our work focuses on how to facilitate
data aggregation without incurring the overhead of constructing
a structure, we briefly describe the structure-based
as well as structure-less approaches in current research.
In [1,2], the authors propose the LEACH protocol to cluster
sensor nodes and let the cluster-heads aggregate data. The
cluster-heads then communicate directly with the base station
. PEGASIS [26] extends LEACH by organizing all nodes
in a chain and letting nodes be the head in turn. [26, 27] extend
PEGASIS by allowing simultaneous transmission that
balances the energy and delay cost for data gathering. Both
LEACH and PEGASIS assume that any node in the network
can reach the base-station directly in one-hop, which limits
the size of the network for them to be applicable.
GIT [3] uses a different approach as compared to LEACH.
GIT is built on top of a routing protocol, Directed Diffusion
[21,22], which is one of the earliest proposed attribute-based
routing protocols. In Directed Diffusion, data can be aggregated
opportunistically when they meet at any intermediate
node. Based on Directed Diffusion, the Greedy Incremental
Tree (GIT) establishes an energy-efficient tree by attaching
all sources greedily onto an established energy-efficient path
and pruning less energy efficient paths. However due to the
overhead of pruning branches, GIT might lead to high cost
in moving event scenarios.
In [4, 5], the authors propose DCTC, Dynamic Convoy
Tree-Based Collaboration, to reduce the overhead of tree migration
in mobile event scenarios. DCTC assumes that the
distance to the event is known to each sensor and uses the
node near the center of the event as the root to construct and
maintain the aggregation tree dynamically. However it involves
heavy message exchanges which might offset the benefit
of aggregation in large-scale networks. From the simulation
results in DCTC [5], the energy consumption of tree
expansion, pruning and reconfiguration is about 33% of the
data collection.
In [8], the authors propose an aggregation tree construction
algorithm to simultaneously approximate the optimum
trees for all non-decreasing and concave aggregation functions
. The algorithm uses a simple min-cost perfect matching
to construct the tree. [7] also uses similar min-cost matching
process to construct an aggregation tree that takes the data
fusion cost into consideration. Other works, such as SMT
(Steiner Minimum Tree) and MST (Multiple Shared Tree)
for multicast algorithms which can be used in data aggregation
[17, 19, 30], build a structure in advance for data aggregation
. In addition to their complexity and overhead, they
are only suitable for networks where the sources are known
in advance. Therefore they are not suitable for networks with
mobile events.
Moreover, a fixed tree structure might have long stretch between
adjacent nodes. The stretch of two nodes u and v in a
tree T on a graph G is the ratio between the distance from
node u to v in T and their distance in G.
Long stretch
implies packets from adjacent nodes have to be forwarded
many hops away before they can be aggregated. This problem
has been studied as MSST (Minimum Stretch Spanning
Tree) [35] and MAST (Minimum Average Stretch Spanning
Tree) [36]. They are also NP-hard problems, and it has
been shown that for any graph, the lower bound of the average
stretch is O(log(n)) [36], and it can be as high as O(n) for the worst
case [37]. Even for a grid network, it has been shown that the lower
bound for the worst case is O(√n) [36]. [38] proposes a polynomial
time algorithm to construct a group-independent spanning tree that
can achieve O(log(n)) stretch. However the delay in [38] is high in large
networks if only nodes near the sink are triggered.
[20] is the first proposed structure-less data aggregation
protocol that can achieve high aggregation without incurring
the overhead of structure approaches. [20] uses anycast to
forward packets to one-hop neighbors that have packets for
aggregation. It can efficiently aggregate packets near the
sources and effectively reduce the number of transmissions.
However, it does not guarantee the aggregation of all packets
from a single event. As the network grows, the cost of
forwarding packets that were unable to be aggregated will
negate the benefit of the energy savings resulting from eliminating
the control overhead.
In order to get benefit from structure-less approaches even
in large networks, scalability has to be considered in the design
of the aggregation protocol. In this paper, we propose
a scalable structure-less protocol, ToD, that can achieve efficient
aggregation even in large networks. ToD uses a semi-structured
approach that neither has the long stretch problem of a fixed structure
nor incurs the structure maintenance overhead of a dynamic structure,
and it further improves the performance of the structure-less approach.
Scalable Data Aggregation
As described before, the goal of our protocol is to achieve
aggregation of data near the sources without explicitly constructing
a structure for mobile event scenarios. Aggregating
packets near the sources is critical for reducing the number of
transmissions. Aggregating without using an explicit structure
reduces the overhead of construction and maintenance
of the structure. In this section, we propose a highly scalable
approach that is suitable for very large sensor networks.
Our protocol is based on the Data Aware Anycast (DAA)
and Randomized Waiting (RW) approaches proposed in [20].
(In the rest of this paper, we use DAA or Data Aware Anycast
to refer to the combination of these two approaches.)
There are two phases in our protocol: DAA and Dynamic
Forwarding. In the first phase, packets are forwarded
and aggregated to a selected node, termed aggregator, using
DAA. In DAA [20], packets were destined to the sink,
whereas in our approach they are destined to an aggregator.
In the second phase, the leftover un-aggregated or partially
aggregated packets are forwarded on a structure, termed Tree
on DAG (ToD), for further aggregation. First we briefly describe
the DAA protocol proposed in [20].
3.1 Data Aware Anycast [20]
Data Aware Anycast is a structure-less protocol that aggregates
packets by improving the Spatial and Temporal convergence.
Spatial convergence and temporal convergence
during transmission are two necessary conditions for aggregation
. Packets have to be transmitted to the same node
at the same time to be aggregated. Structured approaches
achieve these two conditions by letting nodes transmit packets
to their parents in the aggregation tree and parents wait
for packets from all their children before transmitting the aggregated
packets. Without explicit message exchanges in
structure-less aggregation, nodes do not know where they
should send packets to and how long they should wait for
aggregation. Therefore improving spatial or temporal convergence
is critical for improving the chance of aggregation.
Spatial Convergence is achieved by using anycast to forward
packets to nodes that can achieve aggregation. Anycast
is a routing scheme whereby packets are forwarded to
the best one, or any one, of a group of target destinations
based on some routing metrics. By exploiting the nature
of wireless radio transmission in sensor networks where all
nodes within the transmission range can receive the packet,
nodes are able to tell if they can aggregate the transmitting
packet, and the anycast mechanism allows the sender to forward
packets to any one of them. Transmitting packets to
nodes that can achieve aggregation reduces the number of
remaining packets in the network, thereby reducing the total
number of transmissions.
Temporal Convergence is used to further improve the aggregation
. Randomized Waiting is a simple technique for
achieving temporal convergence, in which nodes wait for a
random delay before transmitting. In mobile event triggered
networks, nodes are unable to know which nodes are triggered
and have packets to transmit in advance. Therefore
nodes can not know if they should wait for their upstream
nodes and how long they should wait for aggregation. A
naive approach of using a fixed delay depending on the distance
to the sink may make the detection delay very high.
For example, as shown in Fig. 1, nodes closer to the sink
must wait longer for packets from possible upstream nodes
if fixed waiting time is employed. When events are closer to
the sink, the longer delay chosen by nodes closer to the sink
is unnecessary. Random delay is used to avoid long delay in
large networks while increasing the chance of aggregation.
Figure 1. Longer delay is unnecessary but is inevitable using fixed
delay when the event is closer to the sink. Nodes closer to the sink have
longer delay because they have to wait for packets from possible
upstream nodes.
When a node detects an event and generates a packet for
reporting, it picks a random delay between 0 and a maximum delay
value before transmitting, where the maximum delay is a network
parameter. After delaying the packet, the node
broadcasts an RTS packet containing an Aggregation ID.
In [20], the timestamp is used as the Aggregation ID, which
means that packets generated at the same time can be aggregated
. When a node receives an RTS packet, it checks if it
has packets with the same Aggregation ID. If it does, it has
higher priority for replying with a CTS than nodes that do
not have packets for aggregation. The priority is decided by
the delay before replying with a CTS packet. Nodes with higher priority
reply with a CTS after a shorter delay. If a node overhears any
traffic before transmitting its CTS packet, it cancels the CTS
transmission in order to avoid collision of multiple CTS responses
at the sender. Therefore, nodes can send their packets
for aggregation as long as at least one of its neighbors has
a packet with the same Aggregation ID. More details and extensions
of the DAA approach can be found in [20].
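The receiver-side rule can be sketched as follows; the delay constants and helper functions are illustrative assumptions, not parameters defined by the DAA protocol itself.

#include <stdbool.h>
#include <stdint.h>

#define CTS_DELAY_CAN_AGGREGATE_US    200   /* assumed high-priority CTS delay */
#define CTS_DELAY_CANNOT_AGGREGATE_US 1000  /* assumed low-priority CTS delay  */

bool have_packet_with_id(uint32_t aggregation_id);  /* placeholder: local queue lookup */
void schedule_cts(uint32_t delay_us);               /* placeholder: MAC-level timer    */

/* Invoked when an RTS carrying an Aggregation ID is overheard.  Nodes that
   can aggregate reply earlier, so the sender effectively anycasts the packet
   toward a neighbor holding a packet with the same Aggregation ID.  If other
   traffic is overheard before the scheduled CTS fires, the CTS is cancelled
   elsewhere to avoid colliding replies at the sender. */
static void on_rts_received(uint32_t aggregation_id)
{
    if (have_packet_with_id(aggregation_id))
        schedule_cts(CTS_DELAY_CAN_AGGREGATE_US);
    else
        schedule_cts(CTS_DELAY_CANNOT_AGGREGATE_US);
}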
However, DAA can not guarantee that all packets will be
aggregated into one packet. When more packets are transmitted
from sources to the sink without aggregation, more
energy is wasted. This effect becomes more severe when the
network is very large and the sources are very far away from
the sink. Therefore, instead of forwarding packets directly to
the sink when DAA can not aggregate packets any more, we
propose the use of Dynamic Forwarding for further packet
aggregation. We now describe the Dynamic Forwarding and
the construction of ToD.
3.2 Dynamic Forwarding over ToD
Figure 2. Fixed tree structure for aggregation can have long distance
(link-stretch) between adjacent nodes, as in the case of nodes triggered
by event B. In this example we assume that nodes in the range of event
B are within transmission range of each other.
We adopt the pre-constructed structure approach in the
second phase to achieve further aggregation. Having a structure
to direct all packets to a single node is inevitable if
we want to aggregate all packets into one. Constructing a
structure dynamically with explicit message exchanges incurs
high overhead. Therefore we use an implicitly computed
pre-constructed structure that remains unchanged for
relatively long time periods (several hours or days). However
, using a fixed structure has the long stretch problem as
described in Section 2. Take Fig. 2 as an example of pre-computed
tree structure where gray nodes are the sources.
The fixed tree structure works well if the nodes that generate
packets are triggered by event A because their packets can be
aggregated immediately on the tree. However, if the nodes
that generate packets are triggered by event B, their packets
can not be aggregated even if they are adjacent to each other.
Therefore we design a dynamic forwarding mechanism over
ToD, to avoid the problem of long stretch.
3.2.1 ToD in One Dimensional Networks
Figure 3. We illustrate the ToD construction from one row's point of
view to simplify the discussion.
For illustrating the concept of ToD, we first describe the
construction of ToD for a 1-D (a single row of nodes) network
, as shown in Fig. 3. We assume that the nodes can communicate
with their adjacent nodes in the same row through
one hop.
We define a cell as a square whose side length is
greater than the maximum diameter of the area that an event
can span. The network is divided into cells. These cells are
grouped into clusters, called F-clusters (First-level clusters).
The size of the F-clusters must be large enough to cover the
cells an event can span, which is two when we only consider
1-D cells in the network. All nodes in F-clusters send their
packets to their cluster-heads, called F-aggregators (First-level
aggregators). Note that nodes in the F-cluster can be
multiple hops away from the F-aggregator. The formation
of the clusters and the election of the aggregators are discussed
later in Section 3.2.3. Each F-aggregator then creates
a shortest path to the sink. Therefore the structure is a shortest
path tree where the root is the sink and the leaves are
F-aggregators. We call this tree an F-Tree. Fig. 4(a) shows
the construction of the F-Tree.
In addition to the F-clusters, we create the second type
of clusters, S-clusters (Second-level clusters) for these cells.
The size of an S-cluster must also be large enough to
cover all cells spanned by an event, and it must interleave
with the F-clusters so it can cover adjacent cells in different
F-clusters. Each S-cluster also has a cluster-head, S-aggregator
, for aggregating packets. Each S-aggregator creates
a shortest path to the sink, and forms a second shortest
path tree in the network. We call it S-Tree. The illustration of
an S-Tree is shown in Fig. 4(b). For all sets of nearby cells
that can be triggered by an event, either they will be in the
same F-cluster, or they will be in the same S-cluster. This
property is exploited by Dynamic Forwarding to avoid the
long stretch problem discussed earlier.
After the S-Tree is constructed, the F-aggregators connect
themselves to the S-aggregators of S-clusters which its
F-cluster overlaps with, as shown in Fig. 4(c). For example,
in Fig. 4(c), the F-aggregator F4 connects to S-aggregators
S3 and S4 because its F-cluster overlaps with S-cluster 3 and
4. Thus, the combination of the F-Tree and S-Tree creates a
Directed Acyclic Graph, which we refer to as the ToD (Tree on DAG).
Figure 4. The construction of F-Tree, S-Tree, and ToD. (a) Leaf nodes are cells. Pairs of neighbor cells define F-clusters. Each F-cluster has an
F-aggregator, and F-aggregators form the F-Tree. (b) Each pair of adjacent cells not in the same F-cluster forms an S-cluster. Each S-cluster has an
S-aggregator, and S-aggregators form the S-Tree. Nodes on the network boundary do not need to be in any S-cluster. (c) Each F-aggregator connects
to the two S-aggregators of the S-clusters which its F-cluster overlaps with. This structure is called the Tree on DAG, or ToD. An F-aggregator in ToD uses Dynamic
Forwarding to forward packets to the root, or through an S-aggregator in the S-Tree, based on where the packets come from.
Nodes first use the Data Aware Anycast (DAA) approach
to aggregate as many packets as possible. When no further
aggregation can be achieved by DAA, nodes forward their
packets to the F-aggregator in its F-cluster. If an event only
triggers nodes within a single F-cluster, its packets can be
aggregated at the F-aggregator, and be forwarded to the sink
using the F-Tree. However, in case the event spans multiple
F-clusters, the corresponding packets will be forwarded to
different F-aggregators. As we assumed that the event size
is not larger than the size of a cell, an event on the boundary
of F-clusters will only trigger nodes in cells on the boundary
of the F-clusters. By the construction of S-clusters, adjacent
cells on the boundary of F-clusters belong to the same S-cluster
. Thus, F-aggregators can exploit the information collected
from received packets to select the S-aggregator that
is best suited for further aggregation. This information is obtained
from the source of traffic that can be encoded in the
packets. Often such information is readily available in the
packet. Otherwise, 4 extra bits can be used to indicate which
cell the packet comes from.
Consider the example in Fig. 4(c). Since the maximum
number of cells an event can span is two, either these two
cells are in the same F-cluster, or they are in the same S-cluster
. If they are in the same F-cluster, their packets can
be aggregated at the F-aggregator. For example, if the event
spans A and B, F1 knows that no other F-cluster has packets
for aggregation, and it can forward the packets using
the F-Tree. If the event spans two cells that are in different
F-clusters, the two F-aggregators in the two F-clusters
will receive packets only from one of their cells. The F-aggregators
then conjecture which F-cluster might also have
packets based on which cells the packets come from. For example
, if the event spans C and D, F4 will only receive packets
from C. Therefore F4 can know either the event happens
only in C, or the event spans C and D. Consequently, F4 can
forward packets to S4, the S-aggregator of its overlapping S-cluster
covering C. Also F5 will forward its packets to S4 if
packets only come from D. Therefore these packets can be
aggregated at S4.
Note that we do not specifically assign cells on the boundary
of the network to any S-cluster. They do not need to be in
any S-cluster if they are not adjacent to any other F-cluster,
or they can be assigned to the same S-cluster as its adjacent
cell.
The ToD for the one dimensional network has the
following property.
Property 1. For any two adjacent nodes in ToD in a one-dimensional
network, their packets will be aggregated either
at a first level aggregator, or will be aggregated at a second
level aggregator.
Proof. There are only three possibilities when an event triggers
nodes to generate packets. If only nodes in one cell are
triggered and generate the packets, their packets can be aggregated
at one F-aggregator since all nodes in a cell reside
in the same F-cluster, and all packets in an F-cluster will be
aggregated at the F-aggregator.
If an event triggers nodes in two cells, and these two cells
are in the same F-cluster, the packets can be aggregated at
the F-aggregator as well.
If an event triggers nodes in two cells, but these two
cells are in different F-clusters, they must reside in the same
S-cluster because S-clusters and F-clusters are interleaved.
Moreover, packets in one F-cluster will only originate from
the cell that is closer to the other F-cluster that also has packets
. Therefore the F-aggregator can forward packets to the
S-aggregator for aggregation accordingly, and packets will
be aggregated at the S-aggregator.
Since the cell is not smaller than the maximum size of an
event, it is impossible for an event to trigger more than two
cells, and this completes the proof.
3.2.2 ToD in Two Dimensional Networks
Section 3.2.1 only demonstrates the construction for one
row of nodes to illustrate the basic idea of dynamic forwarding
, and it works because each cell is only adjacent to one (or
none, if the cell is on the boundary of the network) of the F-clusters
. Therefore if an event spans two cells, the two cells
are either in the same F-cluster or in the same S-cluster, and
the F-aggregator can conjecture whether to forward the packets
to the S-aggregator, or to the sink directly. When we consider
other cells and F-clusters in the adjacent row, a cell on
the boundary of an F-cluster might be adjacent to multiple F-clusters
. If an event spans multiple cells, each F-aggregator
may have multiple choices of S-aggregators if the cells in
their F-cluster are adjacent to multiple F-clusters. If these F-aggregators
select different S-aggregators, their packets will
not be aggregated. However, the ideas presented in 1D networks
can be extended for the 2D networks. But instead of
guaranteeing that packets will be aggregated within two steps
as in the 1D case (aggregating either at an F-aggregator or an
S-aggregator), the ToD in 2D guarantees that the packets can
be aggregated within three steps.
We first define the cells and clusters in two dimensions.
For the ease of understanding, we use grid clustering to illustrate
the construction. As defined before, the size of a cell is
not less than the maximum size of an event, and an F-cluster
must cover all the cells that an event might span, which is
four cells in 2D grid-clustering. Therefore the entire network
is divided into F-clusters, and each F-cluster contains
four cells. The S-clusters have to cover all adjacent cells in
different F-clusters. Each F-cluster and S-cluster also has
a cluster-head acting as the aggregator to aggregate packets.
Fig. 5 shows a 5 × 5 network with its F-clusters and S-clusters.
Since the size of a cell (one side of the square cell) must
be greater than or equal to the maximum size of an event (diameter
of the event), an event can span only one, two, three, or
four cells as illustrated in Fig. 6. If the event only spans cells
in the same F-cluster, the packets can be aggregated at the
F-aggregator. Therefore we only consider scenarios where
an event spans cells in multiple F-clusters.
Figure 5. Grid-clustering for a two-dimensional network. (a) The network
is divided into 5 × 5 F-clusters. (b) Each F-cluster contains four
cells. For example the F-cluster A in (a) contains cells A1, A2, A3, and A4.
(c) The S-clusters have to cover all adjacent cells in different F-clusters.
Each S-cluster contains four cells from four different F-clusters.
Figure 6. The possible numbers of cells an event may span in 2 × 2
cells, which are one, two, three, and four from left to right. The four
cells in each case are any instance of four cells in the network. They
may be in the same F-cluster or different F-clusters.
Fig. 7 shows four basic scenarios that an F-aggregator
may encounter when collecting all packets generated in its
F-cluster. All other scenarios are only different combinations
of these four scenarios. If packets originate from three
or four cells in the same F-cluster, the F-aggregator knows
that no other nodes in other F-clusters have packets, and it
can forward the packets directly to the sink. If only one or
two cells generate packets, it is possible that other F-clusters
also have packets. We assume that the region spanned by an
event is contiguous. So simultaneous occurrence of scenarios
of (a) and (c), or (b) and (d), is impossible in the F-cluster.
However, such scenarios are possible in presence of losses in
a real environment where packets from third or fourth cluster
are lost. In such cases the F-aggregator can just forward the
packets directly to the sink because no other F-cluster will
have packets from the same event.
Figure 7. All possible scenarios from an F-aggregator's point of view.
Each case shows 3 × 3 F-clusters, and the aggregator of the center F-cluster
is making the decision. The dark grayed squares are cells that
generate packets, and the light grayed squares represent the corresponding
S-cluster of the dark grayed cells.
When the F-aggregator collects all packets within its cluster
, it knows which cells the packets come from and forwards
the packets to best suited S-aggregator for further aggregation
. For example, if the packets only come from one cell
as in case (a) in Fig. 7, the F-aggregator can forward the
packet to the S-aggregator of the S-cluster that covers that
cell. However, if packets come from two cells in an F-cluster,
the two cells must be in different S-clusters. For example,
the case in Fig. 8, where the F-aggregator of F-cluster X receives
packets from two cells, is the combination of cases (a) and (b)
in Fig. 7. It is possible that the F-aggregator of F-cluster Y
may receive packets from cells as in Fig. 7 (c), (d), or both.
Since the F-aggregator of F-cluster X does not know which
case the F-aggregator of F-cluster Y encounters, it does not
know which S-aggregator to forward packets to. To guarantee
the aggregation, the F-aggregator of F-cluster X forwards
the packet through the two S-aggregators that cover cells C1 and
C2, therefore packets can meet at least at one S-aggregator.
If both F-aggregators receive packets from two cells in its
cluster, to guarantee that the packets can meet at least at
one S-aggregator, these two F-aggregators must select the
S-aggregator deterministically. The strategy is to select the
S-aggregator that is closer to the sink. If the packets meet
at the first S-aggregator, it does not need to forward packets
to the second S-aggregator. The S-aggregator only forwards
packets to the second S-aggregator if the packets it received
only come from two cells in one F-cluster. We will present a
simplified construction later (in Section 3.2.3) for the selection
of S-aggregators.
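The forwarding decision made by an F-aggregator once it has collected its cluster's packets can be sketched as follows; the helpers that map source cells to S-clusters and compare distances to the sink are placeholders, not the actual ToD implementation.

typedef struct { int id; } scluster_t;

int        num_source_cells(void);              /* how many cells of this F-cluster contributed packets */
scluster_t scluster_of_source(int i);           /* S-cluster covering the i-th contributing cell        */
int        closer_to_sink(scluster_t a, scluster_t b);  /* nonzero if a is closer to the sink than b     */
void       forward_to_sink(void);
void       forward_via(scluster_t first_hop);   /* send toward an S-aggregator for further aggregation  */

static void f_aggregator_decide(void)
{
    int n = num_source_cells();
    if (n >= 3) {
        /* Three or four cells triggered: no other F-cluster can hold packets
           of the same event, so forward directly to the sink. */
        forward_to_sink();
    } else if (n == 1) {
        /* One boundary cell: aggregate further at the S-cluster covering it. */
        forward_via(scluster_of_source(0));
    } else {
        /* Two cells, necessarily in two different S-clusters: route through
           the S-aggregator closer to the sink first; the packet continues to
           the second S-aggregator only if it was not aggregated there. */
        scluster_t a = scluster_of_source(0), b = scluster_of_source(1);
        forward_via(closer_to_sink(a, b) ? a : b);
    }
}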
Figure 8. The F-aggregators have two choices for S-aggregators if
they receive packets from two cells.
To guarantee that the packets can meet at least at one S-aggregator
, the second S-aggregator must wait longer than
the first S-aggregator. Therefore, if the S-aggregator receives
packets from only one cell, it waits longer to wait for possible
packets forwarded by the other S-aggregator because it could
be the second S-aggregator of the other F-aggregator. Fig. 9
shows an example of one F-aggregator sending packets to the
first S-aggregator and then the second S-aggregator, while
the other F-aggregator sends packets directly to the second
S-aggregator. As long as the second S-aggregator waits suf-ficiently
longer than the first S-aggregator the packets can be
aggregated at the second S-aggregator.
Figure 9. Depending on how many cells generate packets in its F-cluster,
one F-aggregator sends packets to two S-aggregators while the
other F-aggregator sends packets to only one S-aggregator. We assume
that the sink is located at the bottom-left of the network.
The ToD for two-dimensional networks has the following property.
Property 2. For any two adjacent nodes in ToD, their packets
will be aggregated at the F-aggregator, at the 1st S-aggregator,
or at the 2nd S-aggregator.
Proof. First we define the F-aggregator X as the aggregator
of F-cluster X and S-aggregator I as the aggregator of S-cluster
I, and so forth.
For packets generated only in one F-cluster, their packets
can be aggregated at the F-aggregator since all packets in the
F-cluster will be sent to the F-aggregator.
If an event triggers nodes in different F-clusters, there are
only three cases. First, only one cell in each F-cluster generates
packets. In this case, all cells having packets will be
in the same S-cluster since the adjacent cells in different F-clusters
are all in the same S-cluster. Therefore their packets
can be aggregated at the S-aggregator.
Second, the event spans three cells, C1, C2, and C3, and
two of them are in one F-cluster and one of them is in the
other F-cluster. Without loss of generality, we assume that
C1 and C2 are in the same F-cluster, F-cluster X , and C3
is in the other F-cluster, F-cluster Y . Moreover C3 must be
adjacent to either C1 or C2, and let us assume that it is C2.
From the ToD construction we know that C2 and C3 will
be in the same S-cluster, S-cluster II, and C1 will be in another
S-cluster, S-cluster I. Fig. 8 illustrates one instance
of this case. First the F-aggregator X will aggregate packets
from C1 and C2 because they are in the same F-cluster,
and forward the aggregated packets through S-aggregator I
to S-aggregator II, or the other way around, because C1
is in S-cluster I and C2 is in S-cluster II. F-aggregator Y
will aggregate packets from C3 and forward packets to S-aggregator
II because C3 is in S-cluster II. Because packets
of F-aggregator Y only come from C3, they will have
longer delay in S-aggregator II in order to wait for packets
being forwarded through the other S-aggregator. In the mean
time, if F-aggregator X forwards packets to S-aggregator II
first, the packets can be aggregated at S-aggregator II. If
F-aggregator X forwards packets to S-aggregator I first, S-aggregator
I will forward packets to S-aggregator II with
shorter delay because the packets come from two cells in
one F-cluster, therefore their packets can also be aggregated
at S-aggregator II.
In the third case, the event spans four cells. Two of them
will be in one F-cluster and the other two will be in the other
F-cluster. Without loss of generality, we can assume that
cells C1 and C2 are in F-cluster X and cells C3 and C4 are
in F-cluster Y , and C1 and C3 are adjacent, C2 and C4 are
adjacent. From the ToD construction, C1 and C3 will be
in one S-cluster, S-cluster I, and C2 and C4 will be in the
other S-cluster, S-cluster II. Because from S-aggregator I
and II, F-aggregator X and Y choose one that is closer to the
sink as the first S-aggregator, they will choose the same S-aggregator
. Therefore their packets can be aggregated at the
first S-aggregator, and this completes the proof.
Though in this section we assume that the size of an event
is smaller than the size of the cell, our approach can still work
correctly and perform more efficiently than DAA even if the
size of the event is not known in advance. This is because
the nodes will use Dynamic Forwarding over ToD only at
second phase where the aggregation by DAA is no longer
achievable. Therefore at worst our approach just falls back to
DAA. Section 5.1 shows that in experiments, ToD improves
the performance of DAA by 27% even if the size of the event
is greater than the size of a cell.
3.2.3 Clustering and Aggregator Selection
In this paper we use grid-clustering to construct the cells
and clusters. Although other clustering methods, such as
clustering based on hexagonal or triangular tessellation, can
also be used, we do not explore them further in this paper.
In principle any clustering would work as long as it satisfies
the following conditions. First, the size of the cell must
be greater than or equal to the maximum size of an event.
Second, the F-cluster and S-cluster must cover the cells that
an event may span, and the S-cluster must cover the adjacent
cells in different F-clusters.
As opposed to defining an arbitrary clustering, using grid-clustering
has two advantages. First, the size of the grid can
be easily determined by configuring the grid size as a network
parameter. Second, as long as the geographic location
is known to the node, the cell, F-cluster and S-cluster it belongs
to can be determined immediately without any communication
. Geographic information is essential in sensor
networks, therefore we assume that sensor nodes know their
physical location by configuration at deployment, a GPS device
, or localization protocols [39, 40]. As a consequence,
all the cells, F-clusters, and S-clusters can be implicitly constructed
.
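Because the grids are defined implicitly, a node can compute its cell, F-cluster, and S-cluster indices purely from its own coordinates, as in the sketch below. The cell side length is a configured parameter; the value used here is only an example, and nonnegative coordinates are assumed.

#define CELL_SIDE_M 50.0   /* example cell side length; must be >= the maximum event diameter */

struct grid_id { int cell_x, cell_y, fcl_x, fcl_y, scl_x, scl_y; };

static struct grid_id locate(double x, double y)
{
    struct grid_id g;
    g.cell_x = (int)(x / CELL_SIDE_M);
    g.cell_y = (int)(y / CELL_SIDE_M);
    /* F-clusters tile the plane with 2 x 2 cells. */
    g.fcl_x = g.cell_x / 2;
    g.fcl_y = g.cell_y / 2;
    /* S-clusters are also 2 x 2 cells but shifted by one cell in each
       direction, so each one covers adjacent cells of different F-clusters. */
    g.scl_x = (g.cell_x + 1) / 2;
    g.scl_y = (g.cell_y + 1) / 2;
    return g;
}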
After the grids are constructed, nodes in an F-cluster and
S-cluster have to select an aggregator for their cluster. Because
the node that acts as the aggregator consumes more
energy than other nodes, nodes should play the role of aggregator
in turn in order to evenly distribute the energy consumption
among all nodes. Therefore the aggregator selection
process must be performed periodically. However
the frequency of updating the aggregator can be very low,
from once in several hours to once in several days, depending
on the capacity of the battery on the nodes. Nodes can
elect themselves as the cluster-head with probability based
on metrics such as the residual energy, and advertise to all
nodes in its cluster. In case two nodes select themselves as
the cluster-head, the node-id can be used to break the tie.
The other approach is that the nodes in a cluster use a
hash function to hash the current time to a node within that
cluster, and use that node as the aggregator. Nodes have to
know the address of all nodes in its F-cluster and sort them
by their node id. A hash function hashes the current time
to a number k from 1 to n where n is the number of nodes
in its cluster, and nodes use the k
th
node as the aggregator.
Because the frequency of changing the aggregator could be
low, the time used could be in hours or days, therefore the
time only needs to be coarsely synchronized, and the cluster-head
election overhead can be avoided.
However, the Dynamic Forwarding approach requires that
each F-aggregator knows the location of S-aggregators of S-clusters
that its F-cluster overlaps with. Therefore each time
the S-aggregator changes, it has to notify the F-aggregators.
To simplify the cluster-head selection process and avoid the
overhead of propagating the update information, we delegate
the role of S-aggregators to F-aggregators. Instead of selecting
a node as the S-aggregator and changing it periodically
for an S-cluster, we choose an F-cluster, called Aggregating
Cluster, for each S-cluster, and use the F-aggregator of the
Aggregating Cluster as its S-aggregator. The Aggregating
Cluster of an S-cluster is the F-cluster which is closest to the
sink among all F-clusters that the S-cluster overlaps with,
as shown in Fig. 10(a), assuming that the sink is located
on the bottom-left corner. Therefore as the F-aggregator
changes, the corresponding S-aggregator changes as well.
When an F-aggregator forwards a packet to an S-aggregator,
it forwards the packet toward the Aggregating Cluster of
that S-aggregator. When the packet reaches the Aggregating
Cluster, nodes in that F-cluster know the location of its
F-aggregator and can forward the packet to it. Therefore no
aggregator update has to be propagated to neighboring clusters
.
Figure 10. (a) The S-cluster selects the F-cluster closest to the sink
among its overlapped F-clusters, assuming that the sink is located at
the bottom-left corner of the network. (b) The white F-aggregator selects
the F-cluster containing the gray F-aggregator as the aggregating cluster.
Now the role of S-aggregators is passed on to the F-aggregators
, and the F-cluster selected by an S-aggregator
is the one closer to the sink. When an F-aggregator wants to
forward packets to both S-aggregators, it selects the F-cluster
that is closer to itself as the aggregating cluster of the first
S-aggregator (could be itself) to reduce the number of transmissions
between aggregators, as shown in Fig. 10(b). This
selection does not affect the property that packets will eventually
be aggregated at one aggregator because the S-clusters
that cover the cells in two F-clusters are the same, therefore
the aggregating cluster selected by two F-aggregators will be
the same.
The benefits of using this approach are five-fold. First,
no leader election is required for S-clusters, which eliminates
the leader election overhead. Second, nodes only
need to know the F-aggregator of their F-cluster, which makes
this approach very scalable. Third, when the F-aggregator
changes, the S-aggregator changes as well, but the change
does not need to be propagated to other F-clusters or S-clusters.
Fourth, if nodes choose the aggregator by hashing the
current time to get the node id of the aggregator in their cluster,
only nodes within the same F-cluster need to be synchronized
with each other. And last, since the Aggregating Clusters
of S-clusters are statically computed, there is no packet
overhead for computing the Aggregating Clusters.
Performance Analysis
In this section we show that the maximum distance between
any two adjacent nodes in ToD only depends on the
size of the cells, and is independent of the size of the network
. We ignore the cost from the aggregator to the sink
since for perfect aggregation, only one packet will be forwarded
to the sink from the aggregator, therefore the cost is
comparatively small. Compared to the lower bound O(√n)
[36] of the grid network for a fixed tree, ToD can achieve a
constant factor even in the worst case.
Figure 11.
The worst case scenario for ToD.
The worst case in ToD is illustrated in Fig. 11 where only
two adjacent nodes, u and v, in the corner of two different
F-clusters generate packets, and their F-aggregators, f_u and
f_v, are located at the opposite corner. We assume a dense
deployment of sensor nodes, therefore the distance between
two nodes can be transferred to the cost of transmitting a
packet between these nodes. Fig. 11 is the worst case since if
more nodes are generating packets in one cluster, it will only
amortize the cost of sending packets from the F-aggregator
to the S-aggregator, and more nodes in multiple F-clusters
generating packets will only lower the average distance.
We assume that the length of one side of the cell is one unit, and
two nodes are adjacent if their distance is less than a unit of
distance. Therefore in Fig. 11 the distance that packets from
u and v have to be forwarded before they are aggregated at
s is the sum of the distances from u to f_u to s and from v to f_v
to s, which is (2√2 + 4√2) + (2√2 + 4) = 8√2 + 4.
In the optimal approach, only one transmission is
required because u and v are adjacent, whereas in ToD, 8√2 + 4
transmissions are required in the worst case.
However, since we use DAA as the aggregation technique,
packets from adjacent nodes will be aggregated immediately
. Therefore for the worst case to happen, the distance
between u and v must be at least 2 units, and our protocol
has 4√2 + 2 ≈ 7.66 times the number of transmissions of the
optimal approach. The upper bound is only dependent on the size of
a cell, and the size of the cell is dependent on the size of an
event. This value is independent of the size of the network
and therefore is very suitable for large-scale networks.
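A quick numeric check of the figures quoted in this analysis (using the stated path lengths, not a re-derivation of the geometry):

    import math

    sqrt2 = math.sqrt(2)
    # Worst-case forwarding distance in ToD: (u -> f_u -> s) + (v -> f_v -> s).
    tod_worst = (2 * sqrt2 + 4 * sqrt2) + (2 * sqrt2 + 4)
    # DAA aggregates packets of adjacent nodes immediately, so the worst case
    # requires u and v to be at least 2 units apart, which gives the ratio below.
    print(round(tod_worst, 2), round(tod_worst / 2, 2))   # 15.31 7.66 (= 4*sqrt(2) + 2)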
On average, the number of transmissions will be much
less than 4√2 + 2 because, first, typically there will be
many nodes generating packets. Second, the distance between
a node and its F-aggregator is not always 2√2,
and the distances between the F-aggregators and the S-aggregator
are shorter, too. Third, the DAA approach can efficiently
aggregate packets from adjacent nodes thereby further
reducing the number of transmissions. Therefore we
expect the average distance for nodes generating packets to
be much less than the worst case.
Performance Evaluation
In this section we use experiments and simulations to
evaluate the performance of our semi-structured approach
and compare it with other protocols.
5.1
Testbed Evaluation
We conduct experiments with 105 Mica2-based nodes on
a sensor testbed. The testbed consists of 105 Mica2-based
motes and each mote is hooked onto a Stargate. The Stargate
is a 32-bit hardware device from CrossBow [41] running
Linux, which has an Ethernet interface and a serial port
for connecting a mote. The Stargates are connected to the
server using wired Ethernet. Therefore we can program these
motes and send messages and signals to them through Stargates
via Ethernet connection. The 105 nodes form a 7 × 15
grid network with 3 feet spacing. The radio signal using the default
transmission power covers a lot of nodes in the testbed.
In our experiments we do not change the transmission power
but limit nodes only to receive packets from nodes within
two grid neighbors away, i.e. each node has maximum 12
neighbors.
We implement an Anycast MAC protocol on top of the
Mica2 MAC layer. The Anycast MAC layer has its own
backoff and retransmission mechanisms and we disable the
ACK and backoff of the Mica2 MAC module. The Anycast
MAC implements the RTS-CTS-DATA-ACK for anycast
. An event is emulated by broadcasting a message on
the testbed to the Stargates, and the Stargates send the message
to the Mica2 nodes through serial port. The message
contains a unique ID distinguishing packets generated at different
time.
When a node is triggered by an event, an event report is
generated. If the node has to delay its transmission, it stores
the packet in a report queue. Both the application layer and
Anycast MAC layer can access the queue, therefore they can
check if the node has packets for aggregation, or aggregate
the received packets to packets in the queue.
First we evaluate the following protocols on the testbed
and the code is available on-line at
http://www.cse.ohio-state.edu/fank/research/tod.tar.gz:
Dynamic Forwarding over ToD (ToD). The semi-structured
approach we proposed in this paper. DAA
is used to aggregate packets in each F-cluster, and aggregated
packets are forwarded to the sink on ToD.
Data Aware Anycast (DAA). The structure-less approach
proposed in [20].
Shortest Path Tree (SPT). Nodes send packets to the
sink through the shortest path tree immediately after
sensing an event. Aggregation is opportunistic and happens
only if two packets are at the same node at the
same time. The shortest path tree is constructed immediately
after the network is deployed. A message is
broadcast from the sink and flooded into the network to
create a shortest path tree from all nodes to the sink.
Shortest Path Tree with Fixed Delay (SPT-D) Same
as the SPT approach, but nodes delay their transmission
according to their height in the tree to wait for packets
from their children.
Due to the scale of the testbed, in ToD we only divide the
network into two F-clusters, which forces the smallest cell to
have only 9 sensor nodes. However we do not limit the size
of an event to be smaller than the cell size. The event size is
larger than the cell size in all following experiments.
We use the normalized number of transmissions as the
metric to compare the performance of these protocols. The
normalized number of transmissions is the average number
of transmissions performed in the entire network to deliver
one unit of useful information from sources to the sink. It
can be converted to the normalized energy consumption if
we know the sending and receiving cost of one transmission,
therefore the energy spent on data collection for one packet
can be derived. We do not consider energy consumption on
idle listening here since all nodes are fully active for all protocols
in the experiments and simulations, and the idle energy
consumption would be similar for all protocols. To reduce
the energy consumption on idle listening, various duty
cycling protocols have been proposed. However, due to the
page limitation, we do not describe how to integrate with
those works here.
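For concreteness, the metric can be computed as in the following sketch; the function name and arguments are our own:

    def normalized_transmissions(total_transmissions, units_of_information_delivered):
        # Average number of transmissions performed in the whole network to deliver
        # one unit of useful information from the sources to the sink (lower is better).
        if units_of_information_delivered == 0:
            raise ValueError("no useful information reached the sink")
        return total_transmissions / units_of_information_delivered

    print(normalized_transmissions(1380, 600))   # 2.3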
Fig. 12 shows the normalized number of transmissions
for different event sizes. We fixed the location of the event
and vary its diameter from 12 ft to 36 ft where nodes within
two grid-hops to six grid-hops of the event will be triggered
respectively and send packets to the sink located at one corner
of the network. We use 6 seconds as the maximum delay
for all protocols except SPT. For event sizes less than 12 ft,
too few nodes are triggered (fewer than five), and
all triggered nodes are within transmission range. Data aggregation
is not very interesting in such a scenario, therefore we
do not evaluate it. In fact, DAA would perform best there since all
packets can be aggregated, because all triggered nodes are
within transmission range of each other.
All protocols have better performance when the size of
the event increases because packets have more chances of being
aggregated. ToD performs best among all protocols in all
scenarios. This shows that DAA can efficiently achieve early
aggregation and the Dynamic Forwarding over ToD can effectively
reduce the cost of directly forwarding unaggregated
packets to the sink in DAA. In SPT-D, when the event size
is smaller, the long stretch effect is more significant than in
the larger event scenarios. When the event size is large, for example
when two-thirds of the nodes in the network are triggered (the
diameter of the event is 36 feet), most of the packets can be
aggregated to their parents with one transmission. This indicates
that in applications where most nodes are transmitting,
a fixed structure such as SPT-D is better, but when only
a small subset of nodes is transmitting, its performance
degrades because of the long stretch problem.
We notice that the variance of some results in SPT and
SPT-D is very high. For example, when the event size is 12
feet in diameter, the maximum normalized number of transmissions
in SPT-D is 3.41, and the minimum value is 2.41.
Figure 12.
The normalized number of transmissions for different
event sizes from experiments on 105 sensors.
By tracing through the detailed experiment logs we found that the
high variance is caused by the different shortest path trees.
The tree is re-constructed for each experiment, and therefore
may change from experiment to experiment. We found that
SPT-D always gets better performance in one tree where all
sources are under the same subtree, and performs badly in
the other tree where sources are located under two or three
different subtrees. This further supports our claims that the
long stretch problem in fixed structured approaches affects
their performance significantly.
The second experiment evaluates the performance of
these protocols for different values of maximum delay. We
vary the delay from 0 to 8 seconds, and all nodes in the
network generate one packet every 10 seconds. Fig. 13
shows the results. As we described, the performance of
the structure-based approaches heavily depends on the delay
. The SPT-D approach performs worse than ToD when the maximum
delay is less than five seconds, and its performance
improves as the delay increases. On the contrary, the performance
of ToD and DAA does not change for different delays,
which is different from results observed in [20]. We believe
that this is because with the default transmission power, a
large number of nodes are in interference range when nodes
transmit. Therefore even if nodes do not delay their transmissions
, only one node can transmit at any given time. Other
nodes will be forced to delay, which has the same effect as
the Randomized Waiting.
5.2
Large Scale Simulation
To evaluate and compare the performance and scalability
of ToD with other approaches requires a large sensor
network, which is currently unavailable in real experiments.
Therefore we resort to simulations. In this section we use the
ns2 network simulator to evaluate these protocols. Besides
ToD, DAA, and SPT, we evaluate OPT, Optimal Aggregation
Tree, to replace the SPT-D protocol.
In OPT, nodes forward their packets on an aggregation
tree rooted at the center of the event. Nodes know where to
forward packets to and how long to wait. The tree is constructed
in advance and changes when the event moves assuming
the location and mobility of the event are known.
Figure 13.
The normalized number of transmissions for different
maximum delays from experiments on 105 sensors.
Ideally only n - 1 transmissions are required for n sources.
This is the lower bound for any structure, therefore we use it
as the optimal case. This approach is similar to the aggregation
tree proposed in [4] but without its tree construction and
migration overhead. We do not evaluate SPT-D in simulation
because SPT-D is not practical in the large scale network. In
the largest simulation scenario, the network is a 58-hop network
. According to the simulations in smaller networks, SPT-D
gets the best performance when the delay of each hop is about
0.64 seconds. This makes nodes closer to the sink have about
36 seconds of delay in SPT-D, which is not advisable.
We perform simulations of these protocols on a 2000m ×
1200m grid network with 35m node separation, therefore
there are a total of 1938 nodes in the network. The data
rate of the radio is 38.4 Kbps and the transmission range of
the nodes is slightly higher than 50m. An event moves in the
network using the random way-point mobility model at the
speed of 10 m/s for 400 seconds. The event size is 400m in
diameter. The nodes triggered by an event will send packets
every five seconds to the sink located at (0, 0). The aggregation
function evaluated here is perfect aggregation, i.e. all
packets can be aggregated into one packet without increasing
the packet size.
5.3
Event Size
We first evaluate the performance for these protocols on
different number of nodes generating the packets. This simulation
reflects the performance of each protocol for different
event sizes. We study the performance for 4 mobility scenarios
and show the average, maximum, and minimum values
of the results.
Fig. 14(a) shows the result of normalized number of
transmissions. ToD improves the performance of DAA and
SPT by 30% and 85%, respectively, and is 25% higher than OPT.
However, OPT achieves the best performance by using an aggregation
tree that keeps changing as the event moves, but its overhead
is not considered in the simulation. SPT has very poor performance
since its aggregation is opportunistic. Except the
SPT, the performance of all other protocols is quite steady.
This shows that they are quite scalable in terms of the event
size.
Fig. 14(b) and (c) show the total number of transmissions
and total units of useful information received by the sink.
As observed in [20], DAA and ToD have higher number of
received packets than OPT due to the ability of structure-less
aggregation to aggregate packets early and scatter them
away from each other to reduce contention. ToD performs
better than DAA in terms of the normalized number of transmissions
because of its ability to aggregate packets at nodes
closer to the source, and thus it reduces the cost of forwarding
packets from sources to the sink.
5.4
Scalability
In this set of simulations we evaluate the scalability of our
protocol since our goal is to design a scalable data aggregation
protocol. If a protocol is not scalable, its performance
will degrade as the size of the network increases.
To evaluate the scalability of a protocol, we limit an event
to move only in a bounded region at a certain distance from
the sink to simulate the effect of different network sizes. We
limit an event to move within a 400m × 1200m rectangle,
and change the distance of the rectangle to the sink from
200m to 1400m, as shown in Fig. 15. In order to be fair
to all scenarios, we limit the event not to move closer than
200m to the network boundary such that the number of nodes
triggered by the event does not change drastically.
Figure 15.
The simulation scenario for scalability. The event is limited
to move only within a small gray rectangle in each simulation.
Fig. 16 shows the results of the scalability simulation.
The performance of ToD and OPT remains steady, and ToD
is 22% higher than OPT. This shows that ToD is quite scalable
as its performance does not degrade as the size of the
network increases. The performance of both DAA and SPT
degrades as the size of the network increases. The normalized
number of transmissions for DAA and SPT doubled
when the event moves from the closest rectangle (to the sink)
to the farthest rectangle.
Figure 16.
The simulation results for different distances from the event to the sink:
(a) normalized number of transmissions; (b) zoom-in of (a); (c) number of received packets.
Fig. 16(c) shows the number of packets received at the
sink per event. If all packets can be aggregated near the
event and forwarded to the sink, the sink will receive only
one packet. Conversely, more packets received at the sink
shows that fewer aggregations happened in the network. The
cost of forwarding more packets to the sink increases rapidly
as the size of the network increases. We can see that in both
DAA and SPT the sink receives many packets. Though the
number of packets received at the sink remains quite steady,
the total number of transmissions increases linearly as the
distance from the sources to the sink increases.
Ideally the number of received packets at sink is 1, if all
packets can be aggregated at the aggregator. However the
number of received packets at the sink is higher than 1 in ToD
and OPT.
Figure 14.
The simulation results for different event sizes: (a) normalized number of
transmissions; (b) number of transmissions; (c) unit of received information.
This is because the delay in the CSMA-based MAC
protocol cannot be accurately predicted, therefore the aggregator
might send the packet to the sink before all packets
are forwarded to it. Though the cost of forwarding the unaggregated
packets from the aggregator to the sink in ToD and
OPT also increases when the size of the network increases,
the increase is comparatively smaller than in DAA and SPT because
few packets are forwarded to the sink without aggregation.
We observe that the number of received packets at the sink
in ToD is higher when the event is closer to the sink. In our
simulation, nodes in the same F-cluster as the sink will always
use sink as the F-aggregator because we assume that
the sink is wire powered therefore there is no need to delegate
the role of aggregator to other nodes in order to evenly
distribute the energy consumption.
5.5
Cell Size
The above simulations use the maximum size of an event
as the cell size. As we described in Section 3.2.2, this ensures
that the Dynamic Forwarding can aggregate all packets at
an S-aggregator, and the cost of forwarding the aggregated
packets to the sink is minimized. However, large cell size
increases the cost of aggregating packets to the aggregator as
we use DAA as the aggregation technique in an F-cluster. In
this section we evaluate the impact of the size of a cell on the
performance of ToD.
We vary the cell size from 50m × 50m to 800m × 800m
and run simulations for three different event sizes, which are
200m, 400m, and 600m in diameter. The results are collected
from five different event mobility patterns and shown in Fig. 17.
Figure 17.
The simulation results for different cell sizes: (a) normalized number of
transmissions; (b) number of transmissions; (c) number of received packets.
When the size of cell is larger than the event size, the performance
is worse because the cost of aggregating packets
to F-aggregator increases, but the cost of forwarding packets
from S-aggregator does not change. When the size of cell
is too small, the cost of forwarding packets to sink increases
because packets will be aggregated at different F-aggregators
and more packets will be forwarded to the sink without further
aggregation. In general, when the size of the F-cluster
is small enough to only contain one node, or when the size
of the F-cluster is large enough to include all nodes in the
network, ToD just downgrades to DAA.
ToD has the best performance when the cell size is
100m × 100m (F-cluster size 200m × 200m) when the event
size is 200m in diameter. When the diameter of an event
is 400m or 600m, using 200m × 200m as the cell size has
the best performance (F-cluster size 400m × 400m). This
shows that the ToD performance can be further optimized by
selecting an appropriate cell size. Exploring the relation
between the event size and the cell size for optimization will be part
of our future work.
Conclusion
In this paper we propose a semi-structured approach that
locally uses a structure-less technique followed by Dynamic
Forwarding on an implicitly constructed packet forwarding
structure, ToD, to support network scalability. ToD avoids
the long stretch problem in fixed structured approaches and
eliminates the overhead of construction and maintenance of
dynamic structures. We evaluate its performance using real
experiments on a testbed of 105 sensor nodes and simulations
on 2000 node networks. Based on our studies we find
that ToD is highly scalable and it performs close to the optimal
structured approach. Therefore, it is very suitable for
conserving energy and extending the lifetime of large scale
sensor networks.
References
[1] W. Heinzelman, A. Chandrakasan, and
H. Balakrishnan, "Energy-Efficient Communication
Protocol for Wireless Microsensor Networks," in
Proceedings of the 33rd Annual Hawaii International
Conference on System Sciences, vol. 2, January 2000.
[2] W. Heinzelman, A. Chandrakasan, and
H. Balakrishnan, "An Application-Specific Protocol
Architecture for Wireless Microsensor Networks," in
IEEE Transactions on Wireless Communications,
vol. 1, October 2002, pp. 660-670.
[3] C. Intanagonwiwat, D. Estrin, and R. Govindan,
"Impact of Network Density on Data Aggregation in
Wireless Sensor Networks," in Technical Report
01-750, University of Southern California, November
2001.
[4] W. Zhang and G. Cao, "Optimizing Tree
Reconfiguration for Mobile Target Tracking in Sensor
Networks," in Proceedings of INFOCOM 2004, vol. 4,
March 2004, pp. 2434-2445.
[5] W. Zhang and G. Cao, "DCTC: Dynamic Convoy
Tree-based Collaboration for Target Tracking in
Sensor Networks," in IEEE Transactions on Wireless
Communications, vol. 3, September 2004, pp. 1689-1701.
[6] M. Ding, X. Cheng, and G. Xue, "Aggregation Tree
Construction in Sensor Networks," in Proceedings of
the 58th IEEE Vehicular Technology Conference,
vol. 4, October 2003, pp. 2168-2172.
[7] H. Luo, J. Luo, and Y. Liu, "Energy Efficient Routing
with Adaptive Data Fusion in Sensor Networks," in
Proceedings of the Third ACM/SIGMOBILE
Workshop on Foundations of Mobile Computing,
August 2005.
[8] A. Goel and D. Estrin, "Simultaneous Optimization
for Concave Costs: Single Sink Aggregation or Single
Source Buy-at-Bulk," in Proceedings of the 14th
Annual ACM-SIAM Symposium on Discrete
Algorithms, 2003.
[9] "Networked Infomechanical Systems,"
http://www.cens.ucla.edu .
[10] "Center for Embedded Networked Sensing at UCLA,"
http://www.cens.ucla.edu .
[11] J. Polastre, "Design and Implementation of Wireless
Sensor Networks for Habitat Monitoring," Master's
Thesis, University of California at Berkeley, Spring
2003.
[12] A. Mainwaring, R. Szewczyk, J. Anderson, and
J. Polastre, "Habitat Monitoring on Great Duck
Island," http://www.greatduckisland.net.
[13] A. Arora, P. Dutta, and S. Bapat, "Line in the Sand: A
Wireless Sensor Network for Target Detection,
Classification, and Tracking,"
OSU-CISRC-12/03-TR71, 2003.
[14] "ExScal," http://www.cast.cse.ohio-state.edu/exscal/.
[15] S. Corporation, "Chemical/Bio Defense and Sensor
Networks,"
http://www.sentel.com/html/chemicalbio.html .
[16] E. L. Lawler, J. K. Lenstra, A. H. G. R. Kan, and D. B.
Shmoys, The Traveling Salesman Problem : A Guided
Tour of Combinatorial Optimization.
John Wiley &
Sons, 1985.
[17] R. Cristescu, B. Beferull-Lozano, and M. Vetterli, "On
Network Correlated Data Gathering," in Proceedings
of the 23rd Annual Joint Conference of the IEEE
Computer and Communications Societies, vol. 4,
March 2004, pp. 2571-2582.
[18] K. W. Fan, S. Liu, and P. Sinha, "Structure-free Data
Aggregation in Sensor Networks," in
OSU-CISRC-4/06-TR35, Technical Report, Dept of
CSE, OSU, April 2006.
[19] Y. Zhu, K. Sundaresan, and R. Sivakumar, "Practical
Limits on Achievable Energy Improvements and
Useable Delay Tolerance in Correlation Aware Data
Gathering in Wireless Sensor Networks," in IEEE
Second Annual IEEE Communications Society
Conference on Sensor and Ad Hoc Communications
and Networks, September 2005.
[20] K. W. Fan, S. Liu, and P. Sinha, "On the potential of
Structure-free Data Aggregation in Sensor Networks,"
in To be appear in Proceedings of INFOCOM 2006,
April 2006.
[21] C. Intanagonwiwat, R. Govindan, and D. Estrin,
"Directed Diffusion: A Scalable and Robust
Communication Paradigm for Sensor Networks," in
Proceedings of the 6th Annual International
Conference on Mobile Computing and Networking,
August 2000, pp. 56-67.
[22] C. Intanagonwiwat, R. Govindan, D. Estrin,
J. Heidemann, and F. Silva, "Directed Diffusion for
Wireless Sensor Networking," in IEEE/ACM
Transactions on Networking, vol. 11, February 2003,
pp. 2-16.
[23] S. Madden, M. J. Franklin, J. M. Hellerstein, and
W. Hong, "TAG: a Tiny AGgregation Service for
Ad-Hoc Sensor Networks," in Proceedings of the 5th
symposium on Operating systems design and
implementation, December 2002, pp. 131-146.
[24] S. Madden, R. Szewczyk, M. J. Franklin, and
D. Culler, "Supporting Aggregate Queries Over
Ad-Hoc Wireless Sensor Networks," in Proceedings of
the 4th IEEE Workshop on Mobile Computing Systems
and Applications, June 2004, pp. 49-58.
[25] S. Lindsey, C. S. Raghavendra, and K. M. Sivalingam,
"Data Gathering in Sensor Networks using the
Energy*Delay Metric," in Proceedings 15th
International Parallel and Distributed Processing
Symposium, April 2001, pp. 2001-2008.
[26] S. Lindsey and C. Raghavendra, "PEGASIS:
Power-efficient gathering in sensor information
systems," in Proceedings of IEEE Aerospace
Conference, vol. 3, March 2002, pp. 1125-1130.
[27] S. Lindsey, C. Raghavendra, and K. M. Sivalingam,
"Data Gathering Algorithms in Sensor Networks
Using Energy Metrics," in IEEE Transactions on
Parallel and Distributed Systems, vol. 13, September
2002, pp. 924-935.
[28] J. Wong, R. Jafari, and M. Potkonjak, "Gateway
placement for latency and energy efficient data
aggregation," in 29th Annual IEEE International
Conference on Local Computer Networks, November
2004, pp. 490-497.
[29] B. J. Culpepper, L. Dung, and M. Moh, "Design and
Analysis of Hybrid Indirect Transmissions (HIT) for
Data Gathering in Wireless Micro Sensor Networks,"
in ACM SIGMOBILE Mobile Computing and
Communications Review, vol. 8, January 2004, pp. 61-83.
[30] H. F. Salama, D. S. Reeves, and Y. Viniotis,
"Evaluation of Multicast Routing Algorithms for
Real-time Communication on High-speed Networks,"
in IEEE Journal on Selected Area in Communications,
vol. 15, April 1997.
[31] A. Scaglione and S. D. Servetto, "On the
Interdependence of Routing and Data Compression in
Multi-Hop Sensor Networks," in Proceedings of the
8th Annual International Conference on Mobile
Computing and Networking, September 2002, pp. 140-147.
[32] A. Scaglione, "Routing and Data Compression in
Sensor Networks: Stochastic Models for Sensor Data
that Guarantee Scalability," in Proceedings of IEEE
International Symposium on Information Theory, June
2003, p. 174.
[33] R. Cristescu and M. Vetterli, "Power Efficient
Gathering of Correlated Data: Optimization,
NP-Completeness and Heuristics," in Summaries of
MobiHoc 2003 posters, vol. 7, July 2003, pp. 31-32.
[34] S. Pattern, B. Krishnamachari, and R. Govindan, "The
Impact of Spatial Correlation on Routing with
Compression in Wireless Sensor Networks," in
Proceedings of the 3rd International Symposium on
Information Processing in Sensor Networks, April
2004, pp. 28-35.
[35] L. Cai and D. Corneil, "Tree Spanners," in SIAM
Journal of Discrete Mathematics, vol. 8, 1995.
[36] N. Alon, R. M. Karp, D. Peleg, and D. West, "A graph
theoretic game and its application to the k-server
problem," in SIAM Journal of Computing, vol. 24,
1995.
[37] D. Peleg and D. Tendler, "Low Stretch Spanning Trees
for Planar Graphs," in Technical Report MCS01-14,
Mathematics & Computer Science, Weizmann Institute
of Science, 2001.
[38] L. Jia, G. Noubir, R. Rajaraman, and R. Sundaram,
"GIST: Group-Independent Spanning Tree for Data
Aggregation in Dense Sensor Networks," in
International Conference on Distributed Computing in
Sensor Systems, June 2006.
[39] N. Bulusu, J. Heidemann, and D. Estrin, "GPS-less
Low Cost Outdoor Localization For Very Small
Devices," in IEEE Personal Communications, Special
Issue on "Smart Spaces and Environments", vol. 7,
October 2000.
[40] D. Moore, J. Leonard, D. Rus, and S. Teller, "Robust
Distributed Network Localization with Noisy Range
Measurements," in Proceedings of 2nd ACM Sensys,
pp. 50-61.
[41] Crossbow, "Crossbow," http://www.xbow.com.
| ToD;Structure-free;Anycasting;Data Aggregation |
173 | Scalable Mining of Large Disk-based Graph Databases | Mining frequent structural patterns from graph databases is an interesting problem with broad applications. Most of the previous studies focus on pruning unfruitful search subspaces effectively, but few of them address the mining on large, disk-based databases. As many graph databases in applications cannot be held into main memory, scalable mining of large, disk-based graph databases remains a challenging problem. In this paper, we develop an effective index structure, ADI (for adjacency index), to support mining various graph patterns over large databases that cannot be held into main memory. The index is simple and efficient to build. Moreover, the new index structure can be easily adopted in various existing graph pattern mining algorithms. As an example , we adapt the well-known gSpan algorithm by using the ADI structure. The experimental results show that the new index structure enables the scalable graph pattern mining over large databases. In one set of the experiments, the new disk-based method can mine graph databases with one million graphs, while the original gSpan algorithm can only handle databases of up to 300 thousand graphs. Moreover, our new method is faster than gSpan when both can run in main memory. | INTRODUCTION
Mining frequent graph patterns is an interesting research
problem with broad applications, including mining structural patterns from chemical compound databases, plan
databases, XML documents, web logs, citation networks,
and so forth. Several efficient algorithms have been proposed
in the previous studies [2, 5, 6, 8, 11, 9], ranging
from mining graph patterns, with and without constraints,
to mining closed graph patterns.
Most of the existing methods assume implicitly or explicitly
that the databases are not very large, and the graphs
in the database are relatively simple. That is, either the
databases or the major part of them can fit into main memory
, and the number of possible labels in the graphs [6] is
small. For example, [11] reports the performance of gSpan,
an efficient frequent graph pattern mining algorithm, on
data sets of size up to 320 KB, using a computer with 448
MB main memory.
Clearly, the graph database and the
projected databases can be easily accommodated into main
memory.
Under the large main memory assumption, the computation
is CPU-bounded instead of I/O-bounded. Then, the
algorithms focus on effective heuristics to prune the search
space. Few of them address the concern of handling large
graph databases that cannot be held in main memory.
While the previous studies have made excellent progress
in mining graph databases of moderate size, mining large,
disk-based graph databases remains a challenging problem.
When mining a graph database that cannot fit into main
memory, the algorithms have to scan the database and navigate
the graphs repeatedly. The computation becomes I/O-bounded.
For example, we obtain the executable of gSpan from the
authors and test its scalability. In one of our experiments
(details are provided in Section 6),
we increase the number of graphs in the database to test
the scalability of gSpan on the database size. gSpan can
only handle up to 300 thousand graphs. In another experiment
, we increase the number of possible labels in graphs.
We observe that the runtime of gSpan increases exponentially
. It finishes a data set of 300 thousand graphs in 636
seconds when there are only 10 possible labels, but needs
15 hours for a data set of the same size when the number
of possible labels is 45! This result is consistent with the
results reported in [11].
Are there any real-life applications that need to mine large
graph databases? The answer is yes. For example, in data
integration of XML documents or mining semantic web, it is
often required to find the common substructures from a huge
collection of XML documents. It is easy to see applications
with collections of millions of XML documents. There are
hundreds or even thousands of different labels. As another
example, chemical structures can be modeled as graphs. A
chemical database for drug development can contain millions
of different chemical structures, and the number of different
labels in the graphs can easily go up to 100. These large
databases are disk-based and often cannot be held in main
memory.
Why is mining large disk-based graph databases so challenging
? In most of the previous studies, the major data
structures are designed for being held in main memory. For
example, the adjacency-list or adjacency-matrix representations
are often used to represent graphs. Moreover, most of
the previous methods are based on efficient random accesses
to elements (e.g., edges and their adjacent edges) in graphs.
However, if the adjacency-list or adjacency-matrix representations
cannot be held in main memory, the random accesses
to them become very expensive. For disk-based data, without
any index, random accesses can be extremely costly.
Can we make mining large, disk-based graph databases feasible
and scalable? This is the motivation of our study.
Since the bottleneck is the random accesses to the large
disk-based graph databases, a natural idea is to index the
graph databases properly. Designing effective and efficient
index structures is one of the most invaluable exercises in
database research. A good index structure can support a
general category of data access operations. Particularly, a
good index should be efficient and scalable in construction
and maintenance, and fast for data access.
Instead of inventing new algorithms to mine large, disk-based
graph patterns, can we devise an efficient index structure
for graph databases so that mining various graph patterns
can be conducted scalably? Moreover, the index structure
should be easy to be adopted in various existing methods
with minor adaptations.
Stimulated by the above thinking, in this paper, we study
the problem of efficient index for scalable mining of large,
disk-based graph databases, and make the following contributions
.
By analyzing the frequent graph pattern mining problem
and the typical graph pattern mining algorithms
(taking gSpan as an example), we identify several bottleneck
data access operations in mining large, disk-based
graph databases.
We propose ADI (for adjacency index), an effective
index structure for graphs. We show that the major
operations in graph mining can be facilitated efficiently
by an ADI structure. The construction algorithm of
ADI structure is presented.
We adapt the gSpan algorithm by using the ADI structure
on mining large, disk-based graph databases, and
achieve algorithm ADI-Mine. We show that ADI-Mine
outperforms gSpan in mining complex graph databases
and can mine much larger databases than gSpan.
A systematic performance study is reported to verify
our design. The results show that our new index structure
and algorithm are scalable on large data sets.
The remainder of the paper is organized as follows. We
define the problem of frequent graph pattern mining in Section
2. The idea of minimum DFS code and algorithm gSpan
are reviewed in Section 3, and the major data access operations
in graph mining are also identified. The ADI structure
is developed in Section 4. The efficient algorithm ADI-Mine
for mining large, disk-based graph databases using ADI is
presented in Section 5. The experimental results are reported
in Section 6. The related work is discussed in Section
7. Section 8 concludes the paper.
Figure 1: Subgraph and DFS codes. (a) Graph G; (b) Subgraph G'; (c) DFS-tree T_1; (d) DFS-tree T_2.
PROBLEM DEFINITION
In this paper, we focus on undirected labeled simple graphs.
A labeled graph is a 4-tuple G = (V, E, L, l), where V is a set
of vertices, E ⊆ V × V is a set of edges, L is a set of labels,
and l : V ∪ E → L is a labeling function that assigns a label
to an edge or a vertex. We denote the vertex set and the
edge set of a graph G by V(G) and E(G), respectively.
A graph G is called connected if for any vertices u, v ∈ V(G),
there exist vertices w_1, . . . , w_n ∈ V(G) such that
{(u, w_1), (w_1, w_2), . . . , (w_{n-1}, w_n), (w_n, v)} ⊆ E(G).
Frequent patterns in graphs are defined based on subgraph
isomorphism.
Definition 1 (Subgraph isomorphism). Given graphs
G = (V, E, L, l) and G' = (V', E', L', l'). An injective function
f : V' → V is called a subgraph isomorphism from G' to
G if (1) for any vertex u ∈ V', f(u) ∈ V and l'(u) = l(f(u));
and (2) for any edge (u, v) ∈ E', (f(u), f(v)) ∈ E and
l'(u, v) = l(f(u), f(v)).
If there exists a subgraph isomorphism from G' to G, then
G' is called a subgraph of G and G is called a supergraph of
G', denoted as G' ⊆ G.
For example, the graph G' in Figure 1(b) is a subgraph of
G in Figure 1(a).
A graph database is a set of tuples (gid, G), where gid is
a graph identity and G is a graph. Given a graph database
GDB, the support of a graph G' in GDB, denoted as sup(G')
for short, is the number of graphs in the database that are
supergraphs of G', i.e., |{(gid, G) ∈ GDB | G' ⊆ G}|.
For a support threshold min sup (0 ≤ min sup ≤ |GDB|),
a graph G' is called a frequent graph pattern if sup(G') ≥
min sup. In many applications, users are only interested
in the frequent recurring components of graphs. Thus, we
put a constraint on the graph patterns: we only find the
frequent graph patterns that are connected.
Problem definition. Given a graph database GDB and
a support threshold min sup. The problem of mining frequent
connected graph patterns is to find the complete set of
connected graphs that are frequent in GDB.
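As a toy illustration of these definitions (our own minimal encoding, handling only single-edge patterns; general patterns require the subgraph isomorphism test defined above), the support of a labeled edge over a small graph database can be computed as:

    from collections import namedtuple

    # A labeled graph: vertex labels plus labeled undirected edges.
    Graph = namedtuple("Graph", ["vlabels", "edges"])   # vlabels: {v: label}, edges: {(u, v): label}

    def contains_edge(g, lu, le, lv):
        # Does graph g contain an edge labeled le whose endpoints carry labels lu and lv?
        for (u, v), lab in g.edges.items():
            if lab != le:
                continue
            a, b = g.vlabels[u], g.vlabels[v]
            if (a, b) == (lu, lv) or (a, b) == (lv, lu):
                return True
        return False

    def edge_support(gdb, lu, le, lv):
        # sup of the one-edge pattern (lu, le, lv): the number of graphs containing it.
        return sum(1 for _gid, g in gdb if contains_edge(g, lu, le, lv))

    g1 = Graph({1: "A", 2: "B", 3: "C"}, {(1, 2): "a", (1, 3): "d"})
    g2 = Graph({1: "A", 2: "C", 3: "C"}, {(1, 2): "d", (2, 3): "d"})
    gdb = [(1, g1), (2, g2)]
    print(edge_support(gdb, "A", "d", "C"))   # 2: both graphs contain an (A, d, C) edge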
MINIMUM DFS CODE AND GSPAN
In [11], Yan and Han developed the lexicographic ordering
technique to facilitate graph pattern mining. They
also proposed an efficient algorithm, gSpan, one of the most
efficient graph pattern mining algorithms so far.
In this
section, we review the essential ideas of gSpan, and point
out the bottlenecks in the graph pattern mining from large
disk-based databases.
3.1
Minimum DFS Code
In order to enumerate all frequent graph patterns efficiently
, we want to identify a linear order on a representation
of all graph patterns such that if two graphs are in identical
representation, then they are isomorphic. Moreover, all the
(possible) graph patterns can be enumerated in the order
without any redundancy.
The depth-first search tree (DFS-tree for short) [3] is popularly
used for navigating connected graphs.
Thus, it is
natural to encode the edges and vertices in a graph based
on its DFS-tree. All the vertices in G can be encoded in
the pre-order of T . However, the DFS-tree is generally not
unique for a graph. That is, there can be multiple DFS-trees
corresponding to a given graph.
For example, Figures 1(c) and 1(d) show two DFS-trees of
the graph G in Figure 1(a). The thick edges in Figures 1(c)
and 1(d) are those in the DFS-trees, and are called forward
edges, while the thin edges are those not in the DFS-trees,
and are called backward edges. The vertices in the graph
are encoded v_0 to v_3 according to the pre-order of the corresponding
DFS-trees.
To solve the uniqueness problem, a minimum DFS code
notation is proposed in [11].
For any connected graph G, let T be a DFS-tree of G.
Then, an edge is always listed as (v_i, v_j) such that i < j. A
linear order ≺ on the edges in G can be defined as follows.
Given edges e = (v_i, v_j) and e' = (v_i', v_j'). e ≺ e' if (1)
when both e and e' are forward edges (i.e., in DFS-tree T),
j < j' or (i > i' ∧ j = j'); (2) when both e and e' are
backward edges (i.e., edges not in DFS-tree T), i < i' or
(i = i' ∧ j < j'); (3) when e is a forward edge and e' is a
backward edge, j ≤ i'; or (4) when e is a backward edge and
e' is a forward edge, i < j'.
For a graph G and a DFS-tree T, a list of all edges in
E(G) in order ≺ is called the DFS code of G with respect to
T, denoted as code(G, T). For example, the DFS code with
respect to the DFS-tree T_1 in Figure 1(c) is code(G, T_1) =
(v_0, v_1, x, a, x)-(v_1, v_2, x, a, z)-(v_2, v_0, z, b, x)-(v_1, v_3, x, b, y),
where an edge (v_i, v_j) is written as (v_i, v_j, l(v_i), l(v_i, v_j), l(v_j)),
i.e., the labels are included. Similarly, the DFS code
with respect to the DFS-tree T_2 in Figure 1(d) is
code(G, T_2) = (v_0, v_1, y, b, x)-(v_1, v_2, x, a, x)-(v_2, v_3, x, b, z)-(v_3, v_1, z, a, x).
Suppose there is a linear order over the label set L. Then,
for DFS-trees T_1 and T_2 on the same graph G, their DFS
codes can be compared lexically according to the labels of
the edges. For example, we have code(G, T_1) < code(G, T_2)
in Figures 1(c) and 1(d).
The lexically minimum DFS code is selected as the representation
of the graph, denoted as min(G). In our example
in Figure 1, min(G) = code(G, T_1).
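The order ≺ translates directly into code. In the following sketch (our own helper; edges are given as (i, j) pairs of DFS-tree vertex indices, with tree (forward) edges written with i < j and backward edges with i > j, as in the examples of code(G, T_1) and code(G, T_2) above), precedes(e, e2) returns True when e comes before e2 under the four rules:

    def precedes(e, e2):
        # True if edge e comes before e2 in the DFS-code order defined above.
        (i, j), (i2, j2) = e, e2
        fwd, fwd2 = i < j, i2 < j2
        if fwd and fwd2:                     # (1) both forward
            return j < j2 or (i > i2 and j == j2)
        if not fwd and not fwd2:             # (2) both backward
            return i < i2 or (i == i2 and j < j2)
        if fwd and not fwd2:                 # (3) forward before backward
            return j <= i2
        return i < j2                        # (4) backward before forward

    # The edge sequence of code(G, T_1) in Figure 1(c) respects this order:
    edges = [(0, 1), (1, 2), (2, 0), (1, 3)]
    print(all(precedes(edges[k], edges[k + 1]) for k in range(len(edges) - 1)))   # True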
Minimum DFS code has a nice property: two graphs G
and G' are isomorphic if and only if min(G) = min(G').
Moreover, with the minimum DFS code of graphs, the problem
of mining frequent graph patterns is reduced to mining
frequent minimum DFS codes, which are sequences, with
some constraints that preserve the connectivity of the graph
patterns.

Input: a DFS code s, a graph database GDB and min sup
Output: the frequent graph patterns
Method:
    if s is not a minimum DFS code then return;
    output s as a pattern if s is frequent in GDB;
    let C = ∅;
    scan GDB once, find every edge e such that
        e can be concatenated to s to form a DFS code s⋄e
        and s⋄e is frequent; C = C ∪ {s⋄e};
    sort the DFS codes in C in lexicographic order;
    for each s⋄e ∈ C in lexicographic order do
        call gSpan(s⋄e, GDB, min sup);
    return;
Figure 2: Algorithm gSpan.
3.2
Algorithm gSpan
Based on the minimum DFS codes of graphs, a depth-first
search, pattern-growth algorithm, gSpan, is developed
in [11], as shown in Figure 2. The central idea is to conduct
a depth-first search of minimum DFS codes of possible
graph patterns, and obtain longer DFS codes of larger
graph patterns by attaching new edges to the end of the
minimum DFS code of the existing graph pattern.
The
anti-monotonicity of frequent graph patterns, i.e., any super
pattern of an infrequent graph pattern cannot be frequent, is
used to prune.
Comparing to the previous methods on graph pattern
mining, gSpan is efficient, since gSpan employs the smart
idea of minimum DFS codes of graph patterns that facilitates
the isomorphism test and pattern enumeration. Moreover
, gSpan inherits the depth-first search, pattern-growth
methodology to avoid any candidate-generation-and-test. As
reported in [11], the advantages of gSpan are verified by the
experimental results on both real data sets and synthetic
data sets.
3.3
Bottlenecks in Mining Disk-based Graph
Databases
Algorithm gSpan is efficient when the database can be
held into main memory. For example, in [11], gSpan is scalable
for databases of size up to 320 KB using a computer
with 448 MB main memory. However, it may encounter difficulties
when mining large databases. The major overhead
is that gSpan has to randomly access elements (e.g., edges
and vertices) in the graph database as well as the projections
of the graph database many times. For databases that
cannot be held into main memory, the mining becomes I/O
bounded and thus is costly.
Random accesses to elements in graph databases and checking
the isomorphism are not unique to gSpan. Instead, such
operations are extensive in many graph pattern mining algorithms
, such as FSG [6] (another efficient frequent graph
pattern mining algorithm) and CloseGraph [9] (an efficient
algorithm for mining frequent closed graph patterns).
In mining frequent graph patterns, the major data access
operations are as follows.
OP1: Edge support checking. Find the support of an
edge (l_u, l_e, l_v), where l_u and l_v are the labels of the vertices
and l_e is the label of the edge, respectively;
OP2: Edge-host graph checking. For an edge e = (l_u, l_e, l_v),
find the graphs in the database where e appears;
OP3: Adjacent edge checking. For an edge e = (l_u, l_e, l_v),
find the adjacent edges of e in the graphs
where e appears, so that the adjacent edges can be
used to expand the current graph pattern to larger
ones.
Each of the above operations may happen many times
during the mining of frequent graph patterns. Without an
appropriate index, each of the above operations may have to
scan the graph database or its projections. If the database
and its projections cannot fit into main memory, the scanning
and checking can be very costly.
Can we devise an index structure so that the related information
can be kept and all the above operations can be
achieved using the index only, and thus without scanning
the graph database and checking the graphs? This motivates
the design of the ADI structure.
THE ADI STRUCTURE
In this section we will devise an effective data structure,
ADI (for adjacency index), to facilitate the scalable mining
of frequent graph patterns from disk-based graph databases.
4.1
Data Structure
The ADI index structure is a three-level index for edges,
graph-ids and adjacency information. An example is shown
in Figure 3, where two graphs, G_1 and G_2, are indexed.
4.1.1
Edge Table
There can be many edges in a graph database. The edges
are often retrieved by the labels during the graph pattern
mining, such as in the operations identified in Section 3.3.
Therefore, the edges are indexed by their labels in the ADI
structure.
In ADI, an edge e = (u, v) is recorded as a tuple
(l(u), l(u, v), l(v)) in the edge table, and is indexed by the
labels of the vertices, i.e., l(u) and l(v), and the label of
the edge itself, i.e., l(u, v). Each edge appears only once in
the edge table, no matter how many times it appears in the
graphs. For example, in Figure 3, edge (A, d, C) appears
once in graph G_1 and twice in graph G_2. However, there
is only one entry for the edge in the edge table in the ADI
structure.
All edges in the edge table in the ADI structure are sorted.
When the edge table is stored on disk, a B+-tree is built on
the edges. When part of the edge table is loaded into main
memory, it is organized as a sorted list. Thus, binary search
can be conducted.
4.1.2
Linked Lists of Graph-ids
For each edge e, the identities of the graphs that contain
e form a linked list of graph-ids. Graph-id G_i is in the list
of edge e if and only if there exists at least one instance of e
in G_i. For example, in Figure 3, both G_1 and G_2 appear in
the list of edge (A, d, C), since the edge appears in G_1 once
and in G_2 twice. Please note that the identity of graph G_i
appears in the linked list of edge e only once if e appears in
G_i, no matter how many times edge e appears in G_i.
Figure 3: An ADI structure.
The graph-ids of an edge are stored together. Therefore
, given an edge, it is efficient to retrieve all the identities
of graphs that contain the edge.
Every entry in the edge table is linked to its graph-id
linked list. By this linkage, the operation OP2: edge-host
graph checking can be conducted efficiently. Moreover, to
facilitate operation OP1: edge support checking, the length
of the graph-id linked list, i.e., the support of an edge, is
registered in the edge table.
4.1.3
Adjacency Information
The edges in a graph are stored as a list of the edges
encoded. Adjacent edges are linked together by the common
vertices, as shown in Figure 3. For example, in block 1,
all the vertices having the same label (e.g., 1) are linked
together as a list. Since each edge has two vertices, only
two pointers are needed for each edge.
Moreover, all the edges in a graph are physically stored
in one block on disk (or on consecutive blocks if more space
is needed), so that the information about a graph can be
retrieved by reading one or several consecutive blocks from
disk. Often, when the graph is not large, a disk-page (e.g.,
of size 4k) can hold more than one graph.
Encoded edges recording the adjacency information are
linked to the graph-ids that are further associated with the
edges in the edge table.
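As a rough in-memory picture of the three levels (a sketch under the simplifying assumption that everything fits in Python dictionaries; the on-disk block layout, the B+-tree indexing and the minimum-DFS-code encoding of vertices are not modeled):

    from collections import defaultdict

    def build_adi(gdb):
        # gdb: list of (gid, edges) with edges given as (u, v, lu, le, lv) tuples.
        # Returns the three levels:
        #   edge_table[(lu, le, lv)] -> support, graph_ids[(lu, le, lv)] -> [gid, ...],
        #   adjacency[gid] -> the encoded edges of that graph (its "block").
        edge_table = defaultdict(int)
        graph_ids = defaultdict(list)
        adjacency = {}
        for gid, edges in gdb:
            adjacency[gid] = list(edges)             # level 3: one block per graph
            seen = set()
            for (u, v, lu, le, lv) in edges:
                key = (lu, le, lv)                   # canonical label ordering is glossed over here
                if key not in seen:                  # a gid is listed once per distinct edge
                    seen.add(key)
                    graph_ids[key].append(gid)       # level 2
                    edge_table[key] += 1             # level 1: support of the edge
        return edge_table, graph_ids, adjacency

    gdb = [("G1", [(1, 2, "A", "a", "B"), (1, 3, "A", "d", "C")]),
           ("G2", [(1, 2, "A", "d", "C"), (2, 3, "C", "d", "C")])]
    edge_table, graph_ids, adjacency = build_adi(gdb)
    print(edge_table[("A", "d", "C")], graph_ids[("A", "d", "C")])   # 2 ['G1', 'G2']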
4.2
Space Requirement
The storage of an ADI structure is flexible. If the graph
database is small, then the whole index can be held in
main memory. On the other hand, if the graph database
is large and thus the ADI structure cannot fit into main
memory, some levels can be stored on disk. The level of
adjacency information is the most detailed and can be put
on disk. If the main memory is too small to hold the graph-id
linked lists, they can also be accommodated on disk. In
the extreme case, even the edge table can be held on disk
and a B+-tree or hash index can be built on the edge table.
Theorem 1 (Space complexity). For graph database
GDB = {G_1, . . . , G_n}, the space complexity is
O(Σ_{i=1}^{n} |E(G_i)|).
Proof. The space complexity is determined by the following
facts. (1) The number of tuples in the edge table is equal to
the number of distinct edges in the graph database, which is
bounded by Σ_{i=1}^{n} |E(G_i)|; (2) the number of entries in the
graph-id linked lists in the worst case is the number of edges
in the graph database, i.e., Σ_{i=1}^{n} |E(G_i)| again; and (3) the
adjacency information part records every edge exactly once.
Please note that, in many applications, it is reasonable to
assume that the edge table can be held in main memory.
For example, suppose we have 1,000 distinct vertex labels
and 1,000 distinct edge labels. There can be up to
1000 × 999 / 2 × 1000 = 4.995 × 10^8 different edges, i.e., all possible
combinations of vertex and edge labels. Suppose up to 1% of the
edges are frequent; then there are fewer than 5 million different
edges, and thus the edge table can easily be held in main
memory.
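A quick check of the arithmetic in this example:

    vertex_labels = 1000
    edge_labels = 1000
    distinct_edges = vertex_labels * (vertex_labels - 1) // 2 * edge_labels
    print(distinct_edges)               # 499500000, i.e. 4.995 * 10**8
    print(int(distinct_edges * 0.01))   # 4995000 -- under 5 million if only 1% are frequent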
In real applications, the graphs are often sparse, that is,
not all possible combinations of vertex and edge labels appear
in the graphs as an edge. Moreover, users are often
interested in only those frequent edges. That shrinks the
edge table substantially.
4.3
Search Using ADI
Now, let us examine how the ADI structure can facilitate
the major data access operations in graph pattern mining
that are identified in Section 3.3.
OP1: Edge support checking Once an ADI structure is
constructed, this information is registered on the edge
table for every edge. We only need to search the edge
table, which is either indexed (when the table is on
disk) or can be searched using binary search (when
the table is in main memory).
In some cases, we may need to count the support of an
edge in a subset of the graphs in GDB. Then, the linked
list of the graph-ids of the edge is searched. There is
no need to touch any record in the adjacency information
part. That is, we do not need to search any detail
about the edges. Moreover, for counting supports of
edges in projected databases, we can maintain the support
of each edge in the current projected database and
thus we do not even search the graph-id linked lists.
OP2: Edge-host graph checking We only need to search
the edge table for the specific edge and follow the link
from the edge to the list of graph-ids. There is no
need to search any detail from the part of adjacency
information.
OP3: Adjacent edge checking Again, we start from an
entry in the edge table and follow the links to find
the list of graphs where the edge appears. Then, only
the blocks containing the details of the instances of the
edge are visited, and there is no need to scan the whole
database. The average I/O complexity is O(log n +
m + l), where n is the number of distinct edges in the
graph, m is the average number of graph-ids in the
linked lists of edges, and l is the average number of
blocks occupied by a graph. In many applications, m
is orders of magnitude smaller than n, and l is a
very small number (e.g., 1 or 2).

Input: a graph database GDB and min sup
Output: the ADI structure
Method:
    scan GDB once, find the frequent edges;
    initialize the edge table for frequent edges;
    for each graph do
        remove infrequent edges;
        compute the minimum DFS code [11];
        use the DFS-tree to encode the vertices;
        store the edges in the graph onto disk and form
            the adjacency information;
        for each edge do
            insert the graph-id to the graph-id list
                associated with the edge;
            link the graph-id to the related adjacency
                information;
        end for
    end for
Figure 4: Algorithm of ADI construction.
The algorithms for the above operations are simple. Limited
by space, we omit the details here. As can be seen,
once the ADI structure is constructed, there is no need to
scan the database for any of the above operations. That is,
the ADI structure can support the random accesses and the
mining efficiently.
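Continuing the dictionary-based sketch from Section 4.1 (our own simplification, not the actual disk-based implementation), the three operations reduce to lookups that never scan graphs outside the relevant graph-id lists:

    def op1_edge_support(edge_table, lu, le, lv):
        # OP1: support of the edge -- a single edge-table lookup.
        return edge_table.get((lu, le, lv), 0)

    def op2_host_graphs(graph_ids, lu, le, lv):
        # OP2: identities of the graphs containing the edge -- follow one link.
        return graph_ids.get((lu, le, lv), [])

    def op3_adjacent_edges(graph_ids, adjacency, lu, le, lv):
        # OP3: edges adjacent to an instance of (lu, le, lv); only the blocks of
        # host graphs are read, never the whole database.
        result = []
        for gid in graph_ids.get((lu, le, lv), []):
            block = adjacency[gid]
            host_vertices = {w for (u, v, a, e, b) in block
                             if (a, e, b) == (lu, le, lv) for w in (u, v)}
            for (u, v, a, e, b) in block:
                if (a, e, b) != (lu, le, lv) and (u in host_vertices or v in host_vertices):
                    result.append((gid, (u, v, a, e, b)))
        return result

    edge_table = {("A", "d", "C"): 2}
    graph_ids = {("A", "d", "C"): ["G1", "G2"]}
    adjacency = {"G1": [(1, 2, "A", "a", "B"), (1, 3, "A", "d", "C")],
                 "G2": [(1, 2, "A", "d", "C"), (2, 3, "C", "d", "C")]}
    print(op1_edge_support(edge_table, "A", "d", "C"))    # 2
    print(op2_host_graphs(graph_ids, "A", "d", "C"))      # ['G1', 'G2']
    print(op3_adjacent_edges(graph_ids, adjacency, "A", "d", "C"))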
4.4
Construction of ADI
Given a graph database, the corresponding ADI structure
is easy to construct by scanning the database only twice.
In the first scan, the frequent edges are identified. According
to the apriori property of frequent graph patterns, only
those frequent edges can appear in frequent graph patterns
and thus should be indexed in the ADI structure. After the
first scan, the edge table of frequent edges is initialized.
In the second scan, graphs in the database are read and
processed one by one. For each graph, the vertices are encoded
according to the DFS-tree in the minimum DFS code,
as described in [11] and Section 3. Only the vertices involved
in some frequent edges should be encoded. Then, for each
frequent edge, the graph-id is inserted into the corresponding
linked list, and the adjacency information is stored. The
sketch of the algorithm is shown in Figure 4.
Cost Analysis
There are two major costs in the ADI construction: writing
the adjacency information and updating the linked lists of
graph-ids. Since all edges in a graph will reside on a disk
page or several consecutive disk pages, the writing of adjacency
information is sequential. Thus, the cost of writing
adjacency information is comparable to that of making a
copy of the original database plus some bookkeeping.
Figure 5: The adjacency-list and adjacency-matrix representations of
graphs: (a) the graph and the adjacency-lists; (b) the adjacency-matrix.
Updating the linked lists of graph-ids requires random
accesses to the edge table and the linked lists. In many cases, the edge table can be held in main memory, but not the linked lists. Therefore, it is important to cache the linked lists of graph-ids in a buffer. The linked lists can be cached according to the frequency of the corresponding edges.
Constructing the ADI structure for a large, disk-based graph database may not be cheap. However, the ADI structure can be built once and used by the mining many times. That is, we can build an ADI structure using a very low support threshold, or even set min_sup = 1. (If min_sup = 1, then the ADI structure can be constructed by scanning the graph database only once: there is no need to find the frequent edges, since every edge appearing in the graph database is frequent.) The index is stored on disk. Then, future mining runs can use the index directly, as long as their support threshold is no less than the one used in the ADI structure construction.
4.5
Projected Databases Using ADI
Many depth-first search, pattern-growth algorithms utilize projected databases. During the depth-first search in graph pattern mining, the graphs containing the current graph pattern P should be collected to form the P-projected database. Then, the further search for larger graph patterns having P as the prefix of their minimum DFS codes can be achieved by searching only the P-projected database.
Interestingly, the projected databases can be constructed using ADI structures. A projected database can be stored in the form of an ADI structure. In fact, only the edge table and the list of graph-ids need to be constructed for a new projected database, and the adjacency information residing on disk can be shared by all projected databases, as sketched below. That can save a lot of time and space when mining large graph databases that contain many graph patterns, where many projected databases may have to be constructed.
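A minimal sketch of this idea in Python: only the edge table and the graph-id lists are rebuilt for the projected database, while the adjacency information is shared by reference. The helper name `project` and the way the supporting graph-ids are obtained are assumptions made for illustration, not the paper's API.

# Projected database = reduced edge table + graph-id lists; the (disk-resident)
# adjacency information is untouched and shared by all projected databases.
def project(edge_table, supporting_gids):
    projected = {}
    for edge, gids in edge_table.items():
        kept = [g for g in gids if g in supporting_gids]
        if kept:
            projected[edge] = kept
    return projected

full_edge_table = {("A", "a", "B"): [0, 1, 2], ("B", "b", "C"): [1, 2]}
# supporting_gids would come from testing which graphs contain pattern P.
p_projected = project(full_edge_table, supporting_gids={1, 2})
print(p_projected)   # {('A', 'a', 'B'): [1, 2], ('B', 'b', 'C'): [1, 2]}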
4.6
Why Is ADI Good for Large Databases?
In most of the previous methods for graph pattern mining,
the adjacency-list or adjacency-matrix representations are
used to represent graphs. Each graph is represented by an
adjacency-matrix or a set of adjacency-lists. An example is
shown in Figure 5.
In Figure 5(a), the adjacency-lists have 8 nodes and 8 pointers. They store the same information as Block 1 in Figure 3, where the block has 4 nodes and 12 pointers.
The space requirements of the adjacency-lists and the ADI structure are comparable. From the figure, we can see that each edge in a graph has to be stored twice in the adjacency-lists: one instance for each vertex. (If we want to remove this redundancy, the tradeoff is a substantial increase in the cost of finding adjacency information.) In general, for a graph of n edges, the adjacency-list representation needs 2n nodes and 2n pointers. An ADI structure stores each edge once, and uses the linkage among the edges from the same vertex to record the adjacency information. In general, for a graph of n edges, it needs n nodes and 3n pointers.
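The following back-of-the-envelope Python sketch restates this space comparison with illustrative record layouts. Which three links the ADI structure actually keeps per edge record is our own guess at a plausible layout, not taken from the paper.

# Illustrative record layouts only; the exact pointer fields are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdjListNode:                 # one per (vertex, incident edge): 2 per edge
    neighbor: int
    edge_label: str
    next: Optional["AdjListNode"] = None   # 1 pointer per node -> 2n pointers

@dataclass
class AdiEdgeRecord:               # one per edge: n records
    u: int
    v: int
    edge_label: str
    next_from_u: Optional["AdiEdgeRecord"] = None  # link among edges of vertex u
    next_from_v: Optional["AdiEdgeRecord"] = None  # link among edges of vertex v
    graph_link: Optional[object] = None            # link from the graph-id entry
    # three links per record -> 3n pointers in total (our assumed breakdown)

def space(n_edges):
    # returns (number of nodes/records, number of pointers)
    return {"adjacency-list": (2 * n_edges, 2 * n_edges),
            "ADI": (n_edges, 3 * n_edges)}

print(space(1000))   # {'adjacency-list': (2000, 2000), 'ADI': (1000, 3000)}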
What, then, is the advantage of the ADI structure over the adjacency-list representation? The key advantage is that the ADI structure extracts the information about the containment of edges in graphs into the first two levels (i.e., the edge table and the linked lists of graph-ids). Therefore, in many operations, such as edge support checking and edge-host graph checking, there is no need to visit the adjacency information at all. In contrast, if the adjacency-list representation is used, every operation has to check the linked lists. When the database is so large that either the adjacency-lists of all graphs or the adjacency information in the ADI structure cannot be accommodated in main memory, using the first two levels of the ADI structure saves many accesses to the adjacency information, while the adjacency-lists of the various graphs would have to be transferred between main memory and disk many times.
Usually, the adjacency-matrix is sparse; the adjacency-matrix representation is therefore inefficient in space and is not used here.
ALGORITHM ADI-MINE
With the help from the ADI structure, how can we improve
the scalability and efficiency of frequent graph pattern
mining? Here, we present a pattern-growth algorithm ADI-Mine
, which is an improvement of algorithm gSpan. The
algorithm is shown in Figure 6.
If the ADI structure is unavailable, then the algorithm
scans the graph database and constructs the index. Otherwise
, it just uses the ADI structure on the disk.
The frequent edges can be obtained from the edge table in
the ADI structure. Each frequent edge is one of the smallest
frequent graph patterns and thus should be output. Then,
the frequent edges should be used as the "seeds" to grow larger frequent graph patterns, and the frequent adjacent edges of each seed edge e should be used in the pattern-growth. An edge e' is a frequent adjacent edge of e if e' is an adjacent edge of e in at least min_sup graphs. The set of frequent adjacent edges can be retrieved efficiently from the ADI structure, since the identities of the graphs containing e are indexed as a linked list, and the adjacent edges are also indexed in the adjacency information part of the ADI structure.
The pattern growth is implemented as calls to procedure subgraph-mine. Procedure subgraph-mine tries every frequent adjacent edge e' (i.e., every edge in the set F_e) and checks whether e' can be added into the current frequent graph pattern G to form a larger pattern G'. We use the DFS code to test for redundancy: only the patterns G' whose DFS codes are minimum are output and further grown. All other patterns G' are either found before or will be found later at other
Input: a graph database GDB and min_sup
Output: the complete set of frequent graph patterns
Method:
  construct the ADI structure for the graph database if it is not available;
  for each frequent edge e in the edge table do
    output e as a graph pattern;
    from the ADI structure, find set F_e, the set of frequent adjacent edges for e;
    call subgraph-mine(e, F_e);
  end for

Procedure subgraph-mine
Parameters: a frequent graph pattern G, and the set of frequent adjacent edges F_e
// output the frequent graph patterns whose
// minimum DFS-codes contain that of G as a prefix
Method:
  for each edge e' in F_e do
    let G' be the graph obtained by adding e' into G;
    compute the DFS code of G'; if the DFS code is not minimum, then return;
    output G' as a frequent graph pattern;
    update the set F_e' of adjacent edges;
    call subgraph-mine(G', F_e');
  end for
  return;

Figure 6: Algorithm ADI-Mine.
branches. The correctness of this step is guaranteed by the
property of DFS code [11].
Once a larger pattern G' is found, the set of adjacent edges of the current pattern should be updated, since the adjacent edges of the newly inserted edge should also be considered in the future growth from G'. This update operation can be
implemented efficiently, since the identities of graphs that
contain an edge e are linked together in the ADI structure,
and the adjacency information is also indexed and linked
according to the graph-ids.
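To show just the recursion and pruning shape of ADI-Mine in runnable form, here is a deliberately over-simplified Python sketch: patterns are reduced to sets of edge-label triples, connectivity and subgraph isomorphism are ignored, and the minimum-DFS-code test of [11] is replaced by an explicit `seen` set for duplicate elimination. It is not graph mining proper, only an illustration of the control flow of Figure 6.

# Simplified pattern-growth recursion; NOT real graph mining (see lead-in).
def mine(gdb, min_sup):
    graphs = [frozenset(g) for g in gdb]          # each graph as a set of edges
    def support(pattern):
        return sum(1 for g in graphs if pattern <= g)
    all_edges = sorted({e for g in graphs for e in g})
    frequent_edges = [e for e in all_edges if support(frozenset([e])) >= min_sup]
    seen, output = set(), []

    def grow(pattern):
        for e in frequent_edges:                  # candidate "adjacent" edges
            new = pattern | {e}
            if new == pattern or new in seen:     # duplicate branch: prune
                continue
            seen.add(new)
            if support(new) < min_sup:            # infrequent extension: prune
                continue
            output.append(new)
            grow(new)                             # recurse on the larger pattern

    for e in frequent_edges:                      # seeds, read off the edge table
        p = frozenset([e])
        seen.add(p)
        output.append(p)
        grow(p)
    return output

gdb = [
    [("A", "a", "B"), ("B", "b", "C")],
    [("A", "a", "B"), ("B", "b", "C"), ("C", "d", "A")],
    [("A", "a", "B")],
]
for p in mine(gdb, min_sup=2):
    print(sorted(p))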
Differences Between ADI-Mine and gSpan
At a high level, the structure as well as the search strategies of ADI-Mine and gSpan are similar. The critical difference is in the storage structure for graphs: ADI-Mine uses the ADI structure, while gSpan uses the adjacency-list representation.
In the recursive mining, the critical operation is finding
the graphs that contain the current graph pattern (i.e.,
the test of subgraph isomorphism) and finding the adjacent
edges to grow larger graph patterns.
The current graph
pattern is recorded using the labels. Thus, the edges are
searched using the labels of the vertices and of the edges.
In gSpan, the test of subgraph isomorphism is achieved
by scanning the current (projected) database.
Since the
graphs are stored in adjacency-list representation, and one
label may appear more than once in a graph, the search can
be costly. For example, in graph G2 in Figure 3, in order
to find an edge (C, d, A), the adjacency-list for vertices 4
and 6 may have to be searched. If the graph is large and
the labels appear multiple times in a graph, there may be
many adjacency-lists for vertices of the same label, and the
adjacency-lists are long.
Moreover, for a large graph database that cannot be held in main memory, the adjacency-list representation of a graph has to be loaded into main memory before the graph can be searched.
In ADI-Mine, the graphs are stored in the ADI structure.
The edges are indexed by their labels. Then, the graphs that
contain the edges can be retrieved immediately. Moreover,
all edges with the same labels are linked together by the
links between the graph-id and the instances. That helps
the test of subgraph isomorphism substantially.
Furthermore, using the index of edges by their labels, only
the graphs that contain the specific edge will be loaded into
main memory for further subgraph isomorphism test. Irrelevant
graphs can be filtered out immediately by the index.
When the database is too large to fit into main memory, this saves a substantial number of graph transfers between disk and main memory.
EXPERIMENTAL RESULTS
In this section, we report a systematic performance study
on the ADI structure and a comparison of gSpan and ADI-Mine
on mining both small, memory-based databases and
large, disk-based databases. We obtain the executable of
gSpan from the authors. The ADI structure and algorithm
ADI-Mine are implemented using C/C++.
6.1
Experiment Setting
All the experiments are conducted on an IBM NetFinity
5100 machine with an Intel PIII 733MHz CPU, 512M RAM
and an 18G hard disk. The speed of the hard disk is 10,000 RPM. The operating system is Redhat Linux 9.0.
We implement a synthetic data generator following the
procedure described in [6]. The data generator takes five
parameters as follows.
D: the total number of graphs in the data set
T: the average number of edges in graphs
I: the average number of edges in potentially frequent graph patterns (i.e., the frequent kernels)
L: the number of potentially frequent kernels
N: the number of possible labels
Please refer to [6] for the details of the data generator.
For example, a data set D10kN4I10T20L200 means that
the data set contains 10k graphs; there are 4 possible labels;
the average number of edges in the frequent kernel graphs
is 10; the average number of edges in the graphs is 20; and
the number of potentially frequent kernels is 200. Hereafter
in this section, when we say "parameters", it means the
parameters for the data generator to create the data sets.
In [11], L is fixed to 200. In our experiments, we also set
L = 200 as the default value, but will test the scalability of
our algorithm on L as well.
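For readers decoding the data-set names used below, a small illustrative Python helper (not from the paper) that parses the naming convention:

# Decodes names such as "D10kN4I10T20L200" or "D100k-1mN30I5T20L200".
import re

def parse_dataset_name(name):
    return dict(re.findall(r"([DTILN])(\d+k?m?(?:-\d+k?m?)?)", name))

print(parse_dataset_name("D10kN4I10T20L200"))
# {'D': '10k', 'N': '4', 'I': '10', 'T': '20', 'L': '200'}
print(parse_dataset_name("D100k-1mN30I5T20L200"))
# {'D': '100k-1m', 'N': '30', 'I': '5', 'T': '20', 'L': '200'}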
Please note that, in all experiments, the runtime of ADI-Mine
includes both the ADI construction time and the mining
time.
6.2
Mining Main Memory-based Databases
In this set of experiments, both gSpan and ADI-Mine run
in main memory.
6.2.1
Scalability on Minimum Support Threshold
We test the scalability of gSpan and ADI-Mine on the minimum
support threshold. Data set D100kN30I5T20L200 is
used. The minimum support threshold varies from 4% to
10%. The results are shown in Figure 7(a).
As can be seen, both gSpan and ADI-Mine are scalable,
but ADI-Mine is about 10 times faster. We discussed the
result with Mr. X. Yan, the author of gSpan. He confirms
that counting frequent edges in gSpan is time consuming.
On the other hand, the construction of ADI structure is
relatively efficient. When the minimum support threshold
is set to 1, i.e., all edges are indexed, the ADI structure uses approximately 57M of main memory and takes 86 seconds to construct.
6.2.2
Scalability on Database Size
We test the scalability of gSpan and ADI-Mine on the
size of databases. We fix the parameters N = 30, I = 5,
T = 20 and L = 200, and vary the number of graphs in
database from 50 thousand to 100 thousand. The minimum
support threshold is set to 1% of the number of graphs in
the database. The results are shown in Figure 7(b). The
construction time of ADI structure is also plotted in the
figure.
Both the algorithms and the construction of ADI structure
are linearly scalable on the size of databases. ADI-Mine
is faster. We observe that the size of ADI structure is also
scalable. For example, it uses 28M when the database has 50
thousand graphs, and 57M when the database has 100 thousand
graphs. This observation concurs with Theorem 1.
6.2.3
Effects of Data Set Parameters
We test the scalability of the two algorithms on parameter
N, the number of possible labels. We use data set D100kN20-50I5T20L200, that is, the N value varies from 20 to 50. The minimum support threshold is fixed at 1%. The results are shown in Figure 7(c). Please note that the Y-axis is in logarithmic scale.
We can observe that the runtime of gSpan increases exponentially as N increases. This result is consistent with the result reported in [11] (see Figures 5(b) and 5(c) in the UIUC technical report version of [11]).
When there are many possible
labels in the database, the search without index becomes
dramatically more costly. Interestingly, both ADI-Mine and
the construction of ADI structure are linearly scalable on N .
As discussed before, the edge table in ADI structure only indexes
the unique edges in a graph database. Searching using
the indexed edge table is efficient. The time complexity of
searching an edge by labels is O(log n), where n is the number
of distinct edges in the database. This is not affected by
the increase of the possible labels. As expected, the size of
the ADI structure is stable, about 57M in this experiment.
We use data set D100kN30I5T10-30L200 to test the scalability of the two algorithms on parameter T, the average number of edges in a graph. The minimum support threshold
is set to 1%. The results are shown in Figure 7(d).
As the number of edges increases, the graph becomes more
complex. The cost of storing and searching the graph also
increases accordingly. As shown in the figure, both algorithms
and the construction of ADI are linearly scalable.
We also test the effects of other parameters. The experimental results show that both gSpan and ADI-Mine are not sensitive to I (the average number of edges in potentially frequent graph patterns) and L (the number of potentially frequent kernels). The construction time and space cost of
ADI structures are also stable. The reason is that the effects
of those two parameters on the distribution in the data
sets are minor. Similar observations have been reported by
previous studies on mining frequent itemsets and sequential
patterns. Limited by space, we omit the details here.
6.3
Mining Disk-based Databases
Now, we report the experimental results on mining large,
disk-based databases. In this set of experiments, we reserve
a block of main memory of fixed size for the ADI structure. When this block is too small for the ADI structure, some levels of the ADI structure are kept on disk. On the other hand, we do not confine the memory usage of gSpan.
6.3.1
Scalability on Database Size
We test the scalability of both gSpan and ADI-Mine on the
size of databases. We use data set D100k-1mN30I5T20L200.
The number of graphs in the database is varied from 100
thousand to 1 million. The main memory block for ADI
structure is limited to 250M. The results are shown in Figure
8(a). The construction time of ADI structure is also
plotted. Please note that the Y-axis is in logarithmic scale.
The construction runtime of ADI structure is approximately
linear on the database size. That is, the construction
of the ADI index is highly scalable. We also measure
the size of ADI structure. The results are shown in Figure
8(b). We can observe that the size of the ADI structure
is linear in the database size. In this experiment, the ratio (size of the ADI structure in megabytes) / (number of graphs in thousands) is about 0.6. When the
database size is 1 million, the size of ADI structure is 601M,
which exceeds the main memory size of our machine. Even
in such case, the construction runtime is still linear.
As explained before, the construction of ADI structure
makes sequential scans of the database and conducts a sequential
write of the adjacency information. The overhead
of construction of edge table and the linked lists of graph-ids
is relatively small and thus has a minor effect on the
construction time.
While gSpan can handle databases of only up to 300 thousand
graphs in this experiment, ADI-Mine can handle
databases of 1 million graphs. The curve of the runtime
of ADI-Mine can be divided into three stages.
First, when the database has up to 300 thousand graphs,
the ADI structure can be fully accommodated in main memory
. ADI-Mine is faster than gSpan.
Second, when the database has 300 to 600 thousand graphs,
gSpan cannot finish. The ADI structure cannot be fully held
in main memory. Some part of the adjacency information is
put on disk. We see a significant jump in the runtime curve
of ADI-Mine between the databases of 300 thousand graphs
and 400 thousand graphs.
Last, when the database has 800 thousand or more graphs,
even the linked lists of graph-ids cannot be fully put into
main memory. Thus, another significant jump in the runtime
curve can be observed.
6.3.2
Tradeoff Between Efficiency and Main Memory Consumption
It is interesting to examine the tradeoff between efficiency and the size of available main memory.
[Figure 7: The experimental results of mining main memory-based databases. Runtime (seconds) of gSpan, ADI-Mine and ADI construction. (a) Scalability on min_sup, data set D100kN30I5T20L200; (b) scalability on database size, D50-100kN30I5T20L200, min_sup = 1%; (c) scalability on N, D100kN20-50I5T20L200, min_sup = 1%; (d) scalability on T, D100kN30I5T10-30L200, min_sup = 1%.]
[Figure 8: The experimental results of mining large disk-based databases. (a) Scalability on database size, D100k-1mN30I5T20L200, min_sup = 1%, comparing gSpan, ADI-Mine and ADI construction; (b) size of the ADI structure versus database size, D100k-1mN30I5T20L200, min_sup = 1%; (c) runtime of ADI-Mine versus the size of available main memory, D100kN30I5T20L200, min_sup = 1%.]
We use data set D100kN30I5T20L200, set the minimum support threshold to 1%, vary the main memory limit for the ADI structure from 10M to 150M, and measure the runtime of ADI-Mine. The
results are shown in Figure 8(c). In this experiment, the
size of ADI structure is 57M. The construction time is 86
seconds. The highest watermark of main memory usage for gSpan in mining this data set is 87M. gSpan takes 1,161 seconds for the mining if it has sufficient main memory.
When the ADI structure can be completely loaded into
main memory (57M or larger), ADI-Mine runs fast. Further increases of the available main memory do not reduce the runtime.
When the ADI structure cannot be fully put into main
memory, the runtime increases. The more main memory,
the faster ADI-Mine runs.
When the available main memory is too small to even
hold the linked lists of graph-ids, the runtime of ADI-Mine
increases substantially. However, it can still finish the mining within 2 hours under a 10M main memory limit.
6.3.3
Number of Disk Block Reads
In addition to runtime, the efficiency of mining large disk-based
databases can also be measured by the number of disk
block read operations.
Figure 9(a) shows the number of disk block reads versus
the minimum support threshold. When the support threshold
is high (e.g., 9% or up), the number of frequent edges
is small. The ADI structure can be held into main memory
and thus the I/O cost is very low. As the support threshold
goes down, larger and larger part of the ADI structure is
stored on disk, and the I/O cost increases. This curve is
consistent with the trend in Figure 7(a).
Figure 9(b) shows the number of disk block reads versus
the number of graphs in the database. As the database size
goes up, the I/O cost increases exponentially. This explains
the curve of ADI-Mine in Figure 8(a).
We also test the I/O cost on available main memory. The
result is shown in Figure 9(c), which is consistent with the
trend of runtime curve in Figure 8(c).
6.3.4
Effects of Other Parameters
We also test the effects of the other parameters on the
efficiency. We observe similar trends as in mining memory-based
databases. Limited by space, we omit the details here.
6.4
Summary of Experimental Results
The extensive performance study clearly shows the following. First, both gSpan and ADI-Mine are scalable when the database can be held in main memory, and ADI-Mine is faster than gSpan. Second, ADI-Mine can mine very large graph databases by accommodating the ADI structure on disk; its performance on mining large disk-based databases is highly scalable. Third, the size of the ADI structure is linearly scalable with respect to the size of the database. Fourth, we can control the tradeoff between the mining efficiency and the main memory consumption. Last, ADI-Mine is more scalable than gSpan in mining complex graphs, i.e., graphs that have many different kinds of labels.
RELATED WORK
The problem of finding frequent common structures has
been studied since the early 1990s. For example, [1, 7] study the problem of finding common substructures from chemical
compounds. SUBDUE [4] proposes an approximate algorithm
to identify some, instead of the complete set of, frequent substructures.
[Figure 9: The number of disk blocks read in the mining. (a) Number of blocks read versus the support threshold, D100kN30I5T20L200; (b) number of blocks read versus the database size, D100k-1mN30I5T20L200, min_sup = 1%; (c) number of blocks read versus the size of available main memory, D100kN30I5T20L200, min_sup = 1%.]
However, these methods do not aim
at scalable algorithms for mining large graph databases.
The problem of mining the complete set of frequent graph patterns was first explored by Inokuchi et al. [5], who propose an Apriori-like algorithm, AGM. Kuramochi and Karypis [6] develop an efficient algorithm, FSG, for graph pattern mining. The major idea is to utilize an effective graph representation and to conduct edge-growth mining instead of vertex-growth mining. Both AGM and FSG adopt breadth-first search.
Recently, Yan and Han proposed the depth-first search approach gSpan [11] for graph mining. They also investigate the problem of mining frequent closed graphs [9], which are a non-redundant representation of frequent graph patterns. As a recent result, Yan et al. [10] use frequent graph patterns to index graphs.
As a special case of graph mining, tree mining has also received intensive research attention recently. Zaki [12] proposes the first algorithm for mining frequent tree patterns.
Although there are quite a few studies on the efficient mining of frequent graph patterns, none of them addresses the problem of an effective index structure for mining large disk-based graph databases. When the database is too large to fit into main memory, the mining becomes I/O bound, and an appropriate index structure becomes critical for scalability.
CONCLUSIONS
In this paper, we study the problem of scalable mining of large disk-based graph databases. The ADI structure, an effective index structure, is developed. Taking gSpan as a concrete example, we propose ADI-Mine, an efficient algorithm adopting the ADI structure, to improve the scalability of frequent graph mining substantially.
The ADI structure is a general index for graph mining. As future work, it is interesting to examine the effect of the index structure on improving other graph pattern mining methods, such as mining frequent closed graphs and mining graphs with constraints. Furthermore, devising index structures to support scalable data mining on large disk-based databases is an important and interesting research problem with extensive applications and industrial value.
Acknowledgements
We are very grateful to Mr. Xifeng Yan and Dr. Jiawei Han
for kindly providing us the executable of gSpan and answering
our questions promptly. We would like to thank the anonymous reviewers for their insightful comments, which helped to improve the quality of the paper.
REFERENCES
[1] D. M. Bayada, R. W. Simpson, and A. P. Johnson. An algorithm for the multiple common subgraph problem. J. of Chemical Information & Computer Sci., 32:680-685, 1992.
[2] C. Borgelt and M. R. Berthold. Mining molecular fragments: Finding relevant substructures of molecules. In Proc. 2002 Int. Conf. Data Mining (ICDM'02), Maebashi TERRSA, Maebashi City, Japan, Dec. 2002.
[3] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2002.
[4] L. B. Holder, D. J. Cook, and S. Djoko. Substructure discovery in the SUBDUE system. In Proc. AAAI'94 Workshop on Knowledge Discovery in Databases (KDD'94), pages 359-370, Seattle, WA, July 1994.
[5] A. Inokuchi, T. Washio, and H. Motoda. An apriori-based algorithm for mining frequent substructures from graph data. In Proc. 2000 European Symp. Principles of Data Mining and Knowledge Discovery (PKDD'00), pages 13-23, Lyon, France, Sept. 2000.
[6] M. Kuramochi and G. Karypis. Frequent subgraph discovery. In Proc. 2001 Int. Conf. Data Mining (ICDM'01), pages 313-320, San Jose, CA, Nov. 2001.
[7] Y. Takahashi, Y. Satoh, and S. Sasaki. Recognition of largest common fragment among a variety of chemical structures. Analytical Sciences, 3:23-38, 1987.
[8] N. Vanetik, E. Gudes, and S. E. Shimony. Computing frequent graph patterns from semistructured data. In Proc. 2002 Int. Conf. Data Mining (ICDM'02), Maebashi TERRSA, Maebashi City, Japan, Dec. 2002.
[9] X. Yan and J. Han. CloseGraph: Mining closed frequent graph patterns. In Proc. 9th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD'03), Washington, DC, 2003.
[10] X. Yan, P. S. Yu, and J. Han. Graph indexing: A frequent structure-based approach. In Proc. 2004 ACM SIGMOD Int. Conf. on Management of Data (SIGMOD'04), Paris, France, June 2004.
[11] X. Yan and J. Han. gSpan: Graph-based substructure pattern mining. In Proc. 2002 Int. Conf. on Data Mining (ICDM'02), Maebashi City, Japan, Dec. 2002.
[12] M. J. Zaki. Efficiently mining frequent trees in a forest. In Proc. 2002 Int. Conf. on Knowledge Discovery and Data Mining (KDD'02), Edmonton, Alberta, Canada, July 2002.
| index;Edge table;Graph mining;Subgraph mine;Frequent graph pattern mining;Adjacency list representation;graph database;DFS code;ADI Index structure;frequent graph pattern;Gspan algorithm;Disk bases databases;GRaph databases;Memory based databases |
174 | Secure Access to IP Multimedia Services Using Generic Bootstrapping Architecture (GBA) for 3G & Beyond Mobile Networks | The IP Multimedia Subsystem (IMS) defined by Third Generation Partnership Projects (3GPP and 3GPP2) is a technology designed to provide robust multimedia services across roaming boundaries and over diverse access technologies with promising features like quality-of-service (QoS), reliability and security. The IMS defines an overlay service architecture that merges the paradigms and technologies of the Internet with the cellular and fixed telecommunication worlds. Its architecture enables the efficient provision of an open set of potentially highly integrated multimedia services, combining web browsing, email, instant messaging, presence, VoIP, video conferencing, application sharing, telephony, unified messaging, multimedia content delivery, etc. on top of possibly different network technologies. As such IMS enables various business models for providing seamless business and consumer multimedia applications. In this communication converged world, the challenging issues are security, quality of service (QoS) and management & administration. In this paper our focus is to manage secure access to multimedia services and applications based on SIP and HTTP on top of IP Multimedia Subsystem (IMS). These services include presence, video conferencing, messaging, video broadcasting, and push to talk etc. We will utilize Generic Bootstrapping Architecture (GBA) model to authenticate multimedia applications before accessing these multimedia services offered by IMS operators. We will make enhancement in GBA model to access these services securely by introducing Authentication Proxy (AP) which is responsible to implement Transport Layer Security (TLS) for HTTP and SIP communication. This research work is part of Secure Service Provisioning (SSP) Framework for IP Multimedia System at Fokus Fraunhofer IMS 3Gb Testbed. | Introduction
With the emergence of mobile multimedia services, such as
unified messaging, click to dial, across-network multiparty conferencing and seamless multimedia streaming services, the convergence of networks (i.e. fixed-mobile convergence and voice-data integration) has started, leading to an overall Internet-Telecommunications convergence. In prospect of these global
trends, the mobile communications world has defined within the
evolution of cellular systems an All-IP Network vision which
integrates cellular networks and the Internet. This is the IP Multimedia Subsystem (IMS) [1], namely an overlay architecture for the provision of multimedia services, such as VoIP (Voice over Internet Protocol) and videoconferencing, on top of globally emerging 3G (Third Generation) broadband packet networks. The IMS, which is standardized by the Third Generation Partnership Projects (3GPP & 3GPP2) in Release 5, is an overlay network on top of GPRS/UMTS (General Packet Radio Service/Universal Mobile Telecommunications System) networks and has been extended by ETSI TISPAN [2] for fixed-line access networks within the Next Generation Network (NGN) architecture. The IMS provides an all-IP Service Delivery Platform (SDP) for mobile multimedia service provisioning, e.g. VoIP, video telephony, multimedia conferencing, mobile content, push-to-talk, etc., and it is based on IETF protocols such as SIP for session control, Diameter for AAA (Authentication, Authorization, and Accounting), SDP (Session Description Protocol), RTP, etc.
Different components and parts of the IMS are highlighted in Figure 1, consisting of the IMS core (P-CSCF, I-CSCF, S-CSCF), the IMS client (UE) and application & media servers, along with the concept of home network and visited network for roaming users on top of different access network technologies.
Security and data privacy are big challenges when different networks and technologies are integrated. The integration of different access technologies introduces many vulnerabilities, which attackers exploit to steal financial and confidential information. These attackers often operate beyond the reach of the law enforcement agencies of today's communication world, so the question arises of how to prevent such attacks on corporate networks. In order to provide confidentiality, security and privacy, the 3G authentication infrastructure is a valuable asset and a milestone development for 3G operators. This infrastructure consists of the authentication centre (AuC), the USIM (Universal Subscriber Identity Module) or ISIM (IP Multimedia Services Identity Module) and the AKA (Authentication and Key Agreement) procedure.
It has been recognized that this infrastructure can be utilized to provide application functions in the network and on the user side with shared keys. Therefore, the Third Generation Partnership Project (3GPP) has provided the bootstrapping of application security to authenticate the subscriber by defining a Generic Bootstrapping Architecture (GBA) [3] based on the Authentication and Key Agreement (AKA) protocol. The GBA model can be utilized to authenticate the subscriber before accessing multimedia services and applications over HTTP. The candidate applications for this bootstrapping mechanism include, but are not restricted to, subscriber certificate distribution. These certificates support services including presence, conferencing, messaging and push-to-talk provided by mobile operators. The GBA model has been enhanced by the Generic Authentication Architecture (GAA) [4] to provide secure access over HTTP using TLS (Transport Layer Security).
In view of these advances in telecommunications, Fraunhofer FOKUS has established a Third Generation & beyond (3Gb) Testbed and an IMS Testbed [5] for research & development and educational purposes, to provide state-of-the-art knowledge to engineers, researchers, educationists and technologists in this area of modern telecommunication. FOKUS has also developed a Secure Service Provisioning (SSP) Framework [6] for the IMS Testbed to provide security, privacy and authentication of subscribers as well as confidentiality and protection of the network resources of 3G operators.
The paper is organised as follows: Section 2 is about the IMS as a platform for multimedia services; Sections 3, 4 and 5 explain the generic bootstrapping architecture, the bootstrapping authentication procedure and its application usage procedure, respectively. Section 6 discusses the use of an authentication proxy for implementing TLS to secure multimedia services. In Section 7, we briefly discuss the IMS Testbed at FOKUS, and we then conclude the paper in the last section.
IMS - Platform for Next Generation Multimedia Services
The IMS defines a service provisioning architecture, and it can be considered the next generation service delivery platform. It has a modular design with open interfaces and provides the flexibility for delivering multimedia services over IP technology. The IMS does not standardize specific services but uses standard service enablers, e.g. presence, GLMS/XDMS, etc., and inherently supports multimedia over IP, VoIP, Internet multimedia and presence. In the IMS architecture, the SIP protocol is used as the standard signaling protocol that establishes, controls, modifies and terminates voice, video and messaging sessions between two or more participants. The related signaling servers in the architecture are referred to as Call State Control Functions (CSCFs) and are distinguished by their specific functionalities. It is important to note that an IMS-compliant end user system has to provide the necessary IMS protocol support, namely SIP, and the service-related media codecs for multimedia applications, in addition to basic connectivity support, e.g. GPRS, WLAN, etc. The IMS is designed to provide a number of key capabilities required to enable new IP services via mobile and fixed networks. The important key functionalities which enable new mobile IP services are:
- Multimedia session negotiation and management
- Quality of service management
- Mobility management
- Service execution, control and interaction
- Privacy and security management
Figure 1: IP Multimedia Subsystem (IMS) Architecture
In the IMS specification, the Application Server (AS) provides the service logic and the service creation environment for applications and services. The AS is intended to influence and maintain the various IMS SIP sessions on behalf of the services. It can behave as a termination point for signaling, redirecting or forwarding SIP requests, and it can also act as a third-party call control unit. Services in this instance refer to IMS services, which are based on the IMS reference points (e.g. instant messaging, presence, conferencing, etc.). The advantage of the application server is that it enables the IMS to operate in a more flexible and dynamic way, as the AS adds intelligence to the system. Most application servers are closed boxes which map network functions (e.g. via OSA gateways) or signaling protocols (SIP) onto application programming interfaces based on a particular technology (Java,
CORBA, web services). An alternative approach, pursued by the Open Mobile Alliance (OMA), is strongly related to the service-oriented methodology, which follows a top-down approach beginning with service design down to service mapping onto the underlying network technologies. The SIP services can be
developed and deployed on a SIP application server using several
technologies such as SIP servlets, Call Processing Language
(CPL) script, SIP Common Gateway Interface (CGI) and JAIN
APIs.
Generic Bootstrapping Architecture (GBA)
Different 3G multimedia services, including video conferencing, presence, push-to-talk, etc., can make use of the Generic Bootstrapping Architecture (GBA) to distribute subscriber certificates. These certificates are used by mobile operators to authenticate the subscriber before the multimedia services and applications are accessed. We now discuss the components, entities and interfaces of the GBA.
3.1 GBA Components and Entities
The GBA consists of five entities: the UE (User Equipment), the NAF (Network Authentication Function), the BSF (Bootstrapping Server Function), the HSS (Home Subscriber Server) and the Diameter Proxy (D-Proxy). They are explained below as specified in the 3GPP standards and shown in Figure 2.
User Equipment: The UE contains a UICC (Universal Integrated Circuit Card) holding USIM- or ISIM-related information, and it supports HTTP Digest AKA (Authentication & Key Agreement) and NAF (Network Authentication Function) specific protocols. A USIM (Universal Subscriber Identity Module) is an application for UMTS mobile telephony running on a UICC smartcard which is inserted in a 3G mobile phone. It stores subscriber information and authentication information, and provides storage space for text messages. An IP Multimedia Services Identity Module (ISIM) is an application running on a UICC smartcard in a 3G telephone in the IP Multimedia Subsystem (IMS). It contains parameters for identifying and authenticating the user to the IMS. The ISIM application can co-exist with SIM and USIM on the same UICC, making it possible to use the same smartcard in both GSM networks and earlier releases of UMTS.
Bootstrapping Server Function (BSF): The BSF is hosted in a network element under the control of the mobile network operator. The BSF,
HSS, and UEs participate in GBA in which a shared secret is
established between the network and a UE by running the
bootstrapping procedure. The shared secret can be used between
NAFs and UEs, for example, for authentication purposes. A
generic Bootstrapping Server Function (BSF) and the UE shall
mutually authenticate using the AKA protocol, and agree on
session keys that are afterwards applied between UE and a
Network Application Function (NAF). The BSF shall restrict the
applicability of the key material to a specific NAF by using the
key derivation procedure. The key derivation procedure may be
used with multiple NAFs during the lifetime of the key material.
The lifetime of the key material is set according to the local policy
of the BSF. The BSF shall be able to acquire the GBA User
security Settings (GUSS) from HSS [3].
Figure 2: Network Entities of GBA
Network Authentication Function: The NAF has the functionality to locate and communicate securely with the subscriber's BSF (Bootstrapping Server Function). It should be able to acquire the shared key material established between the UE and the BSF during the run of the application-specific protocol.
Home Subscriber Server: The HSS stores the GBA user security settings (GUSSs). The GUSS is defined in such a way that interworking of different operators for standardized application profiles is possible. It also supports operator-specific application profiles without requiring these profiles to be standardized. The GUSS shall be able to contain application-specific USSs that contain parameters relating to key selection indication, identification or authorization information of one or more applications hosted by one or more NAFs. No other types of parameters are allowed in the application-specific USS [3].
Diameter Proxy: In the case where the UE contacts a NAF in a visited network rather than in its home network, this visited NAF uses a Diameter proxy (D-Proxy) in the NAF's network to communicate with the subscriber's BSF (i.e. the home BSF). The D-Proxy's general functionality requirements include [3]:
- The D-Proxy functions as a proxy between the visited NAF and the subscriber's home BSF; it shall be able to locate the subscriber's home BSF and communicate with it over a secure channel.
- The D-Proxy shall be able to validate that the visited NAF is authorized to participate in GBA and shall be able to assert to the subscriber's home BSF the visited NAF's DNS name.
- The D-Proxy shall also be able to assert to the BSF that the visited NAF is authorized to request the GBA-specific user profiles contained in the NAF request.
Figure 3: Bootstrapping Authentication Procedure
3.2 GBA Reference Points
Ub: The reference point Ub is between the UE and the BSF and
provides mutual authentication between them. It allows the UE to
bootstrap the session keys based on 3GPP AKA infrastructure.
The HTTP Digest AKA protocol is used on the reference point
Ub. It is based on the 3GPP AKA [7] protocol.
Ua: The reference point Ua carries the application protocol, which
is secured using the key material agreed between the UE and the BSF as a result of running HTTP Digest AKA over reference point Ub.
For instance, in case of support for subscriber certificates, it is a
protocol, which allows the user to request certificates from NAF.
In this case, NAF would be PKI portal.
Zh: The reference point Zh is used between the BSF and the HSS. It allows the
BSF to fetch the required authentication information and all GBA
user security settings from HSS. The interface to 3G
Authentication Centre is HSS-internal, and it need not be
standardised as part of this architecture.
Zn: The reference point Zn is used by the NAF to fetch the key
material agreed during a previous HTTP Digest AKA protocol run
over the reference point Ub from the UE to the BSF. It is also
used to fetch application-specific user security settings from the
BSF, if requested by the NAF.
Bootstrapping Authentication Procedure
The UE and the Network Authentication Function (NAF) have to decide whether to use GBA before the start of communication between them. When the UE wants to interact with the NAF, it starts communication with the NAF over the Ua interface without GBA parameters. If the NAF requires the use of shared keys obtained by means of GBA, but the request from the UE does not include GBA-related parameters, the NAF replies with a bootstrapping initiation message [3]. When the UE wants to interact with the NAF and knows that the bootstrapping procedure is needed, it shall first perform a bootstrapping authentication, as shown in Figure 3. Otherwise, the UE shall perform a bootstrapping authentication only when it has received a bootstrapping initiation required message or a bootstrapping negotiation indication from the NAF, or when the lifetime of the key in the UE has expired. The UE sends an HTTP
request to the BSF and the BSF retrieves the complete set of GBA
user security settings and one Authentication Vector (AV) [8] as
given in equation 1 over the reference point Zh from the HSS.
AV = RAND || AUTN || XRES || CK || IK    (Eq. 1)
After that, the BSF forwards the RAND and AUTN to the UE in a 401 message, without the CK, IK and XRES. This demands that the UE authenticate itself. The UE checks the AUTN to verify that the challenge is from an authorized network; the UE also calculates CK, IK and RES [8]. This results in the session keys IK and CK being available in both the BSF and the UE. The UE then sends another HTTP request to the BSF, containing the Digest AKA response, which is calculated using RES.
The BSF authenticates the UE by verifying the Digest AKA response. The BSF generates the key material Ks by concatenating CK and IK, and it also generates the B-TID (Bootstrapping Transaction Identifier), which is used to bind the subscriber identity to the keying material on the reference points Ua, Ub and Zn. The BSF shall send a 200 OK message, including the B-TID, to the UE to indicate the success of the authentication, together with the lifetime of the key Ks. The key material Ks is generated in the UE by concatenating CK and IK. Both the UE and the BSF shall use Ks to derive the key material Ks-NAF, which will be used for securing the reference point Ua. Ks-NAF is computed as in Equation 2:
Ks-NAF = f_KD(Ks, "gba-me", RAND, IMPI, NAF-ID)    (Eq. 2)
where f_KD is the key derivation function, which is implemented in the ME, and the key derivation parameters consist of the user's IMPI, the NAF-ID and RAND. The NAF-ID consists of the full DNS name of the NAF concatenated with the Ua security protocol identifier. The UE and the BSF shall store the key Ks with the associated B-TID for further use, until the lifetime of Ks has expired or until the key Ks is updated [3].
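To make the key handling concrete, the following Python sketch shows Ks = CK || IK and the Ks-NAF derivation of Eq. 2. The KDF is shown as a plain HMAC-SHA-256 over a naive concatenation of the parameters; the normative KDF of TS 33.220 [3] uses a specific length-prefixed parameter encoding that is omitted here, and all byte and string values below are made up for the example.

# Simplified sketch of the UE/BSF key derivation after a successful Ub run.
import hashlib, hmac

def derive_ks(ck: bytes, ik: bytes) -> bytes:
    return ck + ik                              # Ks = CK || IK

def derive_ks_naf(ks: bytes, rand: bytes, impi: str, naf_id: bytes) -> bytes:
    # Ks-NAF = f_KD(Ks, "gba-me", RAND, IMPI, NAF-ID)  -- cf. Eq. 2
    data = b"gba-me" + rand + impi.encode() + naf_id
    return hmac.new(ks, data, hashlib.sha256).digest()

ck, ik, rand = b"\x01" * 16, b"\x02" * 16, b"\x03" * 16
impi = "user@ims.example.com"                   # placeholder private identity
# NAF-ID = FQDN of the NAF || Ua security protocol identifier (values made up)
naf_id = b"ap.example.com" + bytes.fromhex("0100000001")
ks = derive_ks(ck, ik)
ks_naf = derive_ks_naf(ks, rand, impi, naf_id)
print(ks_naf.hex())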
Bootstrapping Usage Procedure
Before communication between the UE and the NAF can start, the
UE and the NAF first have to agree whether to use shared keys
obtained by means of the GBA. If the UE does not know whether
to use GBA with this NAF, it uses the initiation of bootstrapping
procedure. Once the UE and the NAF have decided that they want
to use GBA, the following applies every time the UE wants to interact with the NAF. The UE starts communication over reference point Ua with the NAF by supplying the B-TID to the NAF, to allow the NAF to
retrieve the corresponding keys from the BSF. The NAF then starts communication over reference point Zn with the BSF and requests the key material corresponding to the B-TID supplied by the UE over reference point Ua. With the key material request, the NAF shall supply to the BSF the NAF's public hostname that the UE has used to access the NAF, and the BSF shall be able to verify that the NAF is authorized to use that hostname. The NAF may also request one or more application-specific USSs for the applications which the request received from the UE over Ua may access.
The BSF derives the keys required to protect the protocol used over reference point Ua from the key Ks and the key derivation parameters. Then it supplies the requested key Ks-NAF, the bootstrapping time and the lifetime of the key to the NAF. If the key identified by the B-TID supplied by the NAF is not available at the BSF, the BSF shall indicate this in its reply to the NAF, and the NAF then indicates a bootstrapping renegotiation request to the UE. The BSF may also send the private user identity (IMPI) and the requested USSs to the NAF according to the BSF's policy. The NAF then continues with the protocol used over reference point Ua with the UE. Once the run of the protocol used over reference point Ua is completed, the purpose of bootstrapping is fulfilled, as it has enabled the UE and the NAF to use reference point Ua in a secure way.
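A toy end-to-end simulation of this usage procedure in Python, with the Ua and Zn reference points replaced by direct function calls and the BSF modeled as a dictionary keyed by B-TID; the stored fields, the B-TID string and the simplified KDF are illustrative assumptions carried over from the previous sketch.

# Toy simulation of the Ua/Zn usage procedure (names and fields illustrative).
import hashlib, hmac, time

class BSF:
    def __init__(self):
        self.sessions = {}                 # B-TID -> (Ks, RAND, IMPI, expiry)

    def store(self, b_tid, ks, rand, impi, lifetime):
        self.sessions[b_tid] = (ks, rand, impi, time.time() + lifetime)

    def zn_request(self, b_tid, naf_hostname, ua_proto_id):
        if b_tid not in self.sessions:
            return None                    # NAF would trigger renegotiation
        ks, rand, impi, expiry = self.sessions[b_tid]
        naf_id = naf_hostname.encode() + ua_proto_id
        ks_naf = hmac.new(ks, b"gba-me" + rand + impi.encode() + naf_id,
                          hashlib.sha256).digest()  # same simplified KDF as above
        return ks_naf, expiry

bsf = BSF()
b_tid, ks, rand = "base64(RAND)@bsf.example.com", b"\x01" * 32, b"\x03" * 16
bsf.store(b_tid, ks, rand, "user@ims.example.com", lifetime=3600)

# Ua: the UE sends its B-TID to the NAF; Zn: the NAF fetches the key material.
answer = bsf.zn_request(b_tid, "ap.example.com", bytes.fromhex("0100000001"))
print(answer is not None and len(answer[0]) == 32)  # True: shared Ks-NAF available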
Figure 4: Bootstrapping Application
Authentication Proxy Usage for Multimedia Services
The Authentication Proxy (AP) acts like a Network Authentication Function (NAF) and performs the function of an HTTP proxy for the UE. It is responsible for handling Transport Layer Security (TLS) and implementing the secure HTTP channel between the AP and the UE, as shown in Figure 5. It utilizes the generic bootstrapping architecture to assure the application servers (ASs) that the request is coming from an authorized subscriber of the mobile network operator. When an HTTPS request is sent to an AS through the AP, the AP performs UE authentication. The AP may insert the user identity when it forwards the request to the application server. Figure 5(b) presents the architectural view of using the AP for different IMS SIP services, e.g. presence, messaging, conferencing, etc.
Figure 5: Authentication Proxy
The UE shall manipulate its own data, such as groups, through the Ua/Ut reference point [4]. The reference point Ut is applicable to data manipulation for IMS-based SIP services, such as the presence, messaging and conferencing services. When the HTTPS client starts communication via the Ua reference point with the NAF, it shall establish a TLS tunnel with the NAF. The NAF is authenticated to the HTTPS client by means of a public key certificate. The HTTPS client will verify that the server certificate corresponds to the FQDN (Fully Qualified Domain Name) of the AP it established the tunnel with. We explain the procedure briefly as follows.
The HTTPS client sends an HTTP request to the NAF inside the TLS tunnel. In response to the HTTP request over the Ua interface, the AP invokes HTTP Digest with the HTTPS client in order to perform client authentication using the shared keys. On receipt of the HTTP Digest challenge from the AP, the client verifies that the FQDN corresponds to the AP it established the TLS connection with; if not, the client terminates the TLS connection with the AP. In this way the UE and the AP are mutually authenticated as the TLS tunnel endpoints.
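Inside the TLS tunnel, the client authentication step is plain HTTP Digest (RFC 2617); in the GAA arrangement (cf. [4]) the B-TID is typically used as the username and the (base64-encoded) Ks-NAF as the password. The Python sketch below shows the response computation the UE performs and the identical recomputation the AP does with the key fetched over Zn; nonce handling and qop negotiation are simplified, and the string values are placeholders.

# HTTP Digest (qop=auth) response calculation, as used between UE and AP.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(username, password, realm, method, uri, nonce, cnonce, nc, qop="auth"):
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")

# UE side: it knows Ks-NAF from the bootstrapping run (placeholder values here).
b_tid, ks_naf_b64 = "base64(RAND)@bsf.example.com", "q83vEjRWeJA-placeholder"
resp = digest_response(b_tid, ks_naf_b64, "ap.example.com", "GET",
                       "/presence/settings", nonce="abc123", cnonce="xyz", nc="00000001")

# AP side: recompute with the Ks-NAF fetched from the BSF over Zn and compare.
expected = digest_response(b_tid, ks_naf_b64, "ap.example.com", "GET",
                           "/presence/settings", nonce="abc123", cnonce="xyz", nc="00000001")
print(resp == expected)   # True -> the request is forwarded to the application server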
Now we discuss an example in which an application residing on the UICC (Universal Integrated Circuit Card) uses TLS over HTTP within the Generic Authentication Architecture (GAA) mechanism to secure its communication with the Authentication Proxy (AP). The GBA security association between a UICC-based application and the AP can be established as follows.
Figure 6: HTTPS and BIP (Bearer Independent Protocol)
Procedures
The ME (Mobile Equipment) executes the bootstrapping procedure with the BSF over the Ub reference point. The UICC, which hosts the HTTPS client, runs the bootstrapping usage procedure with the AP over the Ua reference point [9]. Figure 6 shows the use of BIP (Bearer Independent Protocol) to establish the HTTPS connection between the UICC and the AP. When the UICC opens a channel with the AP as described in [10], an active TCP/IP connection is established between the UICC and the AP.
Fokus IMS Testbed
The current challenges within the telecommunications market are mainly consequences of insufficient early access to new enabling technologies by all market players. In this context the Fraunhofer Institute FOKUS, known as a leading research institute in the field of open communication systems, has established, with the support of the German Ministry of Education and Research (BMBF), a 3G-and-beyond Testbed known as the "National Host for 3Gb Applications". This Testbed provides technologies and related know-how in the field of fixed and wireless next generation network technologies and related service delivery
platforms. As a part of the 3Gb Testbed, the FOKUS Open IMS
Playground is deployed as an open technology test field with the
target to validate existing and emerging IMS standards and to
extend the IMS appropriately to be used on top of new access
networks as well as to provide new seamless multimedia
applications [11]. All major IMS core components, i.e., x-CSCF,
HSS, MG, MRF, Application Servers, Application Server
Simulators, service creation toolkits, and demo applications are
integrated into one single environment and can be used and
extended for R&D activities by academic and industrial partners.
All these components can be used locally on top of all available
access technologies or can be used over IP tunnels remotely.
Users of the "Open IMS Playground" can test their components by performing interoperability tests. The SIP Express Router (SER), one of the fastest existing SIP proxies, can be used as a reference implementation and to prove interoperability with other SIP components [11]. A major focal point of the IMS Playground is the application server side, where a variety of platforms enables rapid development of innovative services.
Figure 7: Fokus IMS Testbed
The Open IMS Playground is deployed as an open technology test field with the aim of developing, prototyping and validating existing and emerging NGN/IMS standard components. It extends the IMS architecture and protocols appropriately to be used on top of new access networks as well as to provide new seamless multimedia applications. It is important to stress that all components have been developed by FOKUS as reference implementations, such as its own open source IMS core system (to be publicly released in 2006, based on the well-known SIP Express Router), IMS clients and application servers (SIPSee), and an HSS. The IMS Playground is used, on the one hand, as the technology basis for industry projects performed for national and international vendors and network operators, as well as for more mid-term academic R&D projects in the European IST context. In addition, the Playground is used by others as well, i.e. FOKUS provides consultancy and support services around the IMS Playground. Users of the "Open IMS Playground", e.g. vendors, test their components by performing interoperability and benchmarking tests. Application developers develop new IMS applications based on the various programming platforms provided, i.e. IN/CAMEL, OSA/Parlay, JAIN, SIP Servlets, etc., and gain a proof-of-concept implementation. The different platform options, each with their strengths and weaknesses, can be selected and used according to the customers' needs. Figure 7 displays the Open IMS Playground partner components.
Conclusion
In this paper, we have presented an architecture for the secure access and authentication of IP multimedia services based on SIP and HTTP communication, using the GBA (Generic Bootstrapping Architecture) as recommended by 3GPP and TISPAN, as a part of the Secure Service Provisioning (SSP) Framework for IMS at the Fraunhofer FOKUS IMS and 3Gb Testbeds.
REFERENCES
[1] Third Generation Partnership Project; Technical
Specification Group Services and System Aspects; TS
23.228 IP Multimedia Subsystem (IMS), Stage 2 / 3GPP2
X.S0013-002-0 v1.0, www.3gpp.org.
[2] ETSI TISPAN (Telecommunications and Internet converged
Services and Protocols for Advanced Networking) WG
http://portal.etsi.org/tispan/TISPAN_ToR.asp.
[3] Third Generation Partnership Project; Technical
Specification Group Services and System Aspects; Generic
Authentication Architecture (GAA); Generic Bootstrapping
Architecture (GBA) (Release 7), 3GPP TS 33.220 V7
(2005).
[4] Third Generation Partnership Project; Technical
Specification Group Services and System Aspects; Generic
Authentication Architecture (GAA); Access to Network
Application Functions using Hypertext Transfer Protocol
over Transport Layer Security (HTTPS) (Release 7), 3GPP
TS 33.222 V7 (2005).
[5] Third Generation & Beyond (3Gb) Testbed,
www.fokus.fraunhofer.de/national_host &
IP Multimedia System (IMS) Playground
www.fokus.fraunhofer.de/ims.
[6] M. Sher, T. Magedanz, "Secure Service Provisioning
Framework (SSPF) for IP Multimedia System and Next
Generation Mobile Networks" 3rd International Workshop in
Wireless Security Technologies, London, U.K. (April 2005),
IWWST'05 Proceeding (101-106), ISSN 1746-904X.
[7] Third Generation Partnership Project; Technical
Specification Group Services and System Aspects; 3G
Security; Security Architecture (Release 6); 3GPP, TS
33.102 V6 (2004).
[8] M. Sher, T. Magedanz: "Network Access Security
Management (NASM) Model for Next Generation Mobile
Telecommunication Networks", IEEE/IFIP MATA'2005, 2nd International Workshop on Mobility Aware Technologies and Applications - Service Delivery Platforms for Next Generation Networks, Montreal, Canada, October 17-19, 2005, Proceedings, Springer-Verlag LNCS 3744, Berlin Heidelberg 2005, pp. 263-272.
http://www.congresbcu.com/mata2005
[9] Third Generation Partnership Project; Technical
Specification Group Services and System Aspects; Generic
Authentication Architecture (GAA); Early Implementation of
HTTPS Connection between a Universal Integrated Circuit
Card (UICC) and Network Application Function (NAF)
(Release 7), 3GPP TR 33.918 V7 (2005).
[10] Third Generation Partnership Project; Technical
Specification Group Core Network and Terminals; Universal
Subscriber Identity Module (USIM) Application Toolkit
(USAT) (Release 7), 3GPP TS 31.111 V7 (2005).
[11] K. Knüttel, T. Magedanz, D. Witaszek: "The IMS Playground @ Fokus - an Open Testbed for Next Generation Network Multimedia Services", 1st Int. IFIP Conference on Testbeds and Research Infrastructures for the Development of Networks and Communities (Tridentcom), Trento, Italy, February 23-25, 2005, Proceedings pp. 2-11, ISBN 0-7695-2219-x, IEEE Computer Society Press, Los Alamitos, California.
Acronyms
3GPP - Third Generation Partnership Project
3GPP2 - Third Generation Partnership Project 2
AAA - Authentication, Authorisation, and Accounting
AKA - Authentication and Key Agreement
AP - Authentication Proxy
AS - Application Server
AuC - Authentication Centre
AV - Authentication Vector
BSF - Bootstrapping Server Function
B-TID - Bootstrapping Transaction Identifier
CAMEL - Customized Applications for Mobile Enhanced Logic
CGI - Common Gateway Interface
CK - Cipher Key
CORBA - Common Object Request Broker Architecture
CPL - Call Processing Language
CSCFs - Call State Control Functions
DNS - Domain Name Server
FMC - Fixed Mobile Convergence
FQDN - Fully Qualified Domain Name
GAA - Generic Authentication Architecture
GBA - Generic Bootstrapping Architecture
GPRS - General Packet Radio System
GUSS - GBA User Security Settings
HSS - Home Subscriber Server
HTTP - Hyper Text Transfer Protocol
HTTPS - HTTP Secure (HTTP over TLS)
I-CSCF - Interrogating Call State Control Function
IETF - Internet Engineering Task Force
IK - Integrity Key
IM - IP Multimedia
IMPI - IP Multimedia Private Identity
IMS - IP Multimedia Subsystem
IN - Intelligent Network
IP - Internet Protocol
ISIM - IM Services Identity Module
Ks - Session Key
ME - Mobile Equipment
MG - Media Gateway
MRF - Media Resource Function
NAF - Network Authentication Function
NGN - Next Generation Network
OMA - Open Mobile Alliance
OSA - Open Service Access
P-CSCF - Proxy Call State Control Function
PDP - Packet Data Protocol
PoC - PTT over Cellular
PTT - Push To Talk
QoS - Quality of Service
RES - Response
RTP - Real-time Transport Protocol
S-CSCF - Serving Call State Control Function
SDP - Service Delivery Platform
SER - SIP Express Router
SIP - Session Initiation Protocol
SSP - Secure Service Provisioning
TCP - Transmission Control Protocol
TISPAN - Telecoms & Internet converged Services & Protocols for Advanced Networks
TLS - Transport Layer Security
UE - User Equipment
UICC - Universal Integrated Circuit Card
UMTS - Universal Mobile Telecommunication Standard
USIM - Universal Subscriber Identity Module
WLAN - Wireless Local Area Network
| TLS Tunnel end points;Generic Authentication Architecture;GLMS/XDMS;General bootstrapping architecture;Transport Layer Security;Network authentication function;Signalling protocols;Generic Bootstrapping Architecture;Authentication Proxy;GBA;Diameter proxy;Transport layer security;IP Multimedia System;Authentication proxy;TLS;IP multimedia subsystem;IMS platform;Fokus IMS Testbed;NAF;AP;Security and Privacy |
175 | Secure Hierarchical In-Network Aggregation in Sensor Networks | In-network aggregation is an essential primitive for performing queries on sensor network data. However, most aggregation algorithms assume that all intermediate nodes are trusted. In contrast, the standard threat model in sensor network security assumes that an attacker may control a fraction of the nodes, which may misbehave in an arbitrary (Byzantine) manner. We present the first algorithm for provably secure hierarchical in-network data aggregation. Our algorithm is guaranteed to detect any manipulation of the aggregate by the adversary beyond what is achievable through direct injection of data values at compromised nodes. In other words, the adversary can never gain any advantage from misrepresenting intermediate aggregation computations. Our algorithm incurs only O(log2n) node congestion, supports arbitrary tree-based aggregator topologies and retains its resistance against aggregation manipulation in the presence of arbitrary numbers of malicious nodes. The main algorithm is based on performing the SUM aggregation securely by first forcing the adversary to commit to its choice of intermediate aggregation results, and then having the sensor nodes independently verify that their contributions to the aggregate are correctly incorporated. We show how to reduce secure MEDIAN , COUNT , and AVERAGE to this primitive. | INTRODUCTION
Wireless sensor networks are increasingly deployed in security-critical
applications such as factory monitoring, environmental monitoring
, burglar alarms and fire alarms. The sensor nodes for these
applications are typically deployed in unsecured locations and are
not made tamper-proof due to cost considerations. Hence, an adversary
could undetectably take control of one or more sensor nodes
and launch active attacks to subvert correct network operations.
Such environments pose a particularly challenging set of constraints
for the protocol designer: sensor network protocols must be highly
energy efficient while being able to function securely in the presence
of possible malicious nodes within the network.
In this paper we focus on the particular problem of securely and efficiently performing aggregate queries (such as MEDIAN, SUM and AVERAGE) on sensor networks. In-network data aggregation is an efficient primitive for reducing the total message complexity of aggregate sensor queries. For example, in-network aggregation of the SUM function is performed by having each intermediate node forward a single message containing the sum of the sensor readings of all the nodes downstream from it, rather than forwarding each downstream message one-by-one to the base station. The energy savings of performing in-network aggregation have been shown to be significant and are crucial for energy-constrained sensor networks [9, 11, 20].
Unfortunately, most in-network aggregation schemes assume that
all sensor nodes are trusted [12, 20]. An adversary controlling just
a few aggregator nodes could potentially cause the sensor network
to return arbitrary results, thus completely subverting the function
of the network to the adversary's own purposes.
Despite the importance of the problem and a significant amount
of work on the area, the known approaches to secure aggregation
either require strong assumptions about network topology or adversary
capabilities, or are only able to provide limited probabilistic
security properties. For example, Hu and Evans [8] propose
a secure aggregation scheme under the assumption that at most a
single node is malicious. Przydatek et al. [17] propose Secure Information
Aggregation (SIA), which provides a statistical security
property under the assumption of a single-aggregator model. In the
single-aggregator model, sensor nodes send their data to a single
aggregator node, which computes the aggregate and sends it to the
base station. This form of aggregation reduces communications
only on the link between the aggregator and the base station, and is
not scalable to large multihop sensor deployments. Most of the algorithms in SIA (in particular, MEDIAN, SUM and AVERAGE) cannot be directly adapted to a hierarchical aggregation model since they involve sorting all of the input values; the final aggregator in the hierarchy thus needs to access all the data values of the sensor nodes.
In this paper, we present the first provably secure sensor network data aggregation protocol for general networks and multiple adversarial nodes. The algorithm limits the adversary's ability to manipulate the aggregation result with the tightest bound possible for general algorithms with no knowledge of the distribution of sensor data values. Specifically, an adversary can gain no additional influence over the final result by manipulating the results of the in-network aggregate computation as opposed to simply reporting false data readings for the compromised nodes under its control. Furthermore, unlike prior schemes, our algorithm is designed for general hierarchical aggregator topologies and multiple malicious sensor nodes. Our metric for communication cost is congestion, which is the maximum communication load on any node in the network. Let n be the number of nodes in the network, and Δ be the maximum degree of any node in the aggregation tree. Our algorithm induces only O(Δ log² n) node congestion in the aggregation tree.
RELATED WORK
Researchers have investigated resilient aggregation algorithms to
provide increased likelihood of accurate results in environments
prone to message loss or node failures. This class of algorithms
includes work by Gupta et al. [7], Nath et al. [15], Chen et al. [3]
and Manjhi et al. [14].
A number of aggregation algorithms have been proposed to ensure
secrecy of the data against intermediate aggregators. Such algorithms
have been proposed by Girao et al. [5], Castelluccia et
al. [2], and Cam et al. [1].
Hu and Evans [8] propose securing in-network aggregation against
a single Byzantine adversary by requiring aggregator nodes to forward
their inputs to their parent nodes in the aggregation tree. Jadia
and Mathuria [10] extend the Hu and Evans approach by incorporating
privacy, but also considered only a single malicious node.
Several secure aggregation algorithms have been proposed for
the single-aggregator model. Przydatek et al. [17] proposed Secure
Information Aggregation (SIA) for this topology. Also for the
single-aggregator case, Du et al. [4] propose using multiple witness
nodes as additional aggregators to verify the integrity of the
aggregator's result. Mahimkar and Rappaport [13] also propose
an aggregation-verification scheme for the single-aggregator model
using a threshold signature scheme to ensure that at least t of the
nodes agree with the aggregation result. Yang et al. [19] describe
a probabilistic aggregation algorithm which subdivides an aggregation
tree into subtrees, each of which reports their aggregates
directly to the base station. Outliers among the subtrees are then
probed for inconsistencies.
Wagner [18] addressed the issue of measuring and bounding malicious
nodes' contribution to the final aggregation result. The paper
measures how much damage an attacker can inflict by taking
control of a number of nodes and using them solely to inject erroneous
data values.
PROBLEM MODEL
In general, the goal of secure aggregation is to compute aggregate functions (such as SUM, COUNT or AVERAGE) of the sensed data values residing on sensor nodes, while assuming that a portion of the sensor nodes are controlled by an adversary which is attempting to skew the final result. In this section, we present the formal parameters of the problem.
3.1
Network Assumptions
We assume a general multihop network with a set S = {s_1, ..., s_n} of n sensor nodes and a single (untrusted) base station R, which is able to communicate with the querier which resides outside of the network. The querier knows the total number of sensor nodes n, and that all n nodes are alive and reachable.
We assume the aggregation is performed over an aggregation
tree which is the directed tree formed by the union of all the paths
from the sensor nodes to the base station (one such tree is shown
in Figure 1(a)). These paths may be arbitrarily chosen and are not
necessarily shortest paths. The optimisation of the aggregation tree
structure is out of the scope of this paper--our algorithm takes the
structure of the aggregation tree as given. One method for constructing
an aggregation tree is described in TaG [11].
3.2
Security Infrastructure
We assume that each sensor node has a unique identifier s and shares a unique secret symmetric key K_s with the querier. We further assume the existence of a broadcast authentication primitive where any node can authenticate a message from the querier. This broadcast authentication could, for example, be performed using µTESLA [16]. We assume the sensor nodes have the ability to perform symmetric-key encryption and decryption as well as computations of a collision-resistant cryptographic hash function H.
3.3
Attacker Model
We assume that the attacker is in complete control of an arbitrary
number of sensor nodes, including knowledge of all their secret
keys. The attacker has a network-wide presence and can record and
inject messages at will. The sole goal of the attacker is to launch
what Przydatek et al. [17] call a stealthy attack, i.e., to cause the
querier to accept a false aggregate that is higher or lower than the
true aggregate value.
We do not consider denial-of-service (DoS) attacks where the
goal of the adversary is to prevent the querier from getting any
aggregation result at all. While such attacks can disrupt the normal
operation of the sensor network, they are not as potentially
hazardous in security-critical applications as the ability to cause
the operator of the network to accept arbitrary data. Furthermore,
any maliciously induced extended loss of service is a detectable
anomaly which will (eventually) expose the adversary's presence
if subsequent protocols or manual intervention do not succeed in
resolving the problem.
3.4
Problem Definition and Metrics
Each sensor node s_i has a data value a_i. We assume that the data value is a non-negative bounded real value a_i ∈ [0, r] for some maximum allowed data value r. The objective of the aggregation process is to compute some function f over all the data values, i.e., f(a_1, ..., a_n). Note that for the SUM aggregate, the case where data values are in a range [r_1, r_2] (where r_1, r_2 can be negative) is reducible to this case by setting r = r_2 - r_1 and adding nr_1 to the aggregation result.
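As a small worked illustration (the numbers here are ours, not from the paper): readings in the range [-20, 40] give r = 40 - (-20) = 60; a node holding a_i = 25 reports 25 - (-20) = 45, which lies in [0, 60], and the querier adds nr_1 = -20n back to the computed SUM to recover the aggregate over the original range.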
Definition 1 A direct data injection attack occurs when an attacker
modifies the data readings reported by the nodes under its direct
control, under the constraint that only legal readings in
[0,r] are
reported.
Wagner [18] performed a quantitative study measuring the effect of direct data injection on various aggregates, and concludes that the aggregates addressed in this paper (truncated SUM and AVERAGE, COUNT and QUANTILE) can be resilient under such attacks.
Without domain knowledge about what constitutes an anomalous sensor reading, it is impossible to detect a direct data injection attack, since the injected values are indistinguishable from legitimate sensor readings [17, 19]. Hence, if a secure aggregation scheme does not make
assumptions on the distribution of data values, it cannot limit the
adversary's capability to perform direct data injection. We can thus
define an optimal level of aggregation security as follows.
Definition 2 An aggregation algorithm is optimally secure if, by
tampering with the aggregation process, an adversary is unable to
induce the querier to accept any aggregation result which is not
already achievable by direct data injection.
As a metric for communication overhead, we consider node congestion
, which is the worst case communication load on any single
sensor node during the algorithm. Congestion is a commonly
used metric in ad-hoc networks since it measures how quickly the
heaviest-loaded nodes will exhaust their batteries [6, 12]. Since the
heaviest-loaded nodes are typically the nodes which are most essential
to the connectivity of the network (e.g., the nodes closest to
the base station), their failure may cause the network to partition
even though other sensor nodes in the network may still have high
battery levels. A lower communication load on the heaviest-loaded
nodes is thus desirable even if the trade-off is a larger amount of
communication in the network as a whole.
For a lower bound on congestion, consider an unsecured aggregation protocol where each node sends just a single message to its parent in the aggregation tree. This is the minimum number of messages that ensures that each sensor node contributes to the aggregation result. There is Θ(1) congestion on each edge of the aggregation tree, thus resulting in Θ(d) congestion on the node(s) with highest degree d in the aggregation tree. The parameter d is dependent on the shape of the given aggregation tree and can be as large as Θ(n) for a single-aggregator topology or as small as Θ(1) for a balanced aggregation tree. Since we are taking the aggregation tree topology as an input, we have no control over d. Hence, it is often more informative to consider per-edge congestion, which can be independent of the structure of the aggregation tree.
Consider the simplest solution where we omit aggregation altogether and simply send all data values (encrypted and authenticated) directly to the base station, which then forwards it to the querier. This provides perfect data integrity, but induces O(n) congestion at the nodes and edges nearest the base station. For an algorithm to be practical, it must cause only sublinear edge congestion. Our goal is to design an optimally secure aggregation algorithm with only sublinear edge congestion.
THE SUM ALGORITHM
In this section we describe our algorithm for the SUM aggregate, where the aggregation function f is addition. Specifically, we wish to compute a_1 + ... + a_n, where a_i is the data value at node i. We defer analysis of the algorithm properties to Section 5, and discuss the application of the algorithm to other aggregates such as COUNT, AVERAGE and MEDIAN in Section 6.
We build on the aggregate-commit-prove framework described
by Przydatek et al. [17] but extend their single aggregator model
to a fully distributed setting. Our algorithm involves computing a
cryptographic commitment structure (similar to a hash tree) over
the data values of the sensor nodes as well as the aggregation process. This forces the adversary to choose a fixed aggregation topology and set of aggregation results. The individual sensor nodes then independently audit the commitment structure to verify that their respective contributions have been added to the aggregate. If the adversary attempts to discard or reduce the contribution of a legitimate sensor node, this necessarily induces an inconsistency in the commitment structure which can be detected by the affected node. This basic approach provides us with a lower bound for the SUM aggregate. To provide an upper bound for SUM, we can re-use the same lower-bounding approach, but on a complementary aggregate called the COMPLEMENT aggregate. Where SUM is defined as Σ a_i, COMPLEMENT is defined as Σ (r - a_i), where r is the upper bound on allowable data values. When the final aggregates are computed, the querier enforces the constraint that SUM + COMPLEMENT = nr. Hence any adversary that wishes to increase SUM must also decrease COMPLEMENT, and vice-versa; otherwise the discrepancy will be detected. Hence, by enforcing a lower bound on COMPLEMENT, we are also enforcing an upper bound on SUM.
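As a small worked illustration (numbers are ours): with n = 4, r = 10 and readings 3, 5, 2, 7, the committed aggregates are SUM = 3 + 5 + 2 + 7 = 17 and COMPLEMENT = 7 + 5 + 8 + 3 = 23, so SUM + COMPLEMENT = 40 = nr. An adversary that inflates SUM to 25 without also reducing COMPLEMENT produces 25 + 23 = 48 ≠ 40 and is rejected by the querier.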
The overall algorithm has three main phases: query dissemination
, aggregation-commit, and result-checking.
Query dissemination. The base station broadcasts the query to
the network. An aggregation tree, or a directed spanning tree over
the network topology with the base station at the root, is formed as
the query is sent to all the nodes, if one is not already present in the
network.
Aggregation commit. In this phase, the sensor nodes iteratively
construct a commitment structure resembling a hash tree. First, the
leaf nodes in the aggregation tree send their data values to their parents
in the aggregation tree. Each internal sensor node in the aggregation
tree performs an aggregation operation whenever it has
heard from all its child sensor nodes. Whenever a sensor node s
performs an aggregation operation, s creates a commitment to the
set of inputs used to compute the aggregate by computing a hash
over all the inputs (including the commitments that were computed
by the children of s). Both the aggregation result and the commitment
are then passed on to the parent of s. After the final commitment
values are reported to the base station (and thus also to the
querier), the adversary cannot subsequently claim a different aggregation
structure or result. We describe an optimisation to ensure
that the constructed commitment trees are perfectly balanced, thus
requiring low congestion overhead in the next phase.
Result-checking. The result-checking phase is a novel distributed
verification process. In prior work, algorithms have relied on the
querier to issue probes into the commitment structure to verify its
integrity [17, 19]. This induces congestion nearest the base station,
and moreover, such algorithms yield at best probabilistic security
properties. We show that if the verification step is instead fully distributed
, it is possible to achieve provably optimal security while
maintaining sublinear edge congestion.
The result-checking phase proceeds as follows. Once the querier
has received the final commitment values, it disseminates them to
the rest of the network in an authenticated broadcast. At the same
time, sensor nodes disseminate information that will allow their
peers to verify that their respective data values have been incorporated
into the aggregate. Each sensor node is responsible for
checking that its own contribution was added into the aggregate.
If a sensor node determines that its data value was indeed added
towards the final sum, it sends an authentication code up the aggregation
tree towards the base station. Authentication codes are aggregated
along the way with the XOR function for communication
efficiency. When the querier has received the XOR of all the authentication
codes, it can then verify that all the sensor nodes have
confirmed that the aggregation structure is consistent with their data
values. If so, then it accepts the aggregation result.
We now describe the details of each of the three phases in turn.
Figure 1: Aggregation and naive commitment tree in network context.
(a) Example network graph. Arrows: aggregation tree. R: base station. Q: querier.
(b) Naive commitment tree, showing derivations of some of the vertices. For each sensor node X, X_0 is its leaf vertex, while X_1 is the internal vertex representing the aggregate computation at X (if any). The labels of the vertices on the path from node G to the root are:
G_0 = <1, a_G, r - a_G, G>
F_1 = <2, v_{F_1}, v̄_{F_1}, H[N||2||v_{F_1}||v̄_{F_1}||F_0||G_0]>
C_1 = <4, v_{C_1}, v̄_{C_1}, H[N||4||v_{C_1}||v̄_{C_1}||C_0||E_0||F_1]>
A_1 = <9, v_{A_1}, v̄_{A_1}, H[N||9||v_{A_1}||v̄_{A_1}||A_0||B_1||C_1||D_0]>
R = <12, v_R, v̄_R, H[N||12||v_R||v̄_R||H_0||A_1||I_0]>
4.1
Query Dissemination
First, an aggregation tree is established if one is not already
present. Various algorithms for selecting the structure of an aggregation
tree may be used. For completeness, we describe one
such process, while noting that our algorithm is directly applicable
to any aggregation tree structure. The Tiny Aggregation Service
(TaG) [11] uses a broadcast from the base station where each node
chooses as its parent in the aggregation tree, the node from which
it first heard the tree-formation message.
To initiate a query in the aggregation tree, the base station originates
a query request message which is distributed following the
aggregation tree. The query request message contains an attached
nonce N to prevent replay of messages belonging to a prior query,
and the entire request message is sent using an authenticated broadcast
.
4.2
Aggregation-Commit Phase
The goal of the aggregation-commit phase is to iteratively construct
a series of cryptographic commitments to data values and to
intermediate in-network aggregation operations. This commitment
is then passed on to the querier. The querier then rebroadcasts the
commitment to the sensor network using an authenticated broadcast
so that the rest of the sensor network is able to verify that their
respective data values have been incorporated into the aggregate.
4.2.1
Aggregation-Commit: Naive Approach
We first describe a naive approach that yields the desired security
properties but has suboptimal congestion overhead when sensor
nodes perform their respective verifications. In the naive approach,
when each sensor node performs an aggregation operation, it computes
a cryptographic hash of all its inputs (including its own data
value). The hash value is then passed on to the parent in the aggregation
tree along with the aggregation result. Figure 1(b) shows a
commitment tree which consists of a series of hashes of data values
and intermediate results, culminating in a set of final commitment
values which is passed on by the base station to the querier along
with the aggregation results. Conceptually, a commitment tree is
a hash tree with some additional aggregate accounting information
attached to the nodes. A definition follows. Recall that N is the
query nonce that is disseminated with each query.
Definition 3 A commitment tree is a tree where each vertex has an associated label representing the data that is passed on to its parent. The labels have the following format:
<count, value, complement, commitment>
where count is the number of leaf vertices in the subtree rooted at this vertex; value is the SUM aggregate computed over all the leaves in the subtree; complement is the aggregate over the COMPLEMENT of the data values; and commitment is a cryptographic commitment. The labels are defined inductively as follows:
There is one leaf vertex u_s for each sensor node s, which we call the leaf vertex of s. The label of u_s consists of count = 1, value = a_s where a_s is the data value of s, complement = r - a_s where r is the upper bound on allowable data values, and commitment is the node's unique ID.
Internal vertices represent aggregation operations, and have labels that are defined based on their children. Suppose an internal vertex has child vertices with the following labels: u_1, u_2, ..., u_q, where u_i = <c_i, v_i, v̄_i, h_i>. Then the vertex has label <c, v, v̄, h>, with c = Σ c_i, v = Σ v_i, v̄ = Σ v̄_i and h = H[N||c||v||v̄||u_1||u_2||...||u_q].
For brevity, in the remainder of the paper we will often omit references
to labels and instead refer directly to the count, value,
complement
or commitment of a vertex.
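The label rule of Definition 3 can be sketched in a few lines of Python; this is a minimal illustration assuming SHA-256 for the hash H and a simple string encoding of the label fields, with function and variable names of our own choosing rather than anything prescribed by the paper.

import hashlib

def H(*parts):
    # Hash of the concatenation of all parts, separated by '||'
    return hashlib.sha256("||".join(str(p) for p in parts).encode()).hexdigest()

def leaf_label(node_id, a, r):
    # Leaf vertex u_s: <count = 1, value = a_s, complement = r - a_s, commitment = node ID>
    return (1, a, r - a, node_id)

def internal_label(children, nonce):
    # Internal vertex: sum the counts, values and complements of the child labels,
    # and commit to the query nonce, the new aggregates and all child labels
    c = sum(ch[0] for ch in children)
    v = sum(ch[1] for ch in children)
    vbar = sum(ch[2] for ch in children)
    return (c, v, vbar, H(nonce, c, v, vbar, *children))

Because every child label is folded into the parent's hash, a node cannot later change any descendant's count, value or complement without changing the root commitment.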
While there exists a natural mapping between vertices in a commitment
tree and sensor nodes in the aggregation tree, a vertex is
a logical element in a graph while a sensor node is a physical device
. To prevent confusion, we will always refer to the vertices in
the commitment tree; the term nodes always refers to the physical
sensor node device.
Since we assume that our hash function provides collision resistance
, it is computationally infeasible for an adversary to change
any of the contents of the commitment tree once the final commitment
values have reached the root.
With knowledge of the root commitment value, a node s may verify the aggregation steps between its leaf vertex u_s and the root of the commitment tree. To do so, s needs the labels of all its off-path vertices.
Definition 4 The set of off-path vertices for a vertex u in a tree is
the set of all the siblings of each of the vertices on the path from u
to the root of the tree that u is in (the path is inclusive of u).
Figure 2: Off-path vertices for u are highlighted in bold. The
path from u to the root of its tree is shaded grey.
Figure 2 shows a pictorial depiction of the off-path vertices for a vertex u in a tree. For a more concrete example, the set of off-path commitment tree vertices for G_0 in Figure 1 is {F_0, E_0, C_0, B_1, A_0, D_0, H_0, I_0}. To allow sensor node G to verify its contribution to the aggregate, the sensor network delivers the labels of each off-path vertex to G_0. Sensor node G then recomputes the sequence of computations and hashes and verifies that they lead to the correct root commitment value.
Consider the congestion of the naive scheme. Let h be the height of the aggregation tree and Δ be the maximum degree of any node inside the tree. Each leaf vertex has O(h) off-path vertices, and it needs to receive all their labels to verify its contribution to the aggregate, thus leading to O(h) congestion at the leaves of the commitment tree. For an aggregation tree constructed with TaG, the height h of the aggregation tree depends on the diameter (in number of hops) of the network, which in turn depends on the node density and total number of nodes n in the network. In a 2-dimensional deployment area with a constant node density, the best bound on the diameter of the network is O(√n) if the network is regularly shaped. In irregular topologies the diameter of the network may be Θ(n).
4.2.2
Aggregation-Commit: Improved Approach
We present an optimization to improve the congestion cost. The
main observation is that, since the aggregation trees are a sub-graph
of the network topology, they may be arbitrarily unbalanced.
Hence, if we decouple the structure of the commitment tree from
the structure of the aggregation tree, then the commitment tree
could be perfectly balanced.
In the naive commitment tree, each sensor node always computes the aggregate sum of all its inputs. This can be considered a strategy of greedy aggregation. Consider instead the benefit of delayed aggregation at node C_1 in Figure 1(b). Suppose that C, instead of greedily computing the aggregate sum over its own reading (C_0) and both its child nodes E_0 and F_1, computes the sum only over C_0 and E_0, and passes F_1 directly to A along with C_1 = C_0 + E_0. In such a commitment tree, F_1 becomes a child of A_1 (instead of C_1), thus reducing the depth of the commitment tree by 1. Delayed aggregation thus trades off increased communication during the aggregation phase in return for a more balanced commitment tree, which results in lower verification overhead in the result-checking phase. Greenwald and Khanna [6] used a form of delayed aggregation in their quantile summary algorithm.
Our strategy for delayed aggregation is as follows: we perform
an aggregation operation (along with the associated commit operation
) if and only if it results in a complete, binary commitment
tree.
We now describe our delayed aggregation algorithm for producing balanced commitment trees. In the naive commitment tree, each sensor node passes to its parent a single message containing the label of the root vertex of its commitment subtree T_s. In the delayed aggregation algorithm, each sensor node now passes on the labels of the root vertices of a set of commitment subtrees F = {T_1, ..., T_q}. We call this set a commitment forest, and we enforce the condition that the trees in the forest must be complete binary trees, and that no two trees have the same height. These constraints are enforced by continually combining equal-height trees into complete binary trees of greater height.
Definition 5 A commitment forest is a set of complete binary commitment
trees such that there is at most one commitment tree of any
given height.
A commitment forest has at most n leaf vertices (one for each
sensor node included in the forest, up to a maximum of n). Since
all the trees are complete binary trees, the tallest tree in any commitment
forest has height at most log n. Since there are no two trees
of the same height, any commitment forest has at most log n trees.
In the following discussion, we will for brevity make reference
to "communicating a vertex" to another sensor node, or "communicating
a commitment forest" to another sensor node. The actual
data communicated is the label of the vertex and the labels of the
roots of the trees in the commitment forest, respectively.
The commitment forest is built as follows. Leaf sensor nodes in
the aggregation tree originate a single-vertex commitment forest,
which they then communicate to their parent sensor nodes. Each
internal sensor node s originates a similar single-vertex commitment
forest. In addition, s also receives commitment forests from
each of its children. Sensor node s keeps track of which root vertices
were received from which of its children. It then combines all
the forests to form a new forest as follows.
Suppose s wishes to combine q commitment forests F_1, ..., F_q. Note that since all commitment trees are complete binary trees, tree heights can be determined by inspecting the count field of the root vertex. We let the intermediate result be F = F_1 ∪ ... ∪ F_q, and repeat the following until no two trees in F have the same height: let h be the smallest height such that more than one tree in F has height h, find two commitment trees T_1 and T_2 of height h in F, and merge them into a tree of height h + 1 by creating a new vertex that is the parent of both the roots of T_1 and T_2, according to the inductive rule in Definition 3. Figure 3 shows an example of the process for node A based on the topology in Figure 1.
The algorithm terminates in O(q log n) steps since each step reduces the number of trees in the forest by one, and there are at most q log n + 1 trees in the forest. Hence, each sensor node creates at most q log n + 1 = O(log n) vertices in the commitment forest.
When F is a valid commitment forest, s sends the root vertices of each tree in F to its parent sensor node in the aggregation tree. The sensor node s also keeps track of every vertex that it created, as well as all the inputs that it received (i.e., the labels of the root vertices of the commitment forests that were sent to s by its children). This takes O(d log n) memory per sensor node.
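The merging rule can be sketched as follows in Python, assuming labels produced by an internal_label helper like the one sketched after Definition 3 (passed in here as a parameter) and complete binary trees whose count field is an exact power of two; the names are illustrative only.

def merge_forests(forests, nonce, internal_label):
    # Roots are kept in a dict keyed by tree height; merging two equal-height
    # trees behaves like a carry in binary addition, so at most log n roots remain.
    by_height = {}
    for forest in forests:
        for root in forest:
            h = root[0].bit_length() - 1   # a complete binary tree of height h has count 2^h
            while h in by_height:
                sibling = by_height.pop(h)         # two trees of equal height: merge them
                root = internal_label([sibling, root], nonce)
                h += 1
            by_height[h] = root
    return [by_height[h] for h in sorted(by_height)]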
Consider the communication costs of the entire process of creating the final commitment forest. Since there are at most log n commitment trees in each of the forests presented by any sensor node to its parent, the per-node communication cost for constructing the final forest is O(log n). This is greater than the O(1) congestion cost of constructing the naive commitment tree. However, no path in the forest is longer than log n hops. This will eventually enable us to prove a bound of O(log² n) edge congestion for the result-checking phase in Section 5.2.
Figure 3: Process of node A (from Figure 1) deriving its commitment forest from the commitment forests received from its children.
(a) Inputs: A generates A_0 = <1, a_A, r - a_A, A> and receives D_0 = <1, a_D, r - a_D, D> from D, C_2 = <4, v_{C_2}, v̄_{C_2}, H[N||4||v_{C_2}||v̄_{C_2}||F_1||C_1]> from C, and (B_1, K_0) from B, where B_1 = <2, v_{B_1}, v̄_{B_1}, H[N||2||v_{B_1}||v̄_{B_1}||B_0||J_0]> and K_0 = <1, a_K, r - a_K, K>. Each dashed-line box in the figure shows the commitment forest received from a given sensor node; the solid-line boxes show the labels of the newly created vertices.
(b) First merge: vertex A_1 = <2, v_{A_1}, v̄_{A_1}, H[N||2||v_{A_1}||v̄_{A_1}||A_0||D_0]> is created, with v_{A_1} = a_A + a_D and v̄_{A_1} = (r - a_A) + (r - a_D).
(c) Second merge: vertex A_2 = <4, v_{A_2}, v̄_{A_2}, H[N||4||v_{A_2}||v̄_{A_2}||A_1||B_1]> is created, with v_{A_2} = v_{A_1} + v_{B_1} and v̄_{A_2} = v̄_{A_1} + v̄_{B_1}.
(d) Final merge: vertex A_3 = <8, v_{A_3}, v̄_{A_3}, H[N||8||v_{A_3}||v̄_{A_3}||A_2||C_2]> is created, with v_{A_3} = v_{A_2} + v_{C_2} and v̄_{A_3} = v̄_{A_2} + v̄_{C_2}. A_3 and K_0 are sent to the parent of A in the aggregation tree.

Once the querier has received the final commitment forest from the base station, it checks that none of the SUM or COMPLEMENT aggregates of the roots of the trees in the forest are negative. If any aggregates are negative, the querier rejects the result and raises an alarm: a negative aggregate is a sure sign of tampering since all the data values (and their complements) are non-negative. Otherwise, the querier then computes the final pair of aggregates SUM and COMPLEMENT. The querier verifies that SUM + COMPLEMENT = nr where r is the upper bound on the range of allowable data values on each node. If this verifies correctly, the querier then initiates the result-checking phase.
4.3
Result-checking phase
The purpose of the result-checking phase is to enable each sensor node s to independently verify that its data value a_s was added into the SUM aggregate, and that the complement (r - a_s) of its data value was added into the COMPLEMENT aggregate. The verification is performed by inspecting the inputs and aggregation operations in the commitment forest on the path from the leaf vertex of s to the root of its tree; if all the operations are consistent, then the root aggregate value must have increased by a_s due to the incorporation of the data value. If each legitimate node performs this verification, then it ensures that the SUM aggregate is at least the sum of all the data values of the legitimate nodes. Similarly, the COMPLEMENT aggregate is at least the sum of all the complements of the data values of the legitimate nodes. Since the querier enforces SUM + COMPLEMENT = nr, these two inequalities form lower and upper bounds on an adversary's ability to manipulate the final result. In Section 5 we shall show that they are in fact the tightest bounds possible.
A high level overview of the process is as follows. First, the aggregation results from the aggregation-commit phase are sent using authenticated broadcast to every sensor node in the network. Each sensor node then individually verifies that its contributions to the respective SUM and COMPLEMENT aggregates were indeed counted. If so, it sends an authentication code to the base station. The authentication code is also aggregated for communication efficiency.
Figure 4: Dissemination of off-path values: t sends the label of u_1 to u_2 and vice-versa; each node then forwards it to all the vertices in their subtrees.
ciency. When the querier has received all the authentication codes,
it is then able to verify that all sensor nodes have checked that their
contribution to the aggregate has been correctly counted.
For simplicity, we describe each step of the process with reference
to the commitment tree visualised as an overlay network over
the actual aggregation tree. Hence, we will refer to vertices in the
commitment tree sending information to each other; in the physical world, the sensor node that created the vertex is the entity responsible for performing the communications and computations on behalf of that vertex. Each edge in the commitment tree may involve multiple hops in the aggregation tree; the routing on the aggregation tree is straightforward.
Dissemination of final commitment values. After the querier
has received the labels of the roots of the final commitment forest,
the querier sends each of these labels to the entire sensor network
using authenticated broadcast.
Dissemination of off-path values. To enable verification, each leaf vertex must receive all its off-path values. Each internal vertex t in the commitment forest has two children u_1 and u_2. To disseminate off-path values, t sends the label of u_1 to u_2, and vice-versa (t also attaches relevant information tagging u_1 as the right child and u_2 as the left child). Vertex t also sends any labels (and left/right tags) received from its parent to both its children. See Figure 4 for an illustration of the process. The correctness of this algorithm in delivering all the necessary off-path vertex labels to each vertex is proven in Theorem 14 in Section 5.2. Once a vertex has received all the labels of its off-path vertices, it can proceed to the verification step.
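A minimal Python sketch of this top-down dissemination, assuming each commitment-tree vertex object exposes left, right and label attributes (with None children at the leaves); the attribute names are our own.

def disseminate(vertex, from_parent=()):
    # Forward everything received from the parent to both subtrees, and add
    # each child's label (tagged with its side) to the other child's bundle.
    if vertex.left is None:                        # leaf: store its off-path labels,
        vertex.off_path = list(from_parent)[::-1]  # ordered from the leaf's sibling upward
        return
    disseminate(vertex.left,  from_parent + (('R', vertex.right.label),))
    disseminate(vertex.right, from_parent + (('L', vertex.left.label),))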
Verification of inclusion. When the leaf vertex u_s of a sensor node s has received all the labels of its off-path vertices, it may then verify that no aggregation result-tampering has occurred on the path between u_s and the root of its commitment tree. For each vertex t on the path from u_s to the root of its commitment tree, u_s derives the label of t (via the computations in Definition 3). It is able to do so since the off-path labels provide all the necessary data to perform the label computation. During the computation, u_s inspects the off-path labels: for each vertex t on the path from u_s to the root, u_s checks that the input values fed into the aggregation operation at t are never negative. Negative values should never occur since the data and complement values are non-negative; hence if a negative input is encountered, the verification fails. Once u_s has derived the label of the root of its commitment tree, it compares the derived label against the label with the same count that was disseminated by the querier. If the labels are identical, then u_s proceeds to the next step. Otherwise, the verification fails and u_s may either immediately raise an alarm (for example, using broadcast), or it may simply do nothing and allow the aggregate algorithm to fail due to the absence of its confirmation message in the subsequent steps.
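A minimal Python sketch of this check, assuming the off-path entries arrive as (side, label) pairs ordered from the leaf's sibling up to the root (as produced by the dissemination sketch above) and re-using an internal_label helper like the one sketched after Definition 3; names are illustrative.

def verify_inclusion(leaf, off_path, root_labels, nonce, internal_label):
    current = leaf
    for side, sibling in off_path:
        if sibling[1] < 0 or sibling[2] < 0:       # inputs to an aggregation must never be negative
            return False
        children = [sibling, current] if side == 'L' else [current, sibling]
        current = internal_label(children, nonce)  # re-derive the parent's label
    return current in root_labels                  # must match a disseminated root exactly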
Collection of confirmations. After each sensor node s has successfully performed the verification step for its leaf vertex u_s, it sends an authentication code to the querier. The authentication code for sensor node s is MAC_{K_s}(N||OK) where OK is a unique message identifier and K_s is the key that s shares with the querier. The collation of the authentication codes proceeds as follows (note that we are referring to the aggregation tree at this point, not the commitment tree). Leaf sensor nodes in the aggregation tree first send their authentication codes to their parents in the aggregation tree. Once an internal sensor node has received authentication codes from all its children, it computes the XOR of its own authentication code with all the received codes, and forwards it to its parent. At the end of the process, the querier will receive a single authentication code from the base station that consists of the XOR of all the authentication codes received in the network.
Verification of confirmations. Since the querier knows the key K_s for each sensor node s, it verifies that every sensor node has released its authentication code by computing the XOR of the authentication codes for all the sensor nodes in the network, i.e., MAC_{K_1}(N||OK) ⊕ ... ⊕ MAC_{K_n}(N||OK). The querier then compares the computed code with the received code. If the two codes match, then the querier accepts the aggregation result. Otherwise, the querier rejects the result. A rejection may indicate the presence of the adversary in some unknown nodes in the network, or it may be due to natural factors such as node death or message loss. The querier may either retry the query or attempt to determine the cause of the rejection. For example, it could directly request the leaf values of every sensor node: if rejections due to natural causes are sufficiently rare, the high cost of this direct query is incurred infrequently and can be amortised over the other successful queries.
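A minimal Python sketch of the confirmation codes and their collection, assuming HMAC-SHA256 as the MAC and XOR over the raw digests; key handling and names are illustrative only.

import hmac, hashlib
from functools import reduce

def confirmation(key, nonce):
    # MAC_Ks(N || OK), released only after a node's inclusion check succeeds
    return hmac.new(key, nonce + b"OK", hashlib.sha256).digest()

def xor_codes(codes):
    # In-network aggregation of confirmation codes along the aggregation tree
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), codes)

def querier_accepts(received, all_keys, nonce):
    # The querier recomputes the XOR of every expected MAC and compares
    expected = xor_codes([confirmation(k, nonce) for k in all_keys])
    return hmac.compare_digest(received, expected)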
ANALYSIS OF SUM
In this section we prove the properties of the SUM algorithm. In Section 5.1 we prove the security properties of the algorithm, and in Section 5.2 we prove bounds on the congestion of the algorithm.
5.1
Security Properties
We assume that the adversary is able to freely choose any arbitrary
topology and set of labels for the final commitment forest.
We then show that any such forest which passes all the verification
tests must report an aggregate result that is (optimally) close to the
actual result. First, we define the notion of an inconsistency, or
evidence of tampering, at a given node in the commitment forest.
Definition 6 Let t = <c_t, v_t, v̄_t, H_t> be an internal vertex in a commitment forest. Let its two children be u_1 = <c_1, v_1, v̄_1, H_1> and u_2 = <c_2, v_2, v̄_2, H_2>. There is an inconsistency at vertex t in a commitment tree if either (1) v_t ≠ v_1 + v_2 or v̄_t ≠ v̄_1 + v̄_2, or (2) any of {v_1, v_2, v̄_1, v̄_2} is negative.
Informally, an inconsistency occurs at t if the sums don't add up at t, or if any of the inputs to t are negative. Intuitively, if there are no inconsistencies on a path from a vertex to the root of the commitment tree, then the aggregate value along the path should be non-decreasing towards the root.
Definition 7 Call a leaf-vertex u accounted-for if there is no inconsistency
at any vertex on the path from the leaf-vertex u to the
root of its commitment tree, including at the root vertex.
Lemma 8 Suppose there is a set of accounted-for leaf-vertices with distinct labels u_1, ..., u_m and committed data values v_1, ..., v_m in the commitment forest. Then the total of the aggregation values at the roots of the commitment trees in the forest is at least Σ_{i=1}^{m} v_i.
Lemma 8 can be rigorously proven using induction on the height of the subtrees in the forest (see Appendix A). Here we present a more intuitive argument.
PROOF. (Sketch) We show the result for m = 2; a similar reasoning applies for arbitrary m. Case 1: Suppose u_1 and u_2 are in different trees. Then, since there is no inconsistency at any vertex on the path from u_1 to the root of its tree, the root of the tree containing u_1 must have an aggregation value of at least v_1. By a similar reasoning, the root of the tree containing u_2 must have an aggregation value of at least v_2. Hence the total aggregation value of the two trees containing u_1 and u_2 is at least v_1 + v_2.
Case 2: Now suppose u_1 and u_2 are in the same tree. Since they have distinct labels, they must be distinct vertices, and they must have a lowest common ancestor t in the commitment tree. The vertices between u_1 and t (including u_1) must have aggregation value at least v_1, since there are no inconsistencies on the path from u_1 to t, so the aggregation value could not have decreased. Similarly, the vertices between u_2 and t (including u_2) must have aggregation value at least v_2. Hence, one of the children of t has aggregation value at least v_1 and the other has aggregation value at least v_2. Since there was no inconsistency at t, vertex t must have aggregation value at least v_1 + v_2. Since there are no inconsistencies on the path from t to the root of the commitment tree, the root also must have aggregation value at least v_1 + v_2.
Negative root aggregate values are detected by the querier at the end of the aggregate-commit phase, so the total sum of the aggregate values of the roots of all the trees is thus at least v_1 + v_2.
The following is a restatement of Lemma 8 for the complementary SUM aggregate; its proof follows an identical structure and is thus omitted.
Lemma 9 Suppose there is a set of accounted-for leaf vertices with distinct labels u_1, ..., u_m with committed complement values v̄_1, ..., v̄_m in the commitment forest. Then the total COMPLEMENT aggregation value of the roots of the commitment trees in the forest is at least Σ_{i=1}^{m} v̄_i.
Lemma 10 A legitimate sensor node will only release its confirmation MAC if it is accounted-for.
PROOF. By construction, each sensor node s only releases its confirmation MAC if (1) s receives an authenticated message from the querier containing the query nonce N and the root labels of all the trees in the final commitment forest, and (2) s receives all labels of its off-path vertices (the sibling vertices to the vertices on the path from the leaf vertex corresponding to s to the root of the commitment tree containing the leaf vertex in the commitment forest), and (3) s is able to recompute the root commitment value that it received from the base station and correctly authenticated, and (4) s verified that all the computations on the path from its leaf vertex u_s to the root of its commitment tree are correct, i.e., there are no inconsistencies on the path from u_s to the root of the commitment tree containing u_s. Since the hash function is collision-resistant, it is computationally infeasible for an adversary to provide s with false labels that also happen to compute to the correct root commitment value. Hence, it must be that s was accounted-for in the commitment forest.
Lemma 11 The querier can only receive the correct final XOR check value if all the legitimate sensor nodes replied with their confirmation MACs.
PROOF. To compute the correct final XOR check value, the adversary needs to know the XOR of the confirmation MACs of all the legitimate sensor nodes that did not release them. Since we assume that each of the distinct MACs is unforgeable (and not correlated with the others), the adversary has no information about this XOR value. Hence, the only way to produce the correct XOR check value is for all the legitimate sensor nodes to have released their relevant MACs.
Theorem 12 Let the final SUM aggregate received by the querier be S. If the querier accepts S, then S_L ≤ S ≤ (S_L + µr), where S_L is the sum of the data values of all the legitimate nodes, µ is the total number of malicious nodes, and r is the upper bound on the range of allowable values on each node.
PROOF. Suppose the querier accepts the SUM result S. Let the COMPLEMENT SUM received by the querier be S̄. The querier accepts S if and only if it receives the correct final XOR check value in the result-checking phase, and S + S̄ = nr. Since the querier received the correct XOR check value, we know that each legitimate sensor node must have released its confirmation MAC (Lemma 11), and so the leaf vertices of each legitimate sensor node must be accounted-for (Lemma 10). The set of labels of the leaf vertices of the legitimate nodes is distinct since the labels contain the (unique) node ID of each legitimate node. Since all the leaf vertices of the legitimate sensor nodes are distinct and accounted-for, by Lemma 8, S ≥ S_L where S_L is the sum of the data values of all the legitimate nodes. Furthermore, by Lemma 9, S̄ ≥ S̄_L, where S̄_L is the sum of the complements of the data values of all the legitimate nodes.
Let L be the set of legitimate sensor nodes, with |L| = l. Observe that S̄_L = Σ_{i∈L} (r - a_i) = lr - S_L = (n - µ)r - S_L = nr - (S_L + µr). We have that S + S̄ = nr and S̄ ≥ nr - (S_L + µr). Substituting, S = nr - S̄ ≤ S_L + µr. Hence, S_L ≤ S ≤ (S_L + µr).
Note that nowhere was it assumed that the malicious nodes were constrained to reporting data values between [0, r]: in fact it is possible to have malicious nodes with data values above r or below 0 without risking detection if S_L ≤ S ≤ (S_L + µr).
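For a concrete (made-up) instance of the bound: with n = 100 nodes of which µ = 5 are compromised, r = 10 and a legitimate sum S_L = 300, any result the querier accepts satisfies 300 ≤ S ≤ 300 + 5 · 10 = 350, which is exactly the interval the adversary could reach by having its five nodes report legal values in [0, 10].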
Theorem 13 The SUM algorithm is optimally secure.
PROOF. Let the sum of the data values of all the legitimate nodes be S_L. Consider an adversary with µ malicious nodes which only performs direct data injection attacks. Recall that in a direct data injection attack, an adversary only causes the nodes under its control to each report a data value within the legal range [0, r]. The lowest result the adversary can induce is by setting all its malicious nodes to have data value 0; in this case the computed aggregate is S_L. The highest result the adversary can induce is by setting all nodes under its control to yield the highest value r; in this case the computed aggregate is S_L + µr. Clearly any aggregation value between these two extremes is also achievable by direct data injection. The bound proven in Theorem 12 falls exactly on the range of possible results achievable by direct data injection, hence the algorithm is optimal by Definition 2.
The optimal security property holds regardless of the number or fraction of malicious nodes; this is significant since the security property holds in general, and not just for a subclass of attacker multiplicities. For example, we do not assume that the attacker is limited to some fixed fraction of the nodes in the network.
5.2
Congestion Complexity
We now consider the congestion induced by the secure SUM algorithm. Recall that node congestion is defined as the communication load on the most heavily loaded sensor node in the network,
and edge congestion is the heaviest communication load on a given
link in the network. We only need to consider the case where the
adversary is not performing an attack. If the adversary attempts
to send more messages than the proven congestion bound, legitimate
nodes can easily detect this locally and either raise an alarm
or refuse to respond with their confirmation values, thus exposing
the presence of the adversary. Recall that when we refer to a
vertex sending and receiving information, we are referring to the
commitment tree overlay network that lies over the actual physical
aggregation tree.
Theorem 14 Each vertex u receives the labels of its off-path vertices and no others.
PROOF. Since, when the vertices are disseminating their labels in the result-checking phase, every vertex always forwards any labels received from its parents to both its children, it is clear that when a label is forwarded to a vertex u', it is eventually forwarded to the entire subtree rooted at u'.
By definition, every off-path vertex u_1 of u has a parent p which is a node on the path between u and the root of its commitment tree. By construction, p sends the label of u_1 to its sibling u_2, which is on the path to u (i.e., either u_2 is an ancestor of u, or u_2 = u). Hence, the label of u_1 is eventually forwarded to u. Every vertex u_1 that is not an off-path vertex has a sibling u_2 which is not on the path between u and the root of its commitment tree. Hence, u is not in the subtree rooted at u_2. Since the label of u_1 is only forwarded to the subtree rooted at its sibling and nowhere else, the label of u_1 never reaches u.
Theorem 15 The SUM algorithm induces O(log² n) edge congestion (and hence O(Δ log² n) node congestion) in the aggregation tree.
PROOF. Every step in the algorithm except the label dissemination step involves either broadcast or convergecast of messages that are at most O(log n) size. The label-dissemination step is the dominating factor.
Consider an arbitrary edge in the commitment tree between parent vertex x and child vertex y. In the label dissemination step, messages are only sent from parent to child in the commitment tree. Hence the edge xy carries exactly the labels that y receives. From Theorem 14, y receives O(log n) labels, hence the total number of labels passing through xy is O(log n). Hence, the edge congestion in the commitment tree is O(log n). Now consider an arbitrary aggregation tree edge with parent node u and child node v. The child node v presents (i.e., sends) at most log n commitment-tree vertices to its parent u, and hence the edge uv is responsible for carrying traffic on behalf of at most log n commitment-tree edges, namely the edges incident on the commitment-tree vertices that v presented to u. Note that v may not be responsible for creating all the vertices that it presents to u, but v is nonetheless responsible for forwarding the messages down to the sensor nodes which created those vertices. Since each edge in the commitment tree has O(log n) congestion, and each edge in the aggregation tree carries traffic for at most log n commitment-tree edges, the edge congestion in the aggregation tree is O(log² n). The node-congestion bound of O(Δ log² n) follows from the O(log² n) edge congestion and the definition of Δ as the greatest degree in the aggregation tree.
OTHER AGGREGATION FUNCTIONS
In this section we briefly discuss how to use the SUM algorithm as a primitive for the COUNT, AVERAGE and QUANTILE aggregates.
The COUNT Aggregate. The query COUNT is generally used to determine the total number of nodes in the network with some property; without loss of generality it can be considered a SUM aggregation where all the nodes have value either 1 (the node has the property) or 0 (otherwise). More formally, each sensor node s has a data value a_s ∈ {0, 1}, and we wish to compute f(a_1, ..., a_n) = a_1 + a_2 + ... + a_n. Since COUNT is a special case of SUM, we can use the basic algorithm for SUM without modification.
The AVERAGE Aggregate. The AVERAGE aggregate can be computed by first computing the SUM of data values over the nodes of interest, then the COUNT of the number of nodes of interest, and then dividing the SUM by the COUNT.
The φ-QUANTILE Aggregate. In the φ-QUANTILE aggregate, we wish to find the value that is in the φn-th position in the sorted list of data values. For example, the median is a special case where φ = 0.5. Without loss of generality we can assume that all the data values are distinct; ties can be broken using unique node IDs.
If we wished to verify the correctness of a proposed φ-quantile q, we can perform a COUNT computation where each node s presents a value a'_s = 1 if its data value a_s ≤ q and presents a'_s = 0 otherwise. If q is the φ-quantile, then the computed sum should be equal to φn. Hence, we can use any insecure approximate φ-quantile aggregation scheme to compute a proposed φ-quantile, and then securely test to see if the result truly is within the approximation bounds of the φ-quantile algorithm.
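A minimal Python sketch of this quantile check, assuming a secure_count callback that runs the SUM algorithm over per-node indicator bits and a slack term matching the approximation bound of the (insecure) scheme that proposed q; the names are ours.

def verify_quantile(q, phi, n, secure_count, slack=0):
    # Each node contributes 1 if its data value is <= q, otherwise 0;
    # the proposed quantile q is accepted only if about phi*n nodes agree.
    count_leq_q = secure_count(lambda value: 1 if value <= q else 0)
    return abs(count_leq_q - phi * n) <= slack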
CONCLUSION
In-network data aggregation is an important primitive for sensor
network operation. The strong standard threat model of multiple
Byzantine nodes in sensor networks requires the use of aggregation
techniques that are robust against malicious result-tampering
by covert adversaries.
We present the first optimally secure aggregation scheme for arbitrary
aggregator topologies and multiple malicious nodes. This
contribution significantly improves on prior work which requires
strict limitations on aggregator topology or malicious node multiplicity
, or which only yields a probabilistic security bound. Our algorithm is based on a novel method of distributing the verification of aggregation results onto the sensor nodes, and combining this with a unique technique for balancing commitment trees to achieve sublinear congestion bounds. The algorithm induces O(Δ log² n) node congestion (where Δ is the maximum degree in the aggregation tree) and provides the strongest security bound that can be proven for any secure aggregation scheme without making assumptions about the distribution of data values.
APPENDIX
A. PROOF OF LEMMA 8
We first prove the following:
Lemma 16 Let F be a collection of commitment trees of height at most h. Suppose there is a set U of accounted-for leaf-vertices with distinct labels u_1, ..., u_m and committed values v_1, ..., v_m in F. Let the set of trees that contain at least one member of U be T_F. Define val(X) for any forest X to be the total of the aggregation values at the roots of the trees in X. Then val(T_F) ≥ ∑_{i=1}^m v_i.
PROOF. By induction on h.
Base case: h = 0. Then all the trees are singleton-trees. The total aggregation value of all the singleton-trees that contain at least one member of U is exactly ∑_{i=1}^m v_i.
Induction step: Assume the theorem holds for h, and consider an arbitrary collection F of commitment trees with height at most h + 1 where the premise holds. If there are no trees of height h + 1 then we are done. Otherwise, let the set R be all the root vertices of the trees of height h + 1. Consider F' = F \ R, i.e., remove all the vertices in R from F. The result is a collection of trees with height at most h. Let T_F' be the set of trees in F' containing at least one member of U. The induction hypothesis holds for F', so val(T_F') ≥ ∑_{i=1}^m v_i. We now show that replacing the vertices from R cannot produce a T_F such that val(T_F) < val(T_F'). Each vertex r from R is the root of two subtrees of height h in F. We have three cases:
Case 1: Neither subtree contains any members of U. Then the new tree contains no members of U, and so is not a member of T_F.
Case 2: One subtree t_1 contains members of U. Since all the members of U are accounted-for, this implies that there is no inconsistency at r. Hence, the subtree without a member of U must have a non-negative aggregate value. We know that r performs the aggregate sum correctly over its inputs, so it must have aggregate value at least equal to the aggregate value of t_1.
Case 3: Both subtrees contain members of U. Since all the members of U are accounted-for, this implies that there is no inconsistency at r. The aggregate result of r is exactly the sum of the aggregate values of the two subtrees.
In cases 2 and 3, the aggregate value at the root of each tree of height h + 1 that is in T_F is no less than the sum of the aggregate values of its constituent subtrees in T_F'. Hence, val(T_F) ≥ val(T_F') ≥ ∑_{i=1}^m v_i.
Let the commitment forest in Lemma 8 be F. Let the set of trees in F that contain at least one of the accounted-for leaf-vertices be T. By the above lemma, val(T) ≥ ∑_{i=1}^m v_i. We know that there are no root labels with negative aggregation values in the commitment forest, otherwise the querier would have rejected the result. Hence, val(F) ≥ val(T) ≥ ∑_{i=1}^m v_i. | algorithm;Secure aggregation;commitment forest;in-network data aggregation;commitment tree;Sensor Networks;secure hierarchical data aggregation protocol;sensor network;aggregation commit;result checking;query dissemination;congestion complexity;Data aggregation
176 | SensorBus: A Middleware Model for Wireless Sensor Networks | The use of middleware eases the development of distributed applications by abstracting the intricacies (communication and coordination among software components) of the distributed network environment. In wireless sensor networks, this is even trickier because of their specific issues such as addressing, mobility, number of sensors and energy-limited nodes. This paper describes SensorBus, a message-oriented middleware (MOM) model for wireless sensor networks based on the publish-subscribe paradigm that allows the free exchange of the communication mechanism among sensor nodes, as a result enabling the use of more than one communication mechanism to address the requirements of a larger number of applications. We intend to provide a platform which addresses the main characteristics of wireless sensor networks and also allows the development of energy-efficient applications. SensorBus incorporates constraint and query languages which will aid the development of interactive applications. Through the utilization of filters, it intends to reduce data movement, minimizing the energy consumption of nodes. | INTRODUCTION
Recent advances in wireless networking technology, low-power
digital circuits, sensing materials and Micro Electro-Mechanical
Systems (MEMS) opened up the possibility of building small
sensor devices capable of data processing, remote sensing and
wireless communication. When several small sensors are scattered
and linked over an area we may call this arrangement a "Sensor
Network". These networks can be used for collecting and
analyzing data from the physical environment. More specifically,
sensor networks are comprised of hundreds or even thousands of
heterogeneous sensor nodes exchanging information to perform
distributed sensing and collaborative data processing [1].
From a functional perspective sensor networks behave like
distributed systems with many different types of sensor nodes.
Given the diversity of node functionality and the size of these
networks it is important for a user to be able to program and
manage the distributed applications that perform the information
gathering. A programmer may develop these applications using
operating system primitives. This kind of procedure, however,
brings another level of complexity to the programmer, in which
he not only has to deal with low-level primitives but he will also
have to treat issues concerning communication and coordination
among software components distributed over the network. A
much friendlier approach is the utilization of a middleware in
order to provide higher-level primitives to hide issues concerning
the distributed environment.
Traditional middleware is not suited to this task because of the
characteristics of wireless networks. For example, conventional
middleware designed for wired networks raises exceptions when it does not find a specific component, but in wireless environments this situation is the rule rather than the exception. The lower bandwidth available for wireless
networks requires optimizing the transport of data and this is not
considered in conventional middleware. The coordination
primitives of these middleware products do not take into account
the frequent disconnections that happen in wireless networks.
Another problem is the size and computing requirements of these
middleware products; they are often too large and too heavy to be
running in a device with so few resources. Finally, the
transparency level provided is not sufficient enough because the
application running in such devices needs information about the
execution context to better adapt itself.
A series of new middleware environments were proposed to deal
with the requirements imposed by the wireless environment [2].
Middleware products based on computing reflection are designed
to be light (concerning the computing power required to run) and
easily configurable. Middleware based on tuple space were
proposed to address the problem of frequent disconnections, and
present a more natural way to deal with asynchronous
communication. Context-aware middleware includes the ability of
an application to access its information context (context-awareness
). These proposals addressed adequately the issues
brought by the mobile networks, but are not well suited to support
the specific requirements of the target applications used or to be
used in wireless sensor networks because they are designed to
support traditional client-server applications used in regular
(wired) environments.
Wireless sensor networks are very similar to conventional
wireless networks; including energy-limited nodes, low
bandwidth and communication channels more prone to errors.
However, communication in wireless sensor nets differs from the end-to-end connections often necessary in usual networks [1]. In other words, the function of the network is to report information about the phenomenon to the observer, who is not necessarily aware that the sensor infrastructure is being used as a
means of communication. In addition, energy is much more
limited in sensor networks than in other types of wireless nets due
to the nature of the sensor devices and the difficulty of reloading
batteries in hostile regions. Some works have shown that the
execution of 3000 instructions costs the same amount of energy
necessary to send 1-bit of data over 100 meters via radio [3].
Those studies indicate that we must prioritize computing over
communications.
The communication issues are addressed in the several routing
protocols proposed for wireless sensor nets. The communication
model allows other ways of addressing the sensor nodes besides
single addressing. The sensor nodes can be addressed by their
own attributes or by attributes extracted from the physical
environment (attribute-based naming). The sharp limitation of
energy demands that sensor nodes actively take part in the
processing and dissemination of information in order to save as
much energy as possible. Although the majority of the protocols
reviewed are efficient in saving energy, they differ in addressing
capabilities. Some of them utilize single addressing [4] while
others utilize attribute-based naming [5]. Thus, each type of
application requires an adaptation of the communication
mechanism to address specific application issues.
Trying to overcome these problems this paper proposes
SensorBus, a message oriented middleware for sensor networks
allowing the free exchanging of the communication mechanism
among sensor nodes. We propose a platform that provides
facilities for the development of energy-efficient applications and
that also addresses the key characteristics of sensor networks.
This type of middleware should be suited to perform
environmental monitoring where single addressing is demanded
(small areas) as well as where attribute-based naming is necessary
(large areas).
The remainder of this paper is organized as follows: Section 2
describes the type of sensor networks considered in this research;
Section 3 presents the target application and explains its
requirements; Section 4 broaches the abstractions and
mechanisms needed to address the requirements listed on the
previous section; Section 5 describes the components of the
SensorBus architecture; Section 6 presents the communication
architecture and explains the steps needed to develop an
application using SensorBus; Section 7 broaches implementing
and coding issue; Section 8 presents the related works, and finally
Section 9 concludes the paper.
ASSUMPTIONS
Most of the algorithms catalogued in the sensor networks
literature are hypothetical [1], i.e., they were proposed as an
experiment and were not tested in real networks (although many
of them were deployed in testbed environments). This research is
no different. When we speak about wireless sensor networks, we
are referring to the projected and experimental designs and
deployments discussed in the literature and not to actual instances
of wireless sensor networks deployed in the field.
Differently from the real settings, the testbed environments are
built and organized focusing on the network features one wants to
observe and test. This organization involves three main
components: infrastructure, network protocols and applications
[6]. The infrastructure is formed of sensor nodes and their
topology the way they were scattered over a determined region.
The network protocol is responsible for the creation and
maintenance of communication links between sensor nodes and
the applications. The applications extract information about a
determined phenomenon through the sensor nodes. The following
topics introduce in more details the assumptions made considering
those aspects.
2.1 Applications
The way in which the applications gather data from the sensor
nodes depends on the network design. In the literature, we found
that there are four data transfer modes between sensor nodes and
applications: continuous, event-oriented, query-oriented and
hybrid [6]. In the continuous model, sensor nodes transfer their
data continuously at a predefined rate. In the event-oriented
model, nodes transfer data only when an event of interest occurs.
In the query-oriented model the application is responsible for
deciding which events are of interest, thus requesting data about a
phenomenon. Lastly, in the hybrid model, the three approaches
may coexist together. In this research, we adopt a hybrid approach
in the way that it utilizes the query and the event-oriented model
as will be shown in the target application presented in Section 3.
2.2 Network Protocol
The performance of the network protocol is influenced by the
communications model adopted, the packet data transfer mode
and the network mobility. In order to evaluate how a network
protocol behaves it is important to take into account these aspects.
Communication in sensor networks is classified in two major
categories [6]: application and infrastructure. Application
communication consists of the transfer of data obtained by the
sensor nodes to the observer. This kind of communication is of
two types: cooperative and non-cooperative. In cooperative mode
sensor nodes exchange data among themselves before
transmitting the data gathered. In non-cooperative mode,
however, sensor nodes do not exchange any kind of information;
each one is solely responsible for reporting its collected data. The
infrastructure data refers to the information needed to set,
maintain and optimize the running network. As the network
protocol must support both categories, the SensorBus architecture
will not address those issues.
The packet data transfer is a routing issue concerning the network
protocol. This routing is divided into three types: flooding,
unicast and multicast [6]. In the flooding approach, the sensor
node broadcasts its information to neighboring nodes that, in turn,
broadcast this information to their neighboring nodes until the
information reaches the destination node. Alternatively, the sensor
node may transmit its data directly to the observer using unicast
multi-hop routing and also might use a cluster-head through one-to-one unicast. Lastly, the multicast approach casts information to
predefined groups of nodes. The routing protocol is responsible
for treating packet data transfer relieving SensorBus of these
issues.
Regarding mobility, sensor networks are divided into static and
dynamic [6]. In static nets there is no movement by sensor nodes,
the observers or the phenomenon to be studied. Conversely, in
dynamic networks the nodes, observers and the phenomenon
might well change their locations. This kind of network is further
classified by the mobility of its components in dynamic nets with
mobile observer, dynamic nets with mobile sensors and dynamic
nets with mobile phenomena, respectively. In the first, the observer is mobile in relation to the sensors and phenomena; in the second, the sensors are moving with respect to each other and the observer; and in the latter, the phenomenon itself is in motion.
The routing protocol is also responsible for treating mobility
issues, relieving SensorBus of these concerns.
2.3 Infrastructure
As for the infrastructure, the issues to take into consideration are
location, access point and sensor node's computing power. The
nodes have well-known locations and are to be scattered over a
well-defined area. We will assume that all information is
transmitted and received by means of a unique access point called
the sink node. Despite the fact that, for this model, we will
consider all nodes as being the same, there is nothing to prevent
one node from having more memory, more energy or more
computing power available.
ENVIRONMENTAL MONITORING APPLICATIONS
Environmental Monitoring Applications are used to evaluate
qualitatively and quantitatively the natural resources of a
determined area. These applications collect data, analyze and
follow continuously and systematically environmental variables
in order to identify current patterns and predict future trends [7].
Environmental monitoring provides information about the factors
influencing conservation, preservation, degradation and
environmental recovery. One might consider it a tool of
evaluation and control.
Wireless sensor networks can be used for performing
environmental monitoring in indoor locations such as a building
or a house or outdoors locations such as forests, lakes, deserts,
rivers, etc. Internal monitoring might be described as tracking the
variables in an indoor location. For example, one might deploy an
infrared camera to track motion in a room that is supposed to be
secure; if motion is detected an internal device might trigger an
alarm. Sometimes in order to detect and identify an event,
information from more than one sensor might be required. These
results are processed and compared with the signature of the event
of interest. In outdoor monitoring, there may be thousands of
sensors scattered over an area and when an event of interest
occurs such as temperature change, moisture change or CO2
increase the sensor might trigger the management events module
which in turn sends the observer a signal to notify him or her of
the event. Wireless sensors might be useful in a way that can save
money in deploying a sensor infrastructure such as described in
[8] where the authors were able to decrease the number of sensors
needed to monitor forest fires in comparison with a wired model.
In summary, the value of a wireless sensor network relies in its
ability to provide information over a large area in reply to the
questions put to users. The query mode is the most common
approach used. Another approach is the mode in which sensors
may remain waiting for some event to happen. By observing these
aspects we draw the first requirement (R1) of our middleware
model: The system must be able to function in two modes: query-driven
and event-oriented.
Depending on the application, it may be more convenient to access a specific node or a specific property. For example, in internal environmental monitoring, if one wants to know the temperature of a particular room, one has to access the information collected by a specific sensor, thus requiring unique node addressing and identification. On the other hand, in external
environmental monitoring, sensor nodes do not need to be
uniquely identified, as in this kind of application the purpose is to
collect the value of a certain variable in a given area. From that
observation we extract the second requirement (R2): The system
must be able to address uniquely the sensor nodes and also by
attribute (property to be observed).
In some applications the mobility of sensor nodes must be taken
into account. For example, sensors scattered over a forest for
collecting dampness and temperature data are to be static, i.e. they must not change their geographical location, while placing sensors on a river's surface to collect data about its contamination levels characterizes a mobile environment. Thus,
the third requirement (R3) of our middleware model is taking into
consideration mobility issues.
The sensing coverage area of a given wireless node is smaller
than its radio coverage. Besides, sensors operate in noisy
environments. To achieve a trustworthy sensory resolution a high
density of sensors is required. In some applications the size of the
coverage area leads to a great number of sensor nodes. A simple
application in the field of environmental monitoring such as
surveillance of oceans and forests requires from hundreds to
thousands of nodes. In other applications, like internal
environmental monitoring, the amount of nodes is limited by the
size of the area. Therefore, the fourth requirement (R4) is to take
into account the size of the network.
In external environmental monitoring, the nodes are spread in a
hostile region, where it is not possible to access them for
maintenance. The lifetime of each sensor node depends
exclusively on the little available energy for the node. To
conserve energy, the speed of the CPU and the bandwidth of the
RF channel (Radio Frequency) must be limited. This requirement
adds some restrictions in CPU performance, memory size, RF
bandwidth and in battery size. In applications where the sensors
are not spread in a hostile region it is possible to access them for
maintenance and the battery lifetime of each sensor does not
become a critical aspect. Finally, the fifth requirement (R5) is to
take into consideration the limited energy resources of each
sensor node.
MECHANISMS AND ABSTRACTIONS
This middleware model is comprised of three mechanisms and
one abstraction. The publish-subscribe paradigm is employed as
well as constraints and query languages and application filters to
meet R1, R2 and R5 requirements. The design patterns abstraction
is used to meet R2, R3 and R4 requirements.
4.1 Publish-Subscribe Paradigm
The SensorBus is a Message Oriented Middleware (MOM) that
employs the publish-subscribe paradigm. In this approach, a
component that generates events (producer) publishes the types of
events that will be available to other components (consumers) [9].
The consumer interested in a determined event "subscribes" to
this event, receiving from this moment on notifications about the
event "subscribed" to. These notifications are sent
asynchronously from producers to all interested consumers. The
MOM performs the functions of collecting producer's messages,
filtering and transforming such messages (when necessary) and
routing them to the appropriate consumers.
The publish-subscribe communication is anonymous,
asynchronous and multicast. Data are sent and received by
asynchronous broadcast messages, based in subject, independent
from identity and location of producers and consumers. This kind
of communication has desirable properties for sensor networks; for example, this model saves energy because a given node does not need to wait for a synchronous response to proceed, as it would in networks that implement end-to-end connections, increasing the lifetime of the network. Furthermore, as it also implements multicast, a group of sensors might be formed for a specific application.
As a consequence, the adoption of the publish-subscribe paradigm
meets the R1 requirement, concerning the need for events and the
R2 requirement pertaining to attribute addressing. In addition it
also meets the R5 requirement related to energy saving.
4.2 Constraint and Query Languages
Constraint and Query languages are used to filter collecting data
by specifying restrictions in the values and preferences of the
attributes. A statement in these languages is a string that
represents an expression.
The constraint language only includes constants (values) and
operations over values. Values and operations with integer, float,
boolean and strings are allowed. The language admits several
types of expressions.
The expressions can be comparative: == (equality), !=
(inequality), >, >=, <, <=. For instance, Temperature < 36.6
means to consider data where the attribute Temperature is
less than 36.6 degrees Celsius.
The expressions can be boolean: AND, OR, NOT. For
example, Temperature >= 26.6 AND Temperature <= 36.6 means to consider data where the value of the attribute Temperature is between 26.6 and 36.6 degrees Celsius.
The expressions can be numerical with the mathematical
operators + (addition), - (subtraction), * (multiplication) and
/ (division).
The query language has its syntax based on a subgroup of the
conditional expression syntax SQL92 [10]. It is an extension of
the constraints language with new functions. This new language
embodies identifiers that can hold a constant value. A mapping
between identifiers and values is required. In the evaluation of an
expression, the occurrence of an identifier is replaced by its
associated value. The addition of new operators (between, like, in,
is, escape) allows submitting queries similar to those used in
databases compliant with SQL92. For example, queries of the
type -- Temperature between 26.6 and 36.6 -- are possible.
The constraint and query languages are intended to ease the programming of online applications. This type of application accesses the information sent in real time by the sensor nodes. This completes the fulfillment of requirement R1 with respect to the query mode of operation.
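The paper does not give the Language component at the code level; the following Java sketch only illustrates the kind of comparative and boolean constraint shown above, evaluated against a sensor reading (all class and method names are assumptions, not the actual SensorBus API):

// Sketch: evaluating a constraint such as
//   Temperature >= 26.6 AND Temperature <= 36.6
// against an attribute map. Names are illustrative only.
import java.util.Map;

final class ConstraintExample {
    interface Constraint {
        boolean matches(Map<String, Double> reading);
    }

    static Constraint range(String attr, double lo, double hi) {
        return reading -> {
            Double v = reading.get(attr);
            return v != null && v >= lo && v <= hi;   // comparative operators plus boolean AND
        };
    }

    public static void main(String[] args) {
        Constraint c = range("Temperature", 26.6, 36.6);
        System.out.println(c.matches(Map.of("Temperature", 30.0)));  // true
        System.out.println(c.matches(Map.of("Temperature", 40.0)));  // false
    }
}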
4.3 Application Filters
Filters are application specific software modules that deal with
diffusion and data processing [11]. Filters are provided before
deploying a sensor network. Each filter is specified using a list of
attributes to make possible the matching with the incoming data.
Filters are used to make internal aggregation of data, collaborative
signals processing, caching and tasks that control the data flow in
the sensor network [11]. In SensorBus, filters will be used to limit
the data flow in the network. A filter can be designed to restrict
the range of values of a determined attribute, for example the
application requires that the attribute Temperature has values
ranging between 20 and 30 degrees Celsius, the values outside
this particular range are of no interest. The filtering process
discards the unnecessary data reducing the flow between the
nodes. This decrease reduces the consumption of energy in sensor nodes, completing the fulfillment of requirement R5 regarding the energy savings of the sensor nodes.
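As a sketch of this idea (interface and method names are assumptions, not the actual SensorBus filter API), a range filter for the Temperature example above could look like this:

// Sketch: an application filter that discards Temperature readings outside
// [20, 30] degrees Celsius before they are forwarded, reducing radio traffic.
final class TemperatureRangeFilter {
    interface Filter<T> {
        boolean accept(T value);
    }

    static final Filter<Double> TEMP_20_TO_30 = t -> t >= 20.0 && t <= 30.0;

    // Only accepted readings are handed to the transport layer for sending.
    static void forwardIfRelevant(double reading, java.util.function.Consumer<Double> send) {
        if (TEMP_20_TO_30.accept(reading)) {
            send.accept(reading);
        } // otherwise the value is dropped locally, saving a transmission
    }
}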
4.4 Design Patterns
Design patterns are descriptions of objects and communicating
classes that are customized to solve a general design problem
within a particular context [12]. It describes commonly recurring
design architectures extracted from the experience of one or more
domain specialists. A design pattern names, abstracts, and
identifies the key aspects of a common design structure that make
it useful for creating a reusable object-oriented design [12]. We
make use of design patterns in SensorBus project and the types
we have utilized are as follows:
The Observer pattern: Defines a one-to-many dependency
between objects so that when one object changes state, all of its
dependents are notified and updated automatically. We utilize this
pattern to implement the publish-subscribe mechanism.
The Interpreter pattern: Defines a representation for its grammar
along with an interpreter that uses the representation to interpret
sentences in the language. We make use of this pattern to
implement the constraint and query language.
The Facade pattern: Defines a unified (higher-level) interface to
a set of interfaces in a subsystem that makes the subsystem easier
to use. We use this pattern to implement the middleware high-level
primitives which will be available to developers.
The Mediator pattern: Defines an object that encapsulates how a
set of objects interact. Mediator promotes loose coupling by
keeping objects from referring to each other explicitly, and it lets
you vary their interaction independently.
The Adapter pattern: Converts the interface of a class into
another interface clients expect. Adapter lets classes work
together that couldn't otherwise because of incompatible
interfaces.
The Router pattern: Decouples multiple sources of input from
multiple sources of output to route data correctly without
blocking.
The design patterns Mediator, Adapter and Router are utilized to
implement the middleware message bus. The exchangeable
communication mechanism was written using these patterns. This
mechanism allows the utilization of any routing protocol designed
for sensor networks meeting as a result the requirements R2, R3
and R4.
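A minimal Java sketch of the Observer pattern used as a publish-subscribe channel follows; the class names are illustrative only, since the paper describes the SensorBus classes at the architectural level rather than in code:

// Sketch: Observer pattern as a publish-subscribe channel.
// A producer publishes a data item; all registered consumers are notified.
import java.util.ArrayList;
import java.util.List;

final class PubSubSketch {
    interface Subscriber {
        void onData(String subject, double value);
    }

    static final class Channel {
        private final String subject;
        private final List<Subscriber> subscribers = new ArrayList<>();

        Channel(String subject) { this.subject = subject; }

        void subscribe(Subscriber s) { subscribers.add(s); }

        // Asynchronous delivery is omitted; a real implementation would queue
        // notifications rather than call subscribers inline.
        void publish(double value) {
            for (Subscriber s : subscribers) {
                s.onData(subject, value);
            }
        }
    }

    public static void main(String[] args) {
        Channel temp = new Channel("Temperature");
        temp.subscribe((subj, v) -> System.out.println(subj + " = " + v));
        temp.publish(27.3);
    }
}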
SENSORBUS ARCHITECTURE
SensorBus is comprised of the following elements: an application
service, a message service and a context service as shown in
Figure 1.
Figure 1. Middleware architecture.
The following sections present each one of the services mentioned
by the means of UML (Unified Modeling Language) component
diagrams [13].
5.1 Application Service
The application service provides Application Programming
Interface (API) which simplifies application development. This
service is comprised of three components as shown in Figure 2:
Figure 2. Application Service Architecture.
DataBus: component providing a set of operations relating to bus
communication for consumers and producers. These operations
include: Announcement of data item (producer); to find a data
item (consumer); Announcement of data change (consumer);
Exclude data item (producer).
Filter: component providing a set of operations relating to data
filtering.
Language: component that implements the commands and the
constraint and query language interpreter.
5.2 Message Service
Message service is responsible for providing communication and
coordination for the distributed components, abstracting the
developer from these issues. This service also comprises three
components as is shown in figure 3:
Figure 3. Message Service Architecture.
Channel: Component designed to deal with the specific transport
implementations. Each instance of Channel represents a simple
system channel. The component Channel maintains the global
state information about the availability of channels and is also
responsible for exchanging channel's messages to the transport
implementation and vice versa.
Transport: The communication among the nodes is made through
a specific transport implementation such as sockets. Each
transport implementation communicates through a channel with a
message exchange server called Sinker. All transport
implementations have a common interface which is called
ITransport.
Sinker: Component responsible for routing messages among
instances of transport implementation, each instance
corresponding to an instance of Channel.
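The paper names the ITransport interface but does not give its signature; the following is an assumed minimal shape, shown only to make the Channel/Transport/Sinker split concrete:

// Sketch: an assumed minimal shape for the common transport interface described
// above. Only the names ITransport, Channel and Sinker come from the paper;
// the method signatures are illustrative.
interface ITransport {
    void connect(String sinkerAddress) throws java.io.IOException;  // attach to the Sinker
    void send(String channelId, byte[] message) throws java.io.IOException;
    void setReceiver(Receiver r);   // messages routed back from the Sinker
    void close();

    interface Receiver {
        void onMessage(String channelId, byte[] message);
    }
}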
5.3 Context Service
Inherently, an application running on a wireless sensor network
needs to capture information from the execution context, for
example, battery level, memory availability, bandwidth, location,
application specific information such as temperature and pressure,
etc. The middleware gets this information by interacting with
several heterogeneous sensors; for example, the level of energy
remaining on batteries can be obtained by executing an operating
system primitive, location can be acquired from various
communications technology such as GPS, infrared and RF. This
work does not take into consideration how the context sensing is
executed; it is assumed that each sensor provides an interface so
the middleware can use it to get the value of the resource of
interest.
The context service manages the heterogeneous sensors that
collect information from the environment. For each resource the
middleware manages, there is an adapter that interacts with the
physical sensor, processes its information thus obtaining the
information demanded by the application. Only resource adapters
that are necessary to the running application will be loaded to
avoid unnecessary spending of the node's scarce computing
power. Figure 4 shows an energy adapter interacting with an
energy sensor (an operating system primitive, in this example).
Figure 4. Context Service Architecture.
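For concreteness, a Java sketch of the adapter idea follows; the IAdapter name comes from the architecture diagrams, while the method names and the battery-reading call are placeholders for the operating system primitive mentioned above:

// Sketch: a resource adapter that wraps an OS-level battery query and exposes it
// to the middleware. readBatteryLevel() is a placeholder for the real OS primitive.
interface IAdapter {
    String resourceName();
    double currentValue();
}

final class EnergyAdapter implements IAdapter {
    @Override public String resourceName() { return "energy"; }

    @Override public double currentValue() {
        return readBatteryLevel();   // would invoke the node's operating system primitive
    }

    private double readBatteryLevel() {
        // Placeholder value; a real adapter would call into the node's OS here.
        return 0.87;
    }
}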
MIDDLEWARE ARCHITECTURE EXAMPLE
Figure 5 shows the sensor network communication architecture.
Each node in the field has the ability to collect data and send it to
the next sink node. The sink node can be a mobile node acting as
a data source or a fixed host computer (a PC). The user node
connects with the sink node through a conventional wireless LAN
(e.g. IEEE 802.11x).
Figure 5. Communication Architecture.
The services and components of SensorBus are distributed in
three distinct types of sensor nodes. The components DataBus,
Language, Channel and Transport are in the user node. The
Sinker component is in the sink node. The sensor nodes contains
the Channel and Transport components while filter component
and context service will only be loaded if the application requires
energy management and other resources such as memory and
bandwidth.
The development of an application using SensorBus consists in
coding the parts for the producer and consumer. The consumer
code runs in the user machine while the producer code runs in
sensor nodes. The minimum steps required for the use of
SensorBus are as follows:
1.
Create a new DataBus instance. A new transport
implementation is created by identifying a specific Sinker;
2.
Instantiate a producer or a consumer;
3.
Instantiate a "Channel" entity;
4.
Register the just created producer or consumer for the
channel; and
5.
The producer generates data items and places them into
Channel while the consumer finds and "crunches" those data.
SensorBus offers other functions that might be implemented, such
as listing the available channels, adding new channels and stop
receiving new channels.
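A Java sketch of the steps above, seen from the user (consumer) side, is given below; all class and method names are assumptions standing in for the real SensorBus API, which the paper describes only at the architectural level:

// Sketch of the setup steps listed above. DataBus and Channel are stand-ins
// with assumed constructors and methods.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class SensorBusUsageSketch {
    static final class Channel {
        final String name;
        final List<Consumer<Double>> consumers = new ArrayList<>();
        Channel(String name) { this.name = name; }
        void register(Consumer<Double> c) { consumers.add(c); }          // step 4
        void publish(double v) { consumers.forEach(c -> c.accept(v)); }  // producer side
    }
    static final class DataBus {
        DataBus(String sinkerAddress) { /* step 1: create transport bound to the Sinker */ }
        Channel createChannel(String name) { return new Channel(name); } // step 3
    }

    public static void main(String[] args) {
        DataBus bus = new DataBus("sink-node");                 // step 1
        Consumer<Double> consumer = v ->                        // step 2: instantiate a consumer
                System.out.println("Temperature = " + v);
        Channel channel = bus.createChannel("Temperature");     // step 3
        channel.register(consumer);                             // step 4
        channel.publish(27.3);                                  // step 5: producer places a data item
    }
}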
The producer sensor code has to be implemented before setup of
the network. If it is not possible to retrieve the sensor for
maintenance, the attributes of the data sent will always be the
same. To overcome this obstacle the constraint and query
languages are used to add new queries that had not been initially
foreseen. These queries are sent by the interested consumer
(client) in the form of messages.
Figure 6. Middleware architecture example.
Filters are implemented in the nodes as well. Soon after a producer is
instantiated, a filter is also instantiated and registered for a new
channel.
Figure 6 shows the components that may be active in a given
moment. Although most of the components are the same for a
given application, different settings may occur on the context
service. The figure shows only two adapters running at the same
time and interacting with its associated sensors (temperature and
battery). Distinct sensors can be used depending on the physical
measurement to be taken and the type of computing resource to be
managed.
IMPLEMENTATION ISSUES
The testbed setup for SensorBus evaluation consists of Intel-based
equipment equipped with 802.11b cards. The sink node is a
Centrino-based Dell Latitude notebook, and the sensor nodes are deployed on HP iPAQ handheld computers running the Linux operating system on an Intel XScale processor. The sensor nodes are placed at various locations in the Electrical Engineering Department building (about 40 m x 60 m) at the Federal University of Pará. Linksys Wireless LAN cards are used, working in the DCF
mode with a channel bandwidth of 11Mbps. In the building, there
is interference from IEEE 802.11 access points (AP) and other
electronic devices.
7.1 Working Prototype
For our working prototype, we have chosen the Java platform as
our implementation technology because of its broad installed base
and to ensure compatibility with most hardware platforms. The
KVM (Kilobyte Virtual Machine) [14] is being used due to its
freely available source-code and its designed targeting towards
small limited-resources devices similar to the sensor nodes of this
work. The SensorBus API and its constraint and query language
are being coded as Java classes. Due to issues regarding
efficiency, the code that will run in sensor nodes is being
implemented as native code.
An object serialization mechanism was implemented because
KVM does not support this facility. Serialization mechanism
converts an object and its state into a byte stream allowing this
object to be moved over a network or persisted in a local file
system. Object recovery is performed through another mechanism
called deserialization. Other Java technologies as J2SE Java 2
Standard Edition utilize this kind of facility to support the
encoding of objects into a stream of bytes while protecting private
and transient data. Serialization is used in distributed
programming through sockets or Remote Method Invocation
(RMI). We have coded a semiautomatic serialization in order to
store the state of the objects. To achieve this, we had to define a
series of new interfaces and classes.
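The text implies a hand-rolled (de)serialization mechanism of roughly the following shape; the interface and class names are assumptions, not the actual SensorBus classes:

// Sketch: a minimal hand-rolled serialization interface of the kind described,
// since KVM lacks java.io.Serializable. Names are illustrative.
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

interface KvmSerializable {
    void writeTo(DataOutputStream out) throws IOException;
    void readFrom(DataInputStream in) throws IOException;
}

final class Reading implements KvmSerializable {
    String attribute = "";
    double value;

    @Override public void writeTo(DataOutputStream out) throws IOException {
        out.writeUTF(attribute);
        out.writeDouble(value);
    }

    @Override public void readFrom(DataInputStream in) throws IOException {
        attribute = in.readUTF();
        value = in.readDouble();
    }

    byte[] toBytes() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeTo(new DataOutputStream(buf));
        return buf.toByteArray();   // byte stream suitable for sockets or local storage
    }
}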
One of the most critical problems with serialization is security
because when an object is converted in a byte stream any attacker
equipped with properly sniffer software can intercept and access
it; in this case even private attributes can be accessed without any
special technique. To tackle this issue, secure protocols as HTTPS
(Secure HyperText Transport Protocol) or serialization encryption
can be used.
7.2 Simulation
Having implemented this working prototype, this research now intends to carry out a performance evaluation using the well-known NS (Network Simulator) tool [15]. We plan to integrate
the SensorBus middleware with NS in a way that the sensor nodes
will plug into NS in order to provide real data for feeding the
simulator model. To do so, an execution environment will be
added to the simulator. This environment will run as a sole UNIX
process and will be plugged to the NS protocol stack through a
sensor agent.
The sensor agent is actually a NS agent responsible for
connecting an execution environment instance to the NS protocol
stack. The communication takes place through a pair of UDP
(User Datagram Protocol) sockets. Incoming packets are
encapsulated in NS packets and transmitted through the simulated
sensor network. Parameters that need to be known to the protocol
stack are placed in the header of the NS packet while the rest of
the information is added to the payload of the NS packet.
Similarly, outgoing packets are retrieved from the NS packets and
sent to the execution environment to be processed.
It will be necessary to provide a mechanism to synchronize the
execution and simulation environment since they run in distinct
times. Simulations will be performed using a NS Directed
Diffusion transport implementation [5] for wireless sensor
networks.
RELATED WORKS
In [16] an overall description of the challenges involving
middleware for wireless sensor networks is presented focusing on
the restraint aspects of these systems.
Cougar [17] and SINA [18], Sensor Information Networking
Architecture, provide a distributed database interface for wireless
sensor networks that use a query language to allow applications to
run monitoring functions. Cougar manages the power by
distributing the query among the sensor nodes to minimize energy
required in data gathering. SINA adds low-level mechanisms to
build hierarchical clustering of sensors aiming at efficient data
aggregation and also provides protocols which limit the
rebroadcast of similar information to neighbor's nodes.
AutoSec [19], Automatic Service Composition, manages resources
of the sensor networks by providing access control for
applications to ensure quality of service. This approach is very
similar to conventional middleware technology but the techniques
to collect resource information are suitable for wireless sensor
networks.
DSWare [20] provides service abstraction similar to AutoSec, but
instead of having a service provided by only one sensor node, the
service is supplied by a group of neighbor's nodes.
Smart Messages Project [21] proposes a distributed computing
model based on migration of executing units. Smart messages are
migratory units containing data and code. The goal of Smart
Messages Project is to develop a computing model and systems
architecture to Networks Embedded Systems (NES).
EnviroTrack [22] is a middleware for object-based distributed
systems that lifts the abstraction level for programming
environmental monitoring applications. It contains mechanisms
that abstract groups of sensors into logical objects.
Impala [23] exploits mobile code techniques to alter the
middleware's functionality running on a sensor node. The key to
energy-efficient management in Impala is that applications are as
modular and concise as possible so little changes demands fewer
energy resources.
MiLAN [24] was developed to allow dynamic network setup to meet the applications' performance requirements. The applications represent their requirements by means of specialized graphs, which incorporate changes due to application needs.
In [25], an adaptive middleware is proposed to explore the trade-off between resource spending and quality during information collection. The main goal is to decrease the transmissions among sensor nodes without compromising the overall result.
Every one of those middleware proposals is designed to make
efficient use of wireless sensor networks; they do not support free
exchange of the transport mechanism. More specifically, most of
those approaches are not capable of altering the routing protocol
to meet different application requirements.
CONCLUDING REMARKS
As was demonstrated, application development is closely related
to wireless sensor network design. Each communication
mechanism provided by a determined routing protocol is
application specific, e.g. it is designed to meet some application
specific requirement. We suggest that the utility of the middleware for wireless sensor networks is supported by decoupling the communication mechanism from the programming interfaces and also by the capability of using more than one communication mechanism to address the requirements of a larger number of applications. We have shown that SensorBus, a sensor
network middleware that we are developing to meet these goals,
can aid the development of different types of sensor network
applications.
REFERENCES
[1]
P. Rentala, R. Musunuri, S. Gandham and U. Saxena, Survey
on Sensor Networks, Technical Report, University of Texas,
Dept. of Computer Science, 2002.
[2]
G. -C. Roman, A. L. Murphy, and G. P. Picco, Software
Engineering for Mobility: A Roadmap. In The Future of
Software Engineering, 22nd Int. Conf. on Software
Engineering (ICSE2000), pages 243-258. ACM Press, May
2000.
[3]
G. J. Pottie and W. J. Kaiser, Embedding the internet: wireless
integrated network sensors, Communications of the ACM,
vol. 43, no. 5, pp. 51-58, May 2000.
[4]
W. Heinzelman, A. Chandrakasan and H. Balakrishnan,
Energy-efficient communication protocol for wireless micro
sensor networks. Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, pages 3005-3014, 2000.
[5]
C. Intanagonwiwat, R. Govindan, and D. Estrin, Directed
diffusion: A scalable and robust communication paradigm
for sensor networks. In Proceedings of the ACM/IEEE
International Conference on Mobile Computing and
Networking, pages 56-67, Boston, MA, USA, Aug. 2000.
[6]
S. Tilak, N. B. Abu-Ghazaleh and W. Heinzelman. A
Taxonomy of Wireless Micro-Sensor Network Models.
Mobile Computing and Communications Review, Volume 1,
Number 2, 2003.
[7]
Guia de Chefe, Brazilian Institute of Environment (IBAMA). http://www2.ibama.gov.br/unidades/guiadechefe/guia/t-1corpo.htm. December, 2004.
[8]
B. C. Arrue, A. Ollero and J. R. M. de Dios, An intelligent
system for false alarm reduction in infrared forest-fire
detection, IEEE Intelligent Systems, vol. 15, pp. 64-73, 2000.
[9]
G. Coulouris, J. Dollimore, and T. Kindberg. Distributed Systems: Concepts and Design. Third edition. Addison-Wesley, 2001.
[10]
SQL92 Database Language SQL July 30, 1992. http://www.cs.cmu.edu/afs/andrew.cmu.edu/usr/shadow/www/sql/sql1992.txt
[11]
J. Heidemann, F. Silva, C. Intanagonwiwat, R. Govindan, D.
Estrin and D. Ganesan. Building efficient wireless sensor
networks with low-level naming. In Proceedings of the
Symposium on Operating Systems Principles, pages 146-159,
Chateau Lake Louise, Banff, Alberta, Canada, October 2001.
[12]
E. Gamma, R. Helm, R. Johnson and J. Vlissides, Design
Patterns. Addison-Wesley, 1995.
[13]
J. Rumbaugh, I. Jacobson and G. Booch. The Unified
Modeling Language Reference Manual. Addison Wesley,
1998.
[14]
KVM The K Virtual Machine Specification. http://java.sun.com/products/kvm/, August 2004.
[15]
UCB/LBNL/VINT Network Simulator NS (Version 2).
http://www.isi.edu/nsnam/ns/, August 2004.
[16]
K. Römer, O. Kasten and F. Mattern. Middleware
Challenges for Wireless Sensor Networks. Mobile
Computing and Communications Review, volume 6, number
2, 2002.
[17]
P. Bonnet, J. Gehrke and P. Seshadri. Querying the Physical
World. IEEE Personal Communication, 7:10-15, October
2000.
[18]
C. Srisathapornphat, C. Jaikaeo and C. Shen. Sensor
Information Networking Architecture, International
Workshop on Pervasive Computing (IWPC00), Toronto
Canada, August 2000.
[19]
Q. Han and N. Venkatasubramanian. AutoSec: An integrated
middleware framework for dynamic service brokering. IEEE
Distributed Systems Online, 2(7), 2001.
[20]
S. Li, S. Son, and J. Stankovic. Event detection services
using data service middleware in distributed sensor
networks. In Proceedings of the 2nd International Workshop
on Information Processing in Sensor Networks, April 2003.
[21]
Smart Messages project. March, 2003.
http://discolab.rutgers.edu/sm.
[22]
T. Abdelzaher, B. Blum, Q. Cao, D. Evans, J. George, S.
George, T. He, L. Luo, S. Son, R. Stoleru, J. Stankovic and
A. Wood. EnviroTrack: Towards an Environmental
Computing Paradigm for Distributed Sensor Networks.
Technical report, Department of Computer Science,
University of Virginia, 2003.
[23]
T. Liu and M. Martonosi. Impala: A middleware system for
managing autonomic, parallel sensor systems. In ACM
SIGPLAN Symposium on Principles and Practice of Parallel
Programming (PPoPP03), June 2003
[24]
A. Murphy and W. Heinzelman, MiLan: Middleware linking
applications and networks, Technical Report TR-795,
University of Rochester, 2002.
[25]
X. Yu, K. Niyogi, S. Mehrotra and N. Venkatasubramanian,
Adaptive middleware for distributed sensor networks, IEEE
Distributed Systems Online, May 2003. | message service;publish-subscribe paradigm;message-oriented middleware model;environmental monitoring applications;application filters;context service;Middleware;constraint and query languages;design pattern;wireless sensor networks;application service;wireless sensor network |
177 | Seven Cardinal Properties of Sensor Network Broadcast Authentication | We investigate the design space of sensor network broadcast authentication. We show that prior approaches can be organized based on a taxonomy of seven fundamental properties, such that each approach can satisfy at most six of the seven properties. An empirical study of the design space reveals possibilities of new approaches, which we present in the following two new authentication protocols: RPT and LEA. Based on this taxonomy, we offer guidance in selecting the most appropriate protocol based on an application's desired properties. Finally, we pose the open challenge for the research community to devise a protocol simultaneously providing all seven properties. | INTRODUCTION
Due to the nature of wireless communication in sensor networks,
attackers can easily inject malicious data messages or alter the content
of legitimate messages during multihop forwarding. Sensor
network applications thus need to rely on authentication mechanisms
to ensure that data from a valid source was not altered in
transit. Authentication is thus arguably the most important security primitive in sensor network communication. Source authentication
ensures a receiver that the message originates from the
claimed sender, and data authentication ensures that the data from
that sender was unchanged (thus also providing message integrity).
When we use the term authentication we mean both source and
data authentication.
Broadcast authentication is a challenging problem. Furthermore,
it is of central importance as broadcasts are used in many applications
. For example, routing tree construction, network query, software
updates, time synchronization, and network management all
rely on broadcast. Without an efficient broadcast authentication algorithm
, the base station would have to resort to per-node unicast
messages, which does not scale to large networks. The practicality
of many secure sensor network applications thus hinges on the
presence of an efficient algorithm for broadcast authentication.
In point-to-point authentication, authentication can be achieved
through purely symmetric means: the sender and receiver would
share a secret key used to compute a cryptographic message authentication
code (MAC) over each message [15, 23]. When a message
with a valid MAC is received, the receiver can be assured that
the message originated from the sender. Researchers showed that
MACs can be efficiently implemented on resource-constrained sensor
network nodes [31], and find that computing a MAC function
requires on the order of 1ms on the computation-constrained Berkeley
mote platform [11, 14].
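For reference, such a symmetric MAC can be computed and checked with the standard Java crypto API; the sketch below uses HMAC-SHA1 (an arbitrary choice) and assumes the shared key has already been established, and is of course not the mote implementation measured above:

// Computing and verifying a symmetric MAC over a message with a shared key.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

final class MacExample {
    static byte[] computeMac(byte[] key, byte[] message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        return mac.doFinal(message);
    }

    // Receiver recomputes the MAC and compares in constant time.
    static boolean verify(byte[] key, byte[] message, byte[] tag) throws Exception {
        return MessageDigest.isEqual(computeMac(key, message), tag);
    }
}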
Authentication of broadcast messages in sensor networks is much
harder than point-to-point authentication [1]. The symmetric approach
used in point-to-point authentication is not secure in broadcast
settings, where receivers are mutually untrusted. If all nodes
share one secret key, any compromised receiver can forge messages
from the sender.
In fact, authenticated broadcast requires an asymmetric mechanism
[1]. The traditional approach for asymmetric mechanisms
is to use digital signatures, for example the RSA signature [34].
Unfortunately, asymmetric cryptographic mechanisms have high
computation, communication, and storage overhead, making their
usage on resource-constrained devices impractical for many applications
.
The property we need is asymmetry, and many approaches had
been suggested for sensor network broadcast authentication. However
, objectively comparing such approaches and selecting the most
appropriate one for a given application is a non-trivial process, especially
for an engineer not specialized in security. The goal of
this work is to provide guidance for sensor network broadcast authentication
by presenting a systematic investigation of the design
space. We arrive at a taxonomy of seven fundamental properties,
and present protocols that satisfy all but one property. The list of
the desired properties are:
1. Resistance against node compromise,
2. Low computation overhead,
3. Low communication overhead,
4. Robustness to packet loss,
5. Immediate authentication,
6. Messages sent at irregular times,
7. High message entropy.
If we remove any one of the above requirements, a viable protocol
exists. Table 1 gives an overview of the seven approaches for
addressing each case. We show that existing protocols, or small
modifications thereof, make up for five of the seven possible cases.
We also introduce novel approaches for addressing the final two
cases: the RPT protocol to authenticate messages sent at regular
times, and the LEA protocol to authenticate low-entropy messages.
Finally, we pose the open challenge to the research community to
design a broadcast authentication mechanism that satisfies all seven
properties.
Outline.
The paper is organized as follows. We introduce the
taxonomy of seven properties and discuss how current approaches
can be organized based on our taxonomy in Section 2. Section 3 describes
the TESLA broadcast authentication protocol and presents
several extensions to increase its efficiency and robustness to DoS
attacks. In Section 3.3, we introduce RPT, a novel protocol that
authenticates synchronous messages. In Section 4, we introduce
LEA, a novel protocol for efficient network broadcast authentication
for low-entropy messages. Implementation and evaluation is
discussed in Section 5. Finally, we present related work in Section
6 and our conclusions and future work in Section 7.
TAXONOMY OF EXISTING PROTOCOLS
In this section, we discuss the seven properties of broadcast authentication
and describe possible approaches if we were to leave
out one of the seven requirements.
Node Compromise.
Since sensor nodes are not equipped with
tamper-proof or tamper-resistant hardware, any physical attacker
would be able to physically compromise a node and obtain its cryptographic
keys [5]. Since it is unlikely that tamper-proof hardware
will be deployed on sensor motes in the near future, secure sensor
network protocols need to be resilient against compromised nodes.
However, if the nodes are deployed in a physically secured area
(such as an attended army base), or if the application itself is resilient
against malicious nodes, node compromise might not be an
issue.
If we assume no compromised nodes, all parties could maintain a
network-wide key that is used to generate and verify a single Message
Authentication Code (MAC) per message. If instead one can
assume a low number of compromised nodes, a simple approach
exists which uses a different key for each receiver and adds one
MAC per receiver to each message. Unfortunately, this approach
does not scale to large networks since a 10-byte MAC per receiver
would result in prohibitively large messages. To trade off communication
overhead with security, researchers propose a multi-MAC
approach [3]. In their scheme, the sender chooses some number of
random MAC keys, and distributes a subset of keys to each node.
Every message carries one MAC with each key (assuming 10 bytes
per MAC; an 80-bit MAC value achieves security comparable to a 1024-bit
RSA signature [18]), which adds a substantial overhead. If an attacker compromises
a node, it can only forge a subset of the MACs; thus, with
high probability, other nodes will be able to detect the forgery with
their subset of keys. A variant of this approach was used to prevent
malicious injection of messages in sensor networks [36, 37].
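As a concrete illustration of the multi-MAC idea, here is a minimal Python sketch (not the exact construction of [3]); the pool of 20 keys, the 5-key receiver subsets, and truncated HMAC-SHA-256 as the MAC are illustrative assumptions.

```python
import hashlib
import hmac
import random

MAC_LEN = 10  # 10-byte MACs, as assumed in the text

def mac(key: bytes, msg: bytes) -> bytes:
    """Truncated HMAC standing in for the MAC used on the motes."""
    return hmac.new(key, msg, hashlib.sha256).digest()[:MAC_LEN]

# The sender holds a pool of MAC keys (illustrative derivation, not real key generation).
pool = [hashlib.sha256(b"pool-key" + bytes([i])).digest() for i in range(20)]

# Each receiver is given a random subset of the pool (here: 5 keys).
receiver_keys = {i: pool[i] for i in random.sample(range(len(pool)), 5)}

def sender_authenticate(msg: bytes) -> list:
    """Attach one MAC per pool key to the message."""
    return [mac(k, msg) for k in pool]

def receiver_verify(msg: bytes, macs: list) -> bool:
    """A receiver checks only the MACs for the keys it holds; a forger who
    compromised one node knows only that node's key subset, so receivers
    holding other subsets detect the forgery with high probability."""
    return all(macs[i] == mac(k, msg) for i, k in receiver_keys.items())

msg = b"broadcast command"
assert receiver_verify(msg, sender_authenticate(msg))
```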
Computation Overhead. Sensor nodes have limited computation
resources, so an ideal protocol would have low computation overhead
for both sender and receiver. However, there exist scenarios
where computation might not be a particularly critical issue. For
example, it is conceivable that certain applications would only require
authenticated broadcasts for a small number of packets. In
such a case, the application engineer might be willing to allow for
a small number of intensive computations.
If we admit a high computation overhead, we can use digital signatures
. RSA today requires at least a 1024-bit modulus to achieve
a reasonable level of security, and a 2048-bit modulus for a high
level of security [18]. ECC can offer the same level of security
using 160-bit keys and 224-bit keys, respectively. Recent advancements
in ECC signature schemes on embedded processors allow signature
verification with 160-bit ECC keys in about 1 second
[10]. Although this represents a dramatic improvement over
earlier public key cryptographic schemes [2, 4, 21], signature verification
is still 3 orders of magnitude slower than MAC verification
, while signature generation is 4 orders of magnitude slower.
While we expect future sensor nodes to have more powerful processors
, the energy constraints dictated by the limited battery resources
will always favor the use of more efficient symmetric cryptographic
primitives.
Communication Overhead.
Energy is an extremely scarce resource
on sensor nodes, and as a result, heavily influences the design
of sensor network protocols. In particular, radio communication
consumes the most energy, and thus protocols with
high communication overhead are avoided if possible. However, in
some settings (e.g., powered nodes) energy consumption is not an
issue. Thus an authentication protocol that requires high communication
overhead would be acceptable.
If we admit a high communication overhead, we can leverage
efficient one-time signatures that are fast to compute, but require
on the order of 100-200 bytes per signature. Examples include the
Merkle-Winternitz (MW) signature which requires 230 bytes per
signature [25, 26, 35] (we describe the MW signature in detail in
Section 4.1), or the HORS signature, which requires around 100
bytes per signature [33]. The MW signature requires around 200
one-way function computations to verify a signature (which corresponds
to roughly 200 ms computation time on a sensor node),
while the HORS signature only requires 11 one-way function computations
. The disadvantage of the HORS signature is that the public
key is about 10 Kbytes (prohibitively large, since each public key of a
one-time signature can be used to authenticate only a single message),
whereas the public key for the MW
signature is only 10 bytes. Signature generation is very efficient
for both mechanisms, and can be reduced to a single hash function
computation assuming a lookup table for the cryptographic values.
We leverage the MW signature to construct the LEA broadcast authentication
mechanism, which we present in Section 4.
Message Reliability. Our fourth property is message reliability.
Reliable message delivery is the property of a network such that
valid messages are not dropped. Ultimately, message reliability is
an application-level issue - some applications require message reliability
, while others do not.
Desired property                     Approach if property is relaxed
Resistance to node compromise        Network-wide key
Low computation overhead             Digital signatures
Low communication overhead           One-time signatures
Robustness to packet loss            HORS + chaining of public keys
Immediate authentication             TESLA
Messages sent at irregular times     RPT, described in Section 3.3
High message entropy                 LEA, described in Section 4.2

Table 1: Overview of desired properties of broadcast authentication and approaches. The left column presents the desired property, and the right column presents the approach that achieves all properties but relaxes the property in its left column. The text describes each approach in more detail.
If we have perfect message reliability, we can achieve efficient
and immediate authentication by using the HORS signature in a
special construction that combines multiple public keys [28]. In
this construction, a public key is still 10 Kbytes, but a single public
key can be used to authenticate almost arbitrarily many messages,
as the public values are incrementally updated as signed messages
are sent. The communication and computation costs are the same
as for the HORS signature: 1 ms for signature generation, 11 ms
for signature verification, and 100 bytes for the signature. Note that
in such a scheme, an attacker can start forging HORS signatures if
many packets are dropped.
Authentication Delay.
Depending on the application, authentication
delay may influence the design of the sensor network protocol
. For time-critical messages such as fire alarms, the receiver
would most likely need to authenticate the message immediately.
However, authentication delay is typically acceptable for non-time-critical
messages.
If we admit an authentication delay and assume that the receivers
are loosely time synchronized with the sender, the TESLA broadcast
authentication protocol only adds a 10 byte MAC and an optional
10 byte key to each message [31]. We review the TESLA
protocol in detail in Section 3.1. To achieve a low computation
overhead in the case of infrequent messages sent at unpredictable
times, we need to extend the TESLA protocol to enable fast authentication
of the keys in the one-way key chain. In Section 3.2
we present a more efficient key chain construction that enables efficient
authentication in this case. Simultaneously, our approach
protects TESLA against denial-of-service attacks by sending bogus
key chain values.
Synchronous Messages.
Some applications send synchronous
messages at regular and predictable times. For example, a key revocation
list might be sent to the entire network everyday at noon.
We extend the TESLA protocol to provide efficient and immediate
authentication for synchronous messages sent at regular and
predictable times. We name the protocol RPT (Regular-Predictable
Tesla), and we present its details in Section 3.3.
Message Entropy. So far, all schemes we describe authenticate
unpredictable messages with high entropy. However, in practice,
many protocols might only communicate with low-entropy messages
. For example, in many applications, there are only a handful
of valid commands that a base station can send to a sensor node.
Therefore, these command packets could be considered as low-entropy
messages.
If we can assure a low upper bound on message entropy, we can
leverage one-time signatures in constructions that provide message
recovery, where the message is not hashed but directly encoded in
the signature. We describe our new LEA protocol in Section 4.
For messages with merely a single bit of entropy, we could employ
the following optimization using two hash chains. One hash
chain would correspond to messages of '1', while another would
correspond to messages of '0'. The sender first sends the last value
of both chains to the receivers in an authenticated manner (e.g., using
one-time signatures or digital signatures). Next, whenever the
sender wishes to send a '0', it would reveal the next value in the
hash chain corresponding to '0'. The same is done for the hash
chain corresponding to '1'. The receiver needs to keep state of the
most recent value it received for each hash chain. Consequently, the
receiver can easily verify the authenticity of new values by hashing
them and comparing them against the most recent value of each
hash chain.
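A minimal sketch of this two-chain scheme for single-bit messages follows; the chain length, the seeds, and SHA-256 as the hash function are illustrative assumptions.

```python
import hashlib

H = lambda x: hashlib.sha256(x).digest()

def make_chain(seed: bytes, length: int):
    """Return [v_0, v_1, ..., v_length] with v_i = H(v_{i-1})."""
    chain = [seed]
    for _ in range(length):
        chain.append(H(chain[-1]))
    return chain

N = 100                                  # number of one-bit messages we can authenticate
chains = {0: make_chain(b"seed-zero", N), 1: make_chain(b"seed-one", N)}

# Bootstrapping: the last value of each chain is distributed authentically.
receiver_state = {0: chains[0][-1], 1: chains[1][-1]}
sender_pos = {0: N, 1: N}                # index of the last value revealed so far

def send_bit(bit: int) -> bytes:
    sender_pos[bit] -= 1                 # reveal the next (earlier) chain value
    return chains[bit][sender_pos[bit]]

def receive(bit: int, value: bytes) -> bool:
    # Hash forward until we hit the most recent authentic value for this chain.
    v, last = value, receiver_state[bit]
    for _ in range(N):
        v = H(v)
        if v == last:
            receiver_state[bit] = value  # accept and remember the new chain head
            return True
    return False

assert receive(1, send_bit(1))
assert receive(0, send_bit(0))
assert not receive(1, b"\x00" * 32)      # a forged value is rejected
```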
BROADCAST AUTHENTICATION WITH THE
TESLA PROTOCOL
In this section, we first present a brief overview of the TESLA
protocol [29], the recommended broadcast authentication protocol
if immediate authentication is not required. We improve the
TESLA broadcast authentication protocol to provide efficient authentication
for infrequent messages sent at unpredictable times
(Section 3.2). In Section 3.3, we describe RPT, further modification
of TESLA that provides immediate authentication for synchronous
messages sent at regular and predictable times.
3.1
TESLA Overview
The TESLA protocol provides efficient broadcast authentication
over the Internet which can scale to millions of users, tolerate packet
loss, and support real time applications [30]. Currently, TESLA is
in the process of being standardized in the MSEC working group
of the IETF for multicast authentication.
TESLA has been adapted for broadcast authentication in sensor
networks; the resulting protocol is called the µTESLA broadcast
authentication protocol [30, 31]. µTESLA is used to secure routing
information [17], data aggregation messages [12, 32], etc.
We now overview the TESLA protocol; a detailed description
is available in our earlier paper [31]. Broadcast authentication requires
a source of asymmetry, such that the receivers can only verify
the authentication information, but not generate valid authentication
information. TESLA uses time for asymmetry. TESLA
assumes that receivers are all loosely time synchronized with the
sender up to some maximum time synchronization error; within
that error, all parties agree
on the current time. Recent research in sensor network time synchronization
protocols has made significant progress, resulting in
time synchronization accuracy in the range of microseconds [6, 7], which is
much more accurate than the loose time synchronization required
by TESLA. By using only symmetric cryptographic primitives,
TESLA is very efficient and provides practical solutions for resource-constrained
sensor networks. Figure 1 shows an example of TESLA
authentication, and here is a sketch of the basic approach:
Figure 1: At the top of the figure is the one-way key chain (using the one-way function F); time advances left-to-right. At the bottom of the figure are the messages M_j, ..., M_{j+6} that the sender sends in each time interval. For each message, the sender uses the current time interval key to compute the MAC of the message.
The sender splits up the time into time intervals of uniform duration. Next, the sender forms a one-way chain of self-authenticating keys by selecting key K_N of interval N at random, and by repeatedly applying a one-way hash function F to derive earlier keys. A cryptographic hash function, such as SHA-1 [27], offers the required properties. The sender assigns keys sequentially to time intervals (one key per time interval). The one-way chain is used in the reverse order of generation, so any key of a time interval can be used to derive keys of previous time intervals. For example, assuming a disclosure delay of 2 time intervals, key K_i will be used to compute MACs of broadcast messages sent during time interval i, but disclosed during time interval i + 2. The sender defines a disclosure delay for keys, usually on the order of a few time intervals, and publishes the keys after the disclosure time.

The sender attaches a MAC to each message, computed over the data, using the key for the current time interval. Along with the message, the sender also sends the most recent key that it can disclose. In the example of Figure 1, the sender uses key K_{i+1} to compute the MAC of message M_{j+3}, and publishes key K_{i-1}, assuming a key disclosure delay of two time intervals.

Each receiver that receives the message performs the following operation. It knows the schedule for disclosing keys and, since the clocks are loosely synchronized, can check that the key used to compute the MAC is still secret by determining that the sender could not have yet reached the time interval for disclosing it. If the MAC key is still secret, then the receiver buffers the message. In the example of Figure 1, when the receiver gets message M_{j+3}, it needs to verify that the sender did not yet publish key K_{i+1}, by using the loose time synchronization and the maximum time synchronization error. If the receiver is certain that the sender did not yet reach interval i + 3, it knows that key K_{i+1} is still secret, and it can buffer the packet for later verification.

Each receiver also checks that the disclosed key is correct (using self-authentication and previously released keys) and then checks the correctness of the MAC of buffered messages that were sent in the time interval of the disclosed key. Assuming the receiver knows the authentic key K_{i-2}, it can verify the authenticity of key K_{i-1} by checking that F(K_{i-1}) equals K_{i-2}. If K_{i-1} is authentic, the receiver can verify the authenticity of buffered packets sent during time interval i - 1, since they were authenticated using key K_{i-1} to compute the MAC.
One-way chains have the property that if intermediate keys are
lost, they can be recomputed using later keys. So, even if some
disclosed keys are lost due to packet loss or jamming attacks, a
receiver can recover the key from keys disclosed later and check
the authenticity of earlier messages.
Along with each message M_i, the sender broadcasts the TESLA authentication information. The broadcast channel may be lossy, but a sender that retransmits a message would need to do so with an updated MAC key. Despite loss, each receiver can authenticate all the messages it receives.
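The following minimal Python sketch captures the flow just described (key chain generation, MAC with the current interval key, delayed key disclosure, the receiver's safety check, and buffered verification); the interval length, the disclosure delay of 2, and HMAC-SHA-256 standing in for both F and the MAC are illustrative assumptions.

```python
import hashlib
import hmac
import time

F = lambda k: hashlib.sha256(b"chain" + k).digest()           # one-way function for the key chain
MAC = lambda k, m: hmac.new(k, m, hashlib.sha256).digest()

N, INTERVAL, DELAY = 1000, 1.0, 2     # chain length, interval duration (s), disclosure delay (intervals)
T0 = time.time()

# Sender: pick K[N] and derive earlier keys; K[i] is the key of time interval i.
K = [None] * (N + 1)
K[N] = hashlib.sha256(b"illustrative random seed").digest()
for i in range(N - 1, -1, -1):
    K[i] = F(K[i + 1])

def interval(t):
    return int((t - T0) // INTERVAL)

def sender_broadcast(msg, t):
    i = interval(t)
    disclosed = K[i - DELAY] if i >= DELAY else None          # newest key that may be revealed
    return msg, MAC(K[i], msg), i, disclosed

class Receiver:
    def __init__(self, commitment, sync_error=0.1):
        self.latest = (0, commitment)    # last authentic (interval, key); K[0] is the commitment
        self.err = sync_error
        self.buffer = []                 # packets waiting for their key to be disclosed

    def on_packet(self, msg, tag, i, disclosed, t):
        # Safety condition: the sender cannot yet have reached the disclosure interval of K[i].
        if interval(t + self.err) < i + DELAY:
            self.buffer.append((msg, tag, i))
        if disclosed is not None:
            self.on_key(i - DELAY, disclosed)

    def on_key(self, i, key):
        j, known = self.latest
        v = key
        for _ in range(i - j):           # self-authentication: F^(i-j)(K[i]) must equal K[j]
            v = F(v)
        if i > j and v == known:
            self.latest = (i, key)
            # Return the buffered messages of interval i whose MACs verify under the disclosed key.
            return [m for m, tag, k in self.buffer if k == i and MAC(key, m) == tag]
        return []
```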
3.2
Reducing Verification Overhead of
TESLA
Even though TESLA provides a viable solution for broadcast
authentication in sensor networks, many challenges still remain.
We describe the remaining challenges below and propose extensions
and new approaches to address these challenges.
Some applications broadcast messages infrequently at unpredictable
times and the receivers may need to authenticate messages
immediately. For example, a fire alarm event is infrequent and
needs to be quickly distributed and authenticated. Unfortunately,
when messages are infrequent, due to the one-way chain approach
to verify the authenticity of keys, a receiver may need to compute
a long chain of hash values in order to authenticate the key which
could take several tens of seconds for verification. Such verification
delays the message authentication significantly and may consume
significant computation and energy resources. This approach also
introduces a Denial-of-Service (DoS) attack: an attacker sends a
bogus key to a receiver, and the receiver spends several thousands
of one-way function computations (and several seconds) to finally
notice that the sent key was incorrect.
One approach is to periodically release TESLA keys and hence
the work for verification of an infrequent message would be distributed
over time. However, this approach wastes energy for periodic
broadcast of TESLA keys. In the same vein, a sender can
publish several keys in a packet to reduce the effect of DoS attacks
by requiring a receiver to perform a small number of one-way
function computations to incrementally authenticate each key of the
one-way chain. An advantage of this approach is that it makes the
DoS attack described above less attractive to an attacker, as a receiver
would need to follow the one-way chain for a short interval
only to detect a bogus key.
Another approach to counteract the slow and expensive verification
problem is to use a Merkle hash tree [24] instead of a one-way
chain to authenticate TESLA keys. This approach has been suggested
in another context [13]. For N keys, the tree has height
d = log_2(N), and along with each message the sender sends d values
to verify the key. Despite the logarithmic communication cost,
this is still too large for most sensor networks: consider a network
where we switch to a different hash tree every day, and we need a
key resolution of 1 second. The 86,400 keys that we need in one
day require a tree of height 17. Assuming a hash output of 10 bytes,
the sender would consequently need to add 170 bytes to each message
for authentication (17 nodes at 10 bytes each). This is far too
much for most sensor networks, where nodes typically communicate
with messages shorter than 100 bytes. Splitting the load up into
two messages is not a viable approach, because of the usually high
packet loss rates in sensor networks. The receiver would only need
to compute O(log(N)) operations for verification (17 hash function computations in our example), which requires around 17 ms on current sensor nodes.
Figure 2: Hash tree constructed over one-way chains of TESLA keys. (The leaves v_0, ..., v_7 are the heads of short one-way chains of keys k_i; internal nodes are hashes of their children, e.g., v_{0-7} = F(v_{0-3} || v_{4-7}).)

To reduce the bandwidth overhead, we design a different approach that achieves a lower message size at the cost of higher verification computation. Our approach is to combine one-way chains with hash trees. Consider the structure that Figure 2 shows. We construct a hash tree over short one-way chains. If each one-way chain has a length of k, the verification cost is expected to be k/2 + log_2(N/k) (it is at most k + log_2(N/k)), and the communication cost is log_2(N/k) hash values. For a given upper bound on the verification time, we can thus minimize the communication overhead. Consider an upper bound on the verification time of approximately 500 ms. We can set k = 2^9 = 512; the hash tree will then have 8 levels, requiring 80 bytes per packet, making this an attractive approach for many applications.
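A minimal sketch of this hybrid structure follows: short one-way chains whose heads form the leaves of a hash tree. The parameters (chains of length 8, 16 chains) and SHA-256 are illustrative assumptions, and the assignment of keys to time intervals is omitted; verifying a disclosed key costs at most roughly k hashes along its chain plus log_2(N/k) hashes up the tree, as in the analysis above.

```python
import hashlib

H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()

k, num_chains = 8, 16                 # chain length and number of chains (N = k * num_chains keys)

# Each chain: chains[c][0] is derived from a seed; the chain head chains[c][k-1] becomes a tree leaf.
chains = []
for c in range(num_chains):
    chain = [H(b"seed", bytes([c]))]  # illustrative per-chain seed
    for _ in range(k - 1):
        chain.append(H(chain[-1]))
    chains.append(chain)

# Build the hash tree over the chain heads; the root is distributed authentically once.
level = [c[-1] for c in chains]
tree = [level]
while len(level) > 1:
    level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    tree.append(level)
root = tree[-1][0]

def auth_path(c):
    """Sibling hashes needed to verify chain head c against the root."""
    path, idx = [], c
    for lvl in tree[:-1]:
        path.append(lvl[idx ^ 1])
        idx //= 2
    return path

def verify_key(key, pos, c, path):
    """Check a disclosed key: walk up its chain to the head, then up the tree to the root."""
    v = key
    for _ in range(k - 1 - pos):      # pos is the key's index within chain c
        v = H(v)
    idx = c
    for sib in path:
        v = H(v, sib) if idx % 2 == 0 else H(sib, v)
        idx //= 2
    return v == root

assert verify_key(chains[3][2], 2, 3, auth_path(3))
```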
An alternative approach would be to construct a hash tree over
the one-way key chain, where every kth key will be a leaf node
of the hash tree (for example, in Figure 2, the value k_0 would be
derived from the previous leaf node: k_0 = F(v_1)). The advantage
of this approach is that a sender would not need to send the hash
tree values along with a message, as a value can be authenticated
by following the one-way chain to the last known value. However,
if the sender did not send out any message during an extended time
period, that authentication would be computationally expensive and
thus the sender can choose to also send the hash tree nodes along
for fast verification. This approach would also prevent DoS attacks
since the verification is very efficient.
Figure 3: Authentication of one message in the RPT protocol. Message M'_i = MAC_{K_i}(M_i), and message M''_i = (M_i, K_i).
3.3
RPT: Authenticating Messages Sent at Regular
and Predictable Times
As described in our taxonomy in Section 2, one additional property
in the design space of broadcast authentication is to authenticate
asynchronous messages sent at irregular and unpredictable
times. All protocols described so far can achieve this property.
However, if we were to remove this requirement, new possible approaches
exist that can only authenticate messages sent at regular
and predictable times, yet satisfy all of the other cardinal properties
defined in our taxonomy. In this section, we introduce our design
of one such protocol called RPT, a modification of the TESLA
protocol.
In practice, many protocols send synchronous messages at regular
and predictable times. The plaintext of these messages is often
known by the sender a priori. In particular, messages containing
meta-data are especially well-suited for this type of communication
. For example, a base-station often performs key update or time
re-synchronization at a preset time of day. In these examples, the
sender knows exactly what message needs to be sent at a particular
time, but the protocol dictates that such messages cannot be sent
until a pre-specified time.
Consider an application that broadcasts a message every day at
noon to all nodes. If we use standard TESLA with one key per
day, it would take one day to authenticate the message, since the
receivers would need to wait for the disclosed key one day later.
On the other hand, if we use many keys, for example, one key per
second, it would require 86,400 keys per day (not using the optimization
we presented in the previous section), and a sensor node
would require an expected time of 43 seconds to verify the authenticity
of the key. Hence, if messages are sent at very regular time
intervals, we can streamline TESLA to immediately authenticate
these messages.
The RPT protocol (Regular-Predictable TESLA) achieves immediate
authentication for messages sent at regular and predictable
times. Consider a message that needs to be sent at times T_i = T_0 + i·D. The sender creates a one-way key chain, and assigns one key to each time interval of duration D. We assume that the sender knows the content of the message M_i to be broadcast at time T_i by time T_i - δ, where δ is the maximum network broadcast propagation delay plus the maximum time synchronization error. At time T_i - δ, the sender broadcasts message MAC_{K_i}(M_i), and at time T_i the sender broadcasts (M_i, K_i)
. As soon as the receiver receives the first message, it needs to verify the safety condition that key K_i is still secret, given its current time and the maximum time synchronization error. When receiving the second message, the receiver first verifies the key K_i. If the key is correct, it verifies the MAC, and if the MAC is correct, it is assured that M_i is authentic. Note that this approach does not exhibit any authentication delay, as the receiver can authenticate M_i immediately after reception.
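A minimal sketch of the two-phase RPT broadcast follows; the check of local time and synchronization error is abstracted into a boolean, and the chain length and the hash/MAC choices are illustrative assumptions.

```python
import hashlib
import hmac

F = lambda k: hashlib.sha256(b"F" + k).digest()
MAC = lambda k, m: hmac.new(k, m, hashlib.sha256).digest()

# One key per scheduled broadcast time T_i = T_0 + i*D (one-way key chain as in TESLA).
N = 365
K = [None] * (N + 1)
K[N] = hashlib.sha256(b"illustrative seed").digest()
for i in range(N - 1, -1, -1):
    K[i] = F(K[i + 1])
commitment = K[0]                                 # distributed authentically at bootstrap

def sender_phase1(i, msg):      # broadcast at time T_i - delta
    return MAC(K[i], msg)

def sender_phase2(i, msg):      # broadcast at time T_i
    return msg, K[i]

class Receiver:
    def __init__(self, commitment):
        self.known = (0, commitment)              # last authentic (index, key)
        self.pending = {}                         # i -> MAC received in phase 1

    def on_phase1(self, i, tag, key_i_still_secret: bool):
        # Safety condition: given local time and max sync error, K_i must still be secret.
        if key_i_still_secret:
            self.pending[i] = tag

    def on_phase2(self, i, msg, key):
        j, known = self.known
        v = key
        for _ in range(i - j):                    # authenticate K_i against the last known key
            v = F(v)
        if v == known and self.pending.get(i) == MAC(key, msg):
            self.known = (i, key)
            return msg                            # authentic, with no authentication delay
        return None

r = Receiver(commitment)
r.on_phase1(1, sender_phase1(1, b"key revocation list"), key_i_still_secret=True)
assert r.on_phase2(1, *sender_phase2(1, b"key revocation list")) == b"key revocation list"
```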
At first glance, it may appear that RPT is susceptible to a denial-of
-broadcast attack, where an attacker sends a large number of
forged MACs around the time the legitimate one is sent out. This problem
has been studied and addressed in previous work [16]. However
, it is not easy to evaluate how well this works in practice.
BROADCAST AUTHENTICATION WITH ONE-TIME SIGNATURES
Another way to achieve asymmetric authentication is through the
use of one-time signatures. A one-time signature is much faster to
generate and verify than general purpose signatures, but the private
key associated with the signature can be used to sign only a single
message; otherwise, the security degrades and an attacker could
forge signatures. Unlike TESLA, time synchronization is not necessary
and authentication is immediate. Moreover, one-time signatures
achieve non-repudiation in addition to authentication, which
enables a node to buffer a message and retransmit it later. The receiver
of the retransmitted message can still authenticate the message
.
One-time signatures are advantageous in applications with infrequent
messages at unpredictable times, as they do not add computation
to the receiver based upon the time at which the message
is received. This makes them resilient to many forms of DoS attacks
. We now present an overview of one-time signatures, and
then present our LEA broadcast authentication protocol for authentication
of low-entropy messages in Section 4.2.
4.1
One-Time Signatures Overview
The Merkle-Winternitz signature was first drafted by Merkle [25,
26], and was later also used by Even, Goldreich, and Micali [8],
and more recently also by Rohatgi for efficient stream authentication
[35]. We briefly describe the basic principle of the Merkle-Winternitz
signature.
A Merkle-Winternitz signature relies on efficient one-way functions
to construct a DAG (directed acyclic graph) to encode a signature
. Each edge between two vertices (v_1 → v_2) in the graph represents an application of the one-way function, where the value of the end node is the result of the one-way function applied to the beginning node (v_2 = F(v_1), where F represents the one-way function). End nodes with multiple incoming edges take on the value
of the hash of the concatenation of predecessor nodes. The initial
values of the graph represent the private key, and the final value
represents the public key.
To achieve a secure one-time signature, the signature encoding must
have the property that an attacker would have to invert at least one
one-way function to sign any other value (i.e., forge a signature).
We now discuss an example of a signature graph and signature
encoding. Figure 4(a) depicts the one-time signature. A one-way
hash chain of length 4 can be used to encode the values 0-3. For this signature chain, we will use the convention that the first value s_3 in the chain encodes the value 3, the second 2, etc. The signer derives the value s_3 from a randomly generated private key K_priv by using a Pseudo-Random Function (PRF), e.g., s_3 = PRF_{K_priv}(0). (We use a block cipher to implement the PRF efficiently; a block cipher is a good PRF as long as we do not use it to compute more than O(2^n) operations with the same key, where n is the block size in bits, and since we only perform a few operations, the block cipher is a secure and efficient PRF.) To prevent signature forgery (as we will explain later), the sender also creates a checksum chain c_0 ... c_3, deriving value c_0 also from the private key, e.g., c_0 = PRF_{K_priv}(1), and again using the one-way function to derive the other values, e.g., c_1 = F(c_0). The application of the one-way function on s_0 and c_3 forms the public key: K_pub = F(s_0 || c_3). To sign value i, where 0 <= i <= 3, the signer uses values s_i and c_i as the signature. To verify the signature s_i and c_i, the receiver follows the one-way chains and recomputes the public key as follows, with F^0(x) = x:

K_pub = F(F^i(s_i) || F^{3-i}(c_i)).

A signature is correct if the recomputed value matches the public key. For example, consider a signature on value 2: s_2 and c_2. To verify, the receiver checks that K_pub = F(F(F(s_2)) || F(c_2)).
An attacker who wishes to forge a signature is forced to invert at
least one one-way function (since the indices of the checksum chain
run in direction opposite to the signature chain). Assuming the one-way
function is secure, an attacker cannot invert the function to
forge a signature, hence, the signature is secure. In practice, we can
use a secure cryptographic hash function for our one-way function,
but for increased efficiency we use a block cipher in hash mode, for
example the commonly used Matyas-Meyer-Oseas mode [22].
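A minimal sketch of this two-chain signature on the values 0-3 follows; SHA-256 stands in for both F and the PRF, which is an illustrative simplification.

```python
import hashlib

F = lambda x: hashlib.sha256(b"F" + x).digest()
PRF = lambda key, i: hashlib.sha256(key + bytes([i])).digest()   # illustrative PRF

K_priv = b"illustrative private key"

# Signature chain s_3 -> s_2 -> s_1 -> s_0 and checksum chain c_0 -> c_1 -> c_2 -> c_3.
s = [None] * 4
s[3] = PRF(K_priv, 0)
for i in (2, 1, 0):
    s[i] = F(s[i + 1])
c = [PRF(K_priv, 1)]
for _ in range(3):
    c.append(F(c[-1]))

K_pub = F(s[0] + c[3])

def sign(v):                       # v in 0..3
    return s[v], c[v]

def verify(v, sig):
    sv, cv = sig
    for _ in range(v):             # F^v(s_v) should equal s_0
        sv = F(sv)
    for _ in range(3 - v):         # F^(3-v)(c_v) should equal c_3
        cv = F(cv)
    return F(sv + cv) == K_pub

assert verify(2, sign(2))
assert not verify(3, sign(2))      # upgrading a signature on 2 to 3 would require inverting F
```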
Using two chains achieves a secure one-time signature, but does
not scale well to sign a large number of bits. If we use two chains,
a signature on 32 bits would require a chain 2^32 values long, which
has a very high overhead to generate and verify. Instead, if more
than one chain is used, each chain can encode some number of bits
of the signature. For example, one could encode an 8 bit number by
using four chains of length 4 to encode two bits in each chain. The
public key is derived from the last value on all the chains. However,
in this scheme, we would still need an additional 4 chains of length
4 to encode the values in the opposite direction to prevent forgeries.
The Merkle-Winternitz signature reduces the number of checksum
chains, in that the redundant checksum chains do not encode
the actual value, but instead encode the sum of the values on the signature
chains. As explained in detail by Merkle [25,26], the checksum
chain encodes the sum of all values in the signature chains.
Assuming k signature chains that sign m bits each, the maximum sum would be k(2^m - 1); thus, the checksum chains would encode log_2(k(2^m - 1)) bits, providing a significant savings. This approach still ensures that an attacker would have to invert at least one one-way function to forge a signature.

Figure 4: Illustration of the Merkle-Winternitz one-time signature. (a) Simple one-time signature (one signature chain s_0...s_3 and one checksum chain c_0...c_3) that can sign 2 bits. (b) Merkle-Winternitz one-time signature with four signature chains and two checksum chains; this construction can sign 8 bits.
Using signature chains with 4 values, a signature on n bits will then require n/2 signature chains. Since each chain encodes up to the value 3, the checksum chain at most needs to encode the value (n/2) · 3 as the total sum; thus, the checksum chains need to sign log_2((n/2) · 3) bits. If we also use checksum chains with 4 values, each checksum chain can again sign 2 bits, and we need log_2((n/2) · 3)/2 checksum chains. Figure 4(b) shows an example of such a signature for signing 8 bits. Since each of the four signature chains can at most encode the number 3, the total sum is at most 4 · 3 = 12. Thus we only need 2 additional checksum chains to encode the 4 bits. Again, the indices in the checksum chain run opposite to the indices in the signature chain, to ensure that an attacker would have to invert at least one one-way function to forge a different signature.
For the specific case of signing 80 bits, researchers suggest using chains of length 16 to encode 4 bits per chain [35]. Thus, we need 20 = 80/4 signature chains, and the checksum chains would need to encode at most the values 0...300 (= 20 · 15), which requires 9 bits and hence 3 checksum chains (where the third chain only requires 2 values to sign a single bit). (We could also use 2 chains with 18 values each to encode the checksum, as 18^2 = 324, saving one checksum chain.)
We now compute the computation overhead of signature verification. On average, signature verification requires following half of each signature chain, which requires 8 one-way function computations per chain. In the case of signing 80 bits with 20 signature chains, this results in 160 one-way function computations. On average, the checksum chains require another 16 one-way function computations, adding up to a total of 176 computations.
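The sizing arithmetic above can be packaged into a small helper; the 10-byte (or 8-byte) hash values are the assumptions used in the text, and the helper reproduces the 20 + 3 chains and 230-byte signature of the 80-bit example.

```python
import math

def mw_size(msg_bits, bits_per_chain=4, value_bytes=10):
    """Number of chains and signature size for a Merkle-Winternitz signature."""
    chain_len = 2 ** bits_per_chain                       # values per chain
    sig_chains = math.ceil(msg_bits / bits_per_chain)
    max_sum = sig_chains * (chain_len - 1)                # largest possible digit sum
    checksum_bits = math.ceil(math.log2(max_sum + 1))
    checksum_chains = math.ceil(checksum_bits / bits_per_chain)
    return sig_chains, checksum_chains, (sig_chains + checksum_chains) * value_bytes

print(mw_size(80))                     # (20, 3, 230): the 80-bit example above
print(mw_size(60, value_bytes=8))      # (15, 2, 136): matches the 2^4 column of Table 2
```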
4.2
LEA: Authentication of Low-Entropy Messages
If messages have high entropy, the one-time signature is still quite large. For example, if messages have 80 bits or more of entropy, the signer can hash the message before signing it. Using the construction we discussed in Section 4.1, signing an 80-bit hash value would yield a 230-byte signature (or 184 bytes if we assume 8-byte hash chain values). Unfortunately, this is still too large for current sensor networks.
However, for messages with lower entropy, one-time signatures
can be very effective. We thus present the LEA (Low-Entropy
Authentication) protocol. The LEA protocol is based on Merkle-Winternitz one-time signatures: it periodically pre-distributes one-time public keys to receivers, and the sender uses the corresponding private keys to sign messages.
The Merkle-Winternitz one-time signature is efficient for signing small numbers of bits. For example, assuming chains of length 16, to sign a message of n bits, we would need n/4 signature chains. Thus we need to encode log_2((n/4) · 15) bits in the checksum chains, hence requiring log_2((n/4) · 15)/4 additional checksum chains.
For signing 8 bits, the signature would require 2 signature chains and 2 additional checksum chains to encode the sum ranging from 0...30, which would require 32 bytes assuming 8-byte values. Since communication cost is at a premium, we could instead use a single checksum chain long enough to encode the checksum values 0...30, thus saving 8 bytes. Hence, the total size of the authentication information would be 24 bytes.
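A minimal sketch of signing a one-byte command in this style (two signature chains of 16 values plus one checksum chain covering the sums 0-30) follows; SHA-256 stands in for F and the PRF, and the exact chain layout is an illustrative choice rather than the LEA wire format.

```python
import hashlib

F = lambda x: hashlib.sha256(b"F" + x).digest()
PRF = lambda key, i: hashlib.sha256(key + bytes([i])).digest()   # illustrative PRF

K_priv = b"illustrative LEA private key"

def sig_chain(seed, length):
    """chain[v] encodes digit v; chain[0] is the public end (v hashes away from it)."""
    chain = [None] * length
    chain[length - 1] = seed
    for v in range(length - 2, -1, -1):
        chain[v] = F(chain[v + 1])
    return chain

def chk_chain(seed, length):
    """chain[v] encodes checksum v; chain[length-1] is the public end."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(F(chain[-1]))
    return chain

s1 = sig_chain(PRF(K_priv, 0), 16)      # low 4 bits of the command
s2 = sig_chain(PRF(K_priv, 1), 16)      # high 4 bits of the command
c = chk_chain(PRF(K_priv, 2), 31)       # checksum = sum of the two digits, 0..30
K_pub = F(s1[0] + s2[0] + c[30])

def sign(cmd):                          # cmd in 0..255, e.g. a command identifier
    d1, d2 = cmd & 0xF, cmd >> 4
    return s1[d1], s2[d2], c[d1 + d2]   # one value revealed per chain

def verify(cmd, sig):
    d1, d2 = cmd & 0xF, cmd >> 4
    v1, v2, vc = sig
    for _ in range(d1):
        v1 = F(v1)
    for _ in range(d2):
        v2 = F(v2)
    for _ in range(30 - d1 - d2):       # checksum indices run opposite to the digit indices
        vc = F(vc)
    return F(v1 + v2 + vc) == K_pub

assert verify(0x5A, sign(0x5A))
```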
Since the size of the signature depends on the number of bits
being signed, this method is preferable for situations where the
message is a simple time critical command, such as an alarm, or
a preset command. For example, to sign 128 different commands,
we would only need one signature chain with 16 values, one signature
chain with 8 values, and one checksum chain with 22 values.
Assuming 8 byte values, the total signature length is 24 bytes.
In some applications it may be possible to use a lossy compression
algorithm to compress and quantize the data for the signature.
This would allow the message to contain uncompressed data, but
an attacker would only be able to change the message to a small
degree. This could be helpful for commands that set the sensitivity
of a motion sensor, where the administrator is willing to allow
a small error in the sensitivity that is actually received on the
device.
One of the main challenges of using one-time signatures is to distribute
one authentic public key for each signature to the receivers.
Without an authentic public key, an attacker could inject its own
public key and one-time signatures. This problem is easier than
the original problem of general broadcast authentication because
the public keys can be distributed far ahead of time at a predictable
time.
There are several methods by which this may be achieved. The
simplest would be to distribute a set of k public keys to each receiver
at bootstrap and these keys would be usable for the first k
messages. If the number of messages sent over the devices' lifetime is
small compared to k, then the devices will not have to be re-bootstrapped.
In general, the number of total messages is unknown. Thus, we
design a mechanism to efficiently replenish authentic public keys
after their use. We leverage the RPT protocol for this purpose.
Nodes store a number of authentic public keys. The sender uses up
one one-time signature (or one private key) per message it broadcasts
. With this approach, all receivers can immediately authenticate
the message. Periodically, the sender sends an RPT message at
a regular time with new one-time public keys to replenish the used-up
public keys at receivers. Since each public key is only 10 bytes
long, this is an efficient approach.
4.3
Chaining Merkle-Winternitz Public Keys
The above scheme illustrates an effective way to use TESLA in
conjunction with Merkle-Winternitz signatures to provide fast and
efficient authentication. The only drawback of using the Merkle-Winternitz
one-time signature is that the public key can only be
used once. Therefore, when a TESLA-authenticated message is
sent at the beginning of the day authenticating k Merkle-Winternitz
public keys, the sender and receiver are limited to only being able
to authenticate k messages that day. The tradeoff is that choosing a
large k uses up receiver memory resources.
To circumvent this problem, rather than sending a fixed number
of messages per interval, the public keys can be chained together
in such a way that if more messages are needed they can be sent to
the receiver and authenticated immediately.
In this approach, the sender generates a large number of public
and private keys for one-time signatures, labeling the public keys
P_0, P_1, ..., P_n. These public keys are then combined, such that verification of one signature will automatically authenticate the public key of the next signature:

V_0 = P_0
V_1 = H(P_1 || V_0)
...
V_i = H(P_i || V_{i-1})
...
V_n = H(P_n || V_{n-1})
In this approach, the sender only needs to send the value V_n authenticated with TESLA. The sender subsequently uses the private key that corresponds to the public key P_n to sign a message, and sends value V_{n-1} along with the message. From the signature, the receiver can compute the public key P_n, and together with the value V_{n-1} the receiver can authenticate the public key and V_{n-1} based on the trusted value V_n. Now that the receiver trusts value V_{n-1}, the next public key P_{n-1} can be authenticated in the same way.
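A minimal sketch of the chaining and backward authentication just described; the hash and the stand-in public keys are illustrative assumptions.

```python
import hashlib

H = lambda *parts: hashlib.sha256(b"".join(parts)).digest()

# Illustrative stand-ins for Merkle-Winternitz public keys P_0..P_n.
n = 5
P = [H(b"one-time public key", bytes([i])) for i in range(n + 1)]

# Chain the public keys: V_0 = P_0, V_i = H(P_i || V_{i-1}).
V = [P[0]]
for i in range(1, n + 1):
    V.append(H(P[i], V[i - 1]))

trusted = V[n]          # V_n is distributed authentically (e.g., via TESLA/RPT)

def authenticate(P_i, V_prev):
    """Given the public key P_i recomputed from a signature and the accompanying
    value V_{i-1}, check them against the currently trusted V_i; on success,
    V_{i-1} becomes the new trusted value."""
    global trusted
    if H(P_i, V_prev) == trusted:
        trusted = V_prev
        return True
    return False

# Messages are signed with key n first, then n-1, and so on.
assert authenticate(P[n], V[n - 1])
assert authenticate(P[n - 1], V[n - 2])
```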
This approach has the drawback that the message to be authenticated also needs to carry the value V_{n-1}, increasing the message size by 8-10 bytes, and that message loss prevents later messages from being authenticated. We propose to use a hybrid approach: send k public keys authenticated with RPT each day, along with one value V_n. If the sender needs to send more than k authenticated messages, it can then use the chained public keys after the first k messages.
Figure 5: Power consumed (uA) vs. chain length: power consumption for an MSP430 sensor node receiving and validating Merkle-Winternitz signatures on 32-bit, 60-bit, and 97-bit values, for varying signature chain lengths.
IMPLEMENTATION AND PERFORMANCE EVALUATION
Figure 5 illustrates the amount of energy required for using a Merkle-Winternitz signature for signing 32 bits, 60 bits, and 97 bits. In this example, the sensor is a 16-bit TI MSP430 processor running at 1 MHz, which can compute an 8-byte hash in approximately 5 ms using RC5. This processor uses 0.28 uA per ms of computation, and 3.8 uA per byte received. Shown is the overall power consumption for chain lengths of 2^2, 2^3, 2^4, 2^5, 2^6, and 2^7. Table 2 shows the power consumption, validation times, and communication overhead for signing 60 bits with varying chain lengths.
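Assuming the per-signature energy cost is simply 0.28 uA per ms of validation time plus 3.8 uA per byte received, the figures in Table 2 below can be reproduced directly; a quick sanity check:

```python
# Hedged sanity check of the energy model behind Table 2.
columns = {                      # chain length -> (authentication time in ms, overhead in bytes)
    4: (332.5, 272), 8: (442.5, 184), 16: (680.0, 136),
    32: (1042.5, 112), 64: (1762.5, 96), 128: (2960.0, 88),
}

for chain_len, (ms, nbytes) in columns.items():
    energy = 0.28 * ms + 3.8 * nbytes
    print(f"chain length {chain_len:3d}: {energy:.1f} uA")   # e.g. 707.2 uA for length 16
```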
We implemented the PRF using the Helix stream cipher [9].
Unlike RC5, this cipher is not patented. It also features an efficient
MAC construction which we use in our implementation of
TESLA. The PRF is computed by using the input to the PRF as
the key in encryption mode, and using the keystream as the output
of the PRF. In this implementation, it takes about 8 ms to compute
an 8-byte PRF. Since signature generation requires a comparable amount of computation to verification, generation of a 64-bit signature takes about 1.2 seconds and verification takes about 1 second in our un-optimized implementation. However, in this scheme, the public keys are generated in advance, so the sender must compute twice as many hashes, because it must recompute the hashes when it wishes to actually compute a signature instead of simply generating the public key. This still makes it feasible for a sensor node to act as the base station in our implementation, but generating a large number of public keys becomes costly. The implementation
is about 4k in size, 2k for the Helix assembly code, and 2k for the
Merkle-Winternitz code (with code for both generation and validation
).
Chain length        2^2      2^3      2^4      2^5      2^6      2^7
Power-cons (uA)     1126.7   823.1    707.2    717.5    858.3    1163.2
Auth-time (ms)      332.5    442.5    680.0    1042.5   1762.5   2960.0
Overhead (bytes)    272      184      136      112      96       88

Table 2: Efficiency for signing a 60-bit value using the Merkle-Winternitz one-time signature.

RELATED WORK
The TESLA protocol is a viable mechanism for broadcast authentication in sensor networks [31]. Unfortunately, this approach introduces an authentication delay and thus does not provide immediate authentication of messages, which is necessary in applications with real-time requirements. Moreover, the TESLA approach has some denial-of-service vulnerabilities, which we address in this paper.
Liu and Ning subsequently improved the efficiency of bootstrapping
new clients, using multiple levels of one-way key chains [20].
This work also discussed the DoS attack explained in Section 3.2.
Liu et al. also outline a potential approach to authenticate commitment
messages with Merkle hash trees [19].
Several researchers have investigated the use of asymmetric cryptographic
techniques in sensor networks. Unfortunately, the overhead
is too high to warrant use of such techniques for per-packet
broadcast authentication. Such schemes were discussed in Section
2 in the context of protocols with high computation overhead.
CONCLUSION
We have studied viable solutions for efficient broadcast
authentication in sensor networks. This problem is challenging
due to the highly constrained nature of the devices and the unpredictable
nature of communication in many environments. Since the
authentication of broadcast messages is one of the most important
security properties in sensor networks, we need to study viable approaches
for a variety of settings. We establish a set of properties
of broadcast authentication: security against compromised nodes,
low computation and communication cost, immediate authentication
(with no receiver delay), authentication of unpredictable messages
with high entropy, and robustness to packet loss. We present
a viable protocol for each case where we relax one property, and
pose the open challenge to find a protocol that satisfies all properties.
REFERENCES
[1] D. Boneh, G. Durfee, and M. Franklin. Lower bounds for
multicast message authentication. In Advances in Cryptology
-- EUROCRYPT '01, pages 434-450, 2001.
[2] M. Brown, D. Cheung, D. Hankerson, J. Lopez Hernandez,
M. Kirkup, and A. Menezes. PGP in constrained wireless
devices. In Proceedings of USENIX Security Symposium,
August 2000.
[3] R. Canetti, J. Garay, G. Itkis, D. Micciancio, M. Naor, and
B. Pinkas. Multicast security: A taxonomy and some
efficient constructions. In INFOCOM'99, pages 708-716,
March 1999.
[4] J. Deng, R. Han, and S. Mishra. A performance evaluation of
intrusion-tolerant routing in wireless sensor networks. In
Proceedings of IEEE Workshop on Information Processing in
Sensor Networks (IPSN), April 2003.
[5] J. Deng, C. Hartung, R. Han, and S. Mishra. A practical
study of transitory master key establishment for wireless
sensor networks. In Proceedings of the First IEEE/CreateNet
Conference on Security and Privacy for Emerging Areas in
Communication Networks (SecureComm), 2005.
[6] Jeremy Elson, Lewis Girod, and Deborah Estrin.
Fine-grained network time synchronization using reference
broadcasts. In Proceedings of Symposium on Operating
Systems Design and Implementation (OSDI), December
2002.
[7] Jeremy Elson and Kay Romer. Wireless sensor networks: A
new regime for time synchronization. In Proceedings of
Workshop on Hot Topics In Networks (HotNets-I), October
2002.
[8] S. Even, O. Goldreich, and S. Micali. On-line/off-line digital
signatures. In Advances in Cryptology -- CRYPTO '89,
volume 435, pages 263-277, 1990.
[9] Niels Ferguson, Doug Whiting, Bruce Schneier, John Kelsey,
Stefan Lucks, and Tadayoshi Kohno. Helix: Fast encryption
and authentication in a single cryptographic primitive. In
Proceedings of the International Workshop on Fast Software
Encryption (FSE 2003), 2003.
[10] V. Gupta, M. Millard, S. Fung, Y. Zhu, N. Gura, H. Eberle,
and S. C. Shantz. Sizzle: A standards-based end-to-end
security architecture for the embedded internet. In
Proceedings of the Third IEEE International Conference on
Pervasive Computing and Communication (PerCom), 2005.
[11] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar,
David E. Culler, and Kristofer S. J. Pister. System
architecture directions for networked sensors. In Proceedings
of Architectural Support for Programming Languages and
Operating Systems (ASPLOS IX), pages 93-104, 2000.
[12] Lingxuan Hu and David Evans. Secure aggregation for
wireless networks. In Workshop on Security and Assurance
in Ad hoc Networks, January 2003.
[13] Yih-Chun Hu, Adrian Perrig, and David B. Johnson. Packet
leashes: A defense against wormhole attacks in wireless
networks. In Proceedings of IEEE INFOCOM, April 2003.
[14] J. M. Kahn, R. H. Katz, and K. S. Pister. Mobile networking
for smart dust. In Proceedings of ACM/IEEE Conference on
Mobile Computing and Networking (MobiCom), August
1999.
[15] C. Karlof, N. Sastry, and D. Wagner. TinySec: A link layer
security architecture for wireless sensor networks. In ACM
SenSys, November 2004.
[16] Chris Karlof, Naveen Sastry, Yaping Li, Adrian Perrig, and
J. D. Tygar. Distillation codes and applications to dos
resistant multicast authentication. In Proceedings of the
Symposium on Network and Distributed Systems Security
(NDSS), November 2004.
[17] Chris Karlof and David Wagner. Secure routing in wireless
sensor networks: Attacks and countermeasures. In
Proceedings of First IEEE International Workshop on Sensor
Network Protocols and Applications, May 2003.
[18] A. Lenstra and E. Verheul. Selecting cryptographic key sizes.
Journal of Cryptology, 14(4):255-293, 2001.
[19] D. Liu, P. Ning, S. Zhu, and S. Jajodia. Practical broadcast
authentication in sensor networks. In Proceedings of The 2nd
Annual International Conference on Mobile and Ubiquitous
Systems: Networking and Services, November 2005.
[20] Donggang Liu and Peng Ning. Efficient distribution of key
chain commitments for broadcast authentication in
distributed sensor networks. In Proceedings of Network and
Distributed System Security Symposium (NDSS), pages
263-276, February 2003.
[21] David Malan, Matt Welsh, and Michael Smith. A public-key
infrastructure for key distribution in TinyOS based on elliptic
curve cryptography. In Proceedings of IEEE International
Conference on Sensor and Ad hoc Communications and
Networks (SECON), October 2004.
[22] S. Matyas, C. Meyer, and J. Oseas. Generating strong
one-way functions with cryptographic algorithm. IBM
Technical Disclosure Bulletin, 27:5658-5659, 1985.
[23] A. Menezes, P. van Oorschot, and S. Vanstone. Handbook of
Applied Cryptography. CRC Press, 1997.
[24] R. Merkle. Protocols for public key cryptosystems. In
Proceedings of the IEEE Symposium on Research in Security
and Privacy, pages 122-134, April 1980.
[25] R. Merkle. A digital signature based on a conventional
encryption function. In Advances in Cryptology -- CRYPTO
'87, pages 369-378, 1988.
[26] R. Merkle. A certified digital signature. In Advances in
Cryptology -- CRYPTO '89, pages 218-238, 1990.
[27] National Institute of Standards and Technology (NIST),
Computer Systems Laboratory. Secure Hash Standard.
Federal Information Processing Standards Publication (FIPS
PUB) 180-2, February 2004.
[28] A. Perrig. The BiBa one-time signature and broadcast
authentication protocol. In Proceedings of ACM Conference
on Computer and Communications Security (CCS), pages
28-37, November 2001.
[29] A. Perrig, R. Canetti, J. D. Tygar, and D. Song. Efficient
authentication and signature of multicast streams over lossy
channels. In Proceedings of the IEEE Symposium on
Research in Security and Privacy, pages 56-73, May 2000.
[30] A. Perrig, R. Canetti, J. D. Tygar, and D. Song. The TESLA
broadcast authentication protocol. RSA CryptoBytes,
5(Summer), 2002.
[31] Adrian Perrig, Robert Szewczyk, Victor Wen, David Culler,
and J. D. Tygar. SPINS: Security protocols for sensor
networks. In Proceedings of ACM Conference on Mobile
Computing and Networks (MobiCom), pages 189-199, 2001.
[32] Bartosz Przydatek, Dawn Song, and Adrian Perrig. SIA:
Secure information aggregation in sensor networks. In
Proceedings of the First ACM International Conference on
Embedded Networked Sensor Systems (SenSys 2003), pages
255-265, November 2003.
[33] Leonid Reyzin and Natan Reyzin. Better than BiBa: Short
one-time signatures with fast signing and verifying. In
Proceedings of Conference on Information Security and
Privacy (ACISP), July 2002.
[34] R. Rivest, A. Shamir, and L. Adleman. A method for
obtaining digital signatures and public-key cryptosystems.
Communications of the ACM, 21(2):120-126, February
1978.
[35] P. Rohatgi. A compact and fast hybrid signature scheme for
multicast packet. In Proceedings of the 6th ACM Conference
on Computer and Communications Security, pages 93-100.
ACM Press, November 1999.
[36] F. Ye, H. Luo, S. Lu, and L. Zhang. Statistical en-route
filtering of injected false data in sensor networks. In
Proceedings of IEEE INFOCOM, March 2004.
[37] S. Zhu, S. Setia, S. Jajodia, and P. Ning. An interleaved
hop-by-hop authentication scheme for filtering false data in
sensor networks. In Proceedings of IEEE Symposium on
Security and Privacy, pages 259271, May 2004.
| Sensor Network;Broadcast Authentication;Taxonomy |
178 | Significance of gene ranking for classification of microarray samples | Many methods for classification and gene selection with microarray data have been developed. These methods usually give a ranking of genes. Evaluating the statistical significance of the gene ranking is important for understanding the results and for further biological investigations, but this question has not been well addressed for machine learning methods in existing works. Here, we address this problem by formulating it in the framework of hypothesis testing and propose a solution based on resampling. The proposed r-test methods convert gene ranking results into position p-values to evaluate the significance of genes. The methods are tested on three real microarray data sets and three simulation data sets with support vector machines as the method of classification and gene selection. The obtained position p-values help to determine the number of genes to be selected and enable scientists to analyze selection results by sophisticated multivariate methods under the same statistical inference paradigm as for simple hypothesis testing methods. | INTRODUCTION
An important application of DNA microarray technologies
in functional genomics is to classify samples
according to their gene expression profiles, e.g., to classify
cancer versus normal samples or to classify different types
or subtypes of cancer. Selecting genes that are informative
for the classification is one key issue for understanding the
biology behind the classification and an important step
toward discovering those genes responsible for the distinction
. For this purpose, researchers have applied a number of
test statistics or discriminant criteria to find genes that are
differentially expressed between the investigated classes
[1], [2], [3], [4], [5], [6], [7]. This category of gene selection
methods is usually referred to as the filtering method since
the gene selection step usually plays the role of filtering the
genes before doing classification with some other methods.
Another category of methods is the so-called wrapper
methods, which use the classification performance itself as
the criterion for selecting the genes and genes are usually
selected in a recursive fashion [8], [9], [10], [11], [12]. A
representative method of this category is SVM-RFE based
on support vector machines (SVM), which uses linear SVM
to classify the samples and ranks the contribution of the
genes in the classifier by their squared weights [10].
All these selection methods produce rankings of the
genes. When a test statistic, such as the t-test, F-test, or
bootstrap test, is used as the criterion, the ranking is
attached by p-values derived from the null distribution of
the test statistic, which reflects the probability of a gene
showing the observed difference between the classes simply
due to chance. Such p-values give biologists a clear
understanding of the information that the genes probably
contain. The availability of the p-value makes it possible to
investigate the microarray data under the solid framework
of statistical inference and many theoretical works have
been built based on the extension of the concept of p-value,
such as the false discovery rate (FDR) study [13].
Existing gene selection methods that come with p-values
are of the filtering category and are all univariate methods.
To consider possible combinatorial effects of genes, most
wrapper methods adopt more sophisticated multivariate
machine learning strategies such as SVMs and neural
networks. These have been shown in many experiments
to be more powerful in terms of classification accuracy.
However, for gene selection, the gene rankings produced
with these methods do not come with a measure of
statistical significance. The ranking is only a relative order
of genes according to their relevance to the classifier. There
is no clear evaluation of a gene's contribution to the
classification. For example, if a gene is ranked 50th
according to its weight in the SVM classifier, it is only
safe to say that this gene is perhaps more informative
than the gene ranked at 51st. However, there is no way to
describe how significant it is and there is no ground to
compare the information it contains with a gene also
ranked as 50th by the same method in another experiment
. This nature of relative ranking makes it hard to
interpret and further explore the gene selection results
achieved with such advanced machine learning methods.
For example, it is usually difficult to decide on the proper
number of genes to be selected in a specific study with
such machine learning methods. Most existing works
usually select a subgroup of genes with some heuristi-cally
decided numbers or thresholds [6], [8], [10]. The
advanced estimation techniques, such as FDR, based on
significance measures do not apply for such methods.
Evaluating the statistical significance of the detected
signal is the central idea in the paradigm of statistical
inference from experimental data. There should be an
equivalent study on those machine-learning-based multivariate
gene selection methods which produce ranks
according to their own criteria. Strategies such as permutation
can be utilized to assess the significance of the
classification accuracy, but they do not measure the
significance of the selected genes directly. Surprisingly, this
question has not been addressed by the statistics or
bioinformatics community in existing literature. We therefore
propose that the question be asked in this way: For an
observed ranking of genes by a certain method, what is the
probability that a gene is ranked at or above the observed
position due to chance (by the same method) if the gene is,
in fact, not informative to the classification? ("Being
informative" is in the sense of the criteria defined or
implied by the classification and ranking method. It may
have different meanings for different methods.) We call this
problem the significance of gene ranking or feature ranking.
We raise this problem in this paper and describe our
strategy toward a solution. The problem is discussed in the
context of microarray classification of cancer samples, but
the philosophy and methodology is not restricted to this
scenario.
THE SIGNIFICANCE OF RANKING PROBLEM
Suppose a microarray data set contains m cases X = {x_i; i = 1, ..., m}. Each case is characterized by a vector of the expression values of n genes, x_i = (x_{i1}, x_{i2}, ..., x_{in})^T ∈ R^n, i = 1, ..., m. Each gene is a vector of its expression values across the cases, g_j = (x_{1j}, x_{2j}, ..., x_{mj})^T, and we denote the set of all genes as G = {g_j; j = 1, ..., n}. Each case has a label y_i ∈ {-1, 1}, i = 1, ..., m, indicating the class it belongs to among the studied two classes, e.g., normal versus cancer, or two subtypes of a cancer, etc. Among the n genes, usually some are informative to the classification and some are not, but we do not know which genes are informative and which are not. For the convenience of description, we denote the set of informative genes as I_G and that of the uninformative genes as U_G. To simplify the problem, we assume that

I_G ∩ U_G = ∅ and I_G ∪ U_G = G.    (1)
The goal is to build a classifier that can predict the classes ŷ_i of the cases from x_i and, at the same time, to identify the genes that most likely belong to I_G. The former task is called classification and the latter one is called gene selection. In the current study, we assume that there has already been a ranking method RM which produces a ranking position for each gene according to some criterion assessing the gene's relevance with the classification:

r_j = rank(g_j | {x_i, y_i; i = 1, ..., m}), j = 1, ..., n,    (2)
and we do not distinguish the specific types of the RM. The
ranking is obtained based on the samples; thus, r_j is a random variable. The significance-of-ranking problem is to calculate the following probability:

p(r_j) ≜ P(rank(g_j) ≤ r_j | g_j ∈ U_G),    (3)

i.e., given a gene is uninformative to the classification (according to RM's criterion), what is the probability that it is ranked at or above the observed ranking position by the ranking method? We call this probability the p-value of a gene's ranking position or, simply, the position p-value.
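A minimal sketch of turning an observed ranking position into a position p-value, given a sample of ranks of uninformative genes (how such a sample is obtained is the subject of the next section); the empirical tail estimator with a +1 correction is an illustrative choice.

```python
import numpy as np

def position_p_value(observed_rank, null_ranks):
    """Estimate P(rank <= observed_rank | gene is uninformative) from an
    empirical sample of ranks of (putative) uninformative genes."""
    null_ranks = np.asarray(null_ranks)
    # The +1 correction keeps the estimate away from zero for small samples.
    return (np.sum(null_ranks <= observed_rank) + 1) / (len(null_ranks) + 1)

# Illustrative use: 1000 null ranks drawn uniformly from positions 1..n give
# p roughly equal to observed_rank / n, as expected when no gene is informative.
rng = np.random.default_rng(0)
null = rng.integers(1, 5001, size=1000)
print(position_p_value(50, null))      # roughly 0.01
```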
This significance-of-ranking problem is distinct from
existing statistics for testing differentially expressed genes
in several aspects. It applies to more complicated multivariate
classification and gene selection methods. Even
when it is applied to gene ranking methods based on
univariate hypothesis tests like the t-test, the position p-value is
different from the t-test p-value by definition. The t-test
p-value of a gene is calculated from the expression values of
this gene in the two sample sets by comparing with the
assumed null distribution model when the gene is not
differentially expressed in the two classes. The position
p-value of a single gene, however, is defined on its context,
in the sense that its value depends not only on the
expression of this gene in the samples, but also on other
genes in the same data set. A gene with the same expression
values may have different position p-values in different
data sets. The null distributions of ranks of uninformative
genes are different in different data sets and, therefore, the
foremost challenge for solving the problem is that the null
distribution has to be estimated from the specific data set
under investigation.
THE R-TEST SCHEME
The significance-of-ranking problem is formulated as a
hypothesis testing problem. The null hypothesis is that the gene is not informative, i.e., $g_j \in U_G$; the alternative hypothesis is that $g_j \in I_G$ (the gene is informative), since we have assumption (1). The statistic used to test the hypothesis is the ranking position. As in standard hypothesis testing, the key to solving the problem is to obtain the distribution of the statistic under the null hypothesis, i.e., the distribution of the ranks of uninformative genes:
$P(r \mid g \in U_G). \quad (4)$
For the extreme case when $I_G = \emptyset$ (all the genes are uninformative) and the ranking method is not biased, it is
obvious that the null distribution is uniform. In a real
microarray data set, however, usually some genes are
informative and some are not, thus the uniform null
distribution is not applicable. The null rank distribution in
a practical investigation depends on many factors, including
the separability of the two classes, the underlying
number of informative genes, the power of the ranking
method, the sample size, etc. The characteristics of these
factors are not well understood in either statistics or biology
and, therefore, we have to estimate an empirical null
distribution from the data set itself.
We propose to tackle this problem in two steps. First, we
identify a set of putative uninformative genes (PUGs) which
are a subset of $U_G$. This is possible in practice because, although we do not know $U_G$, discovering a number of genes that are irrelevant to the classification is usually not hard in most microarray data sets. We denote the identified subset as $U_G'$. The next step is to estimate the null distribution of ranks
with the ranking positions of these PUGs.
From the original data set, we resample L new data sets
and apply the ranking method on each of them, producing,
for each gene, $L$ ranks $r_j^l$, $l = 1, \ldots, L$. In our implementation, we randomly resample half of the cases in the original
data set each time. Other resampling schemes such as
bootstrapping can also be used to obtain similar results
according to our experiments (data not shown). Since,
usually, the size of $U_G$ is much larger than that of $I_G$ (i.e.,
most genes are uninformative), if a gene tends to always be
ranked at the bottom in the L rankings, it is very likely that
the gene is an uninformative one. Thus, we define $\bar{r}_j$ as the average position of gene $j$ in the $L$ rankings,
$\bar{r}_j = \frac{1}{L} \sum_{l=1}^{L} r_j^l, \quad j = 1, \ldots, n, \quad (5)$
and select the bottom $k$ genes with the largest $\bar{r}_j$ as the PUGs to form $U_G'$, where $k$ is a preset number. We rewrite $U_G'$ as $U_G'^k$ when we need to emphasize the role of $k$ in this procedure. We assume that $U_G'^k$ is a random sample of $U_G$ and that $r_j^l$, $l = 1, \ldots, L$, for $g_j \in U_G'^k$ is a random sample from the underlying null distribution of the ranks of uninformative genes. Thus, we have $k \times L$ observations of the null distribution of ranks, from which we estimate the null distribution using a histogram. More sophisticated nonparametric methods can be adopted to fit the distribution if necessary. We denote the estimated null distribution as
$\hat{P}(r \mid g \in U_G) = P_{\mathrm{histogram}}(r_j^l \mid g_j \in U_G'^k). \quad (6)$
With this estimated null distribution, the calculation of the position p-value is straightforward: for gene $i$ with ranking position $r_i$,
$\hat{p}(r_i) = \hat{P}(r \le r_i \mid g \in U_G) = P_{\mathrm{histogram}}(r_j^l \le r_i \mid g_j \in U_G'^k). \quad (7)$
Applying this on all the genes, we convert the ranking list to
a list of position p-values reflecting the significance of the
genes' being informative to the classification.
This whole procedure for estimating the p-value of a
ranking is illustrated in Fig. 1. We name this scheme the
r-test and call the position p-value thus calculated the r-test
p-value.
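To make the scheme concrete, the following is a minimal Python sketch of the pr-test; it is not the authors' implementation. Here `rank_genes(X, y)` is a placeholder for any ranking method RM that returns 1-based ranking positions, `L` and `k` are the parameters described above, and an empirical cumulative count stands in for the histogram estimate of (6).

```python
import numpy as np

def resample_ranks(X, y, rank_genes, L=100, rng=None):
    """Rank all genes on L half-sized resamples of the cases (step (a) of Fig. 1)."""
    rng = np.random.default_rng(rng)
    m = X.shape[0]
    ranks = []
    for _ in range(L):
        idx = rng.choice(m, size=m // 2, replace=False)   # resample half of the cases
        ranks.append(rank_genes(X[idx], y[idx]))
    return np.asarray(ranks)                              # shape (L, n), ranking positions

def pr_test(X, y, rank_genes, L=100, k=200, rng=None):
    """Primary r-test: position p-values of Eq. (7) from a PUG-based null distribution."""
    ranks = resample_ranks(X, y, rank_genes, L, rng)
    r_bar = ranks.mean(axis=0)                 # average position of each gene, Eq. (5)
    pugs = np.argsort(r_bar)[-k:]              # bottom k genes are the PUGs
    null_ranks = ranks[:, pugs].ravel()        # k * L observations of the null rank distribution
    r_obs = rank_genes(X, y)                   # one ranking on the whole data set
    p = np.array([(null_ranks <= r).mean() for r in r_obs])   # Eq. (7)
    return p, r_bar, ranks
```

Any other resampling scheme (e.g., bootstrapping) could be substituted in `resample_ranks` without changing the rest of the sketch.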
COMPENSATION FOR BIAS IN THE ESTIMATED PUGs
One important problem with the r-test scheme is the
selection of the PUGs. Ideally, the ranks used to select
PUGs and the ranks used to estimate null distribution
should be independent. However, this is impractical in that
Fig. 1. The diagram showing the principle of r-test (pr-test). (a) A number (L) of new data sets is resampled from the original data set. A ranking is
generated for each new data set with the ranking method, resulting in a total of L rankings. (b) The genes are ordered by their average positions of
the L rankings. The horizontal axis is genes by this order and the vertical axis is ranking position in the resample experiments. For each gene, its
ranking positions in the L experiments are drawn in a box plot, with a short dash in the middle showing the median. (c) From the bottom (rightmost) of
the ordered gene list, k genes are selected as putative uninformative genes or PUGs. The box-plots of the ranks of the k PUGs are illustrated in this
enlarged image. The null distribution of ranks of uninformative genes is to be estimated from these ranks. (d) An example null distribution estimated
from PUGs. For each gene on the microarray, its actual ranking is compared with the null distribution to calculate the position p-value of the gene
being noninformative. For mr-test and tr-test, the average position in the L rankings is used in the calculation of the p*-value. In tr-test, the PUGs are
not selected from the bottom of the ranking, but rather from the genes with the largest t-test p-values.
there is actually only one data set available. In our strategy,
the same ranks are used to estimate both PUGs and the
distribution of their ranks. This is an unplanned test in the
sense that the PUGs are defined after the ranks are observed
[14]. The PUGs in $U_G'^k$ are not an unbiased estimate of $U_G$. In the extreme case when $k$ is small, uninformative genes that are ranked higher are underrepresented and the ranks of $U_G'^k$ might represent only a tail of the ranks of $U_G$ on the
right. If this happens, it will cause an overoptimistic
estimation of the r-test p-values and result in more genes
being claimed significant. Therefore, we propose two
modified strategies to compensate for the possible bias.
4.1
Modified r-Test with Average Ranks
In (7), the position p-value is calculated by comparing the rank $r_i$ of gene $g_i$ obtained from the whole data set with the estimated null distribution. Intuitively, when the sample size is small, one single ranking based on a small sample set can have a large variance, especially when all or most of the genes are uninformative. We propose replacing the rank $r_i$ by the $\bar{r}_i$ defined in (5), i.e., to use the average position of gene $g_i$ in the $L$ resampling experiments as the estimate of the true rank, and to calculate the position p-value with this estimated rank rather than the single observation of the rank:
$p^*(\bar{r}_i) = \hat{P}(r \le \bar{r}_i \mid g \in U_G) = P_{\mathrm{histogram}}(r_j^l \le \bar{r}_i \mid g_j \in U_G'^k). \quad (8)$
The estimated null distribution is the distribution of single
ranks of putative uninformative genes, but the $\bar{r}_i$ to be compared to it is the averaged rank, so (8) is no longer a p-value in the strict sense. Therefore, we name it the p*-value instead and call this modified r-test the mr-test for convenience. Ideally, if a gene is informative to the classification and the ranking method can consistently rank the gene according to this information on both the whole data set and on the resampled subsets, we will have
$\bar{r}_i = r_i \quad \text{for } g_i \in I_G, \quad (9)$
in which case the p*-value will be equivalent to the original
r-test p-value for these genes. In practice, when the sample
size is small and the signal in some informative genes is not so strong, we always have $\bar{r}_i \ge r_i$ when $r_i$ is small; therefore, the estimated ranks move toward the right of the rank distribution compared with the single-run ranks, which, in effect, compensates for the bias in the
estimated null distribution. (For the genes ranked in the
lower half of the list, the averaged rank will move leftward,
but these genes are not of interest to us in this study since
we assume only a minority of the genes can be informative.)
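Under the same assumptions as the pr-test sketch above, and reusing its `resample_ranks` helper, the mr-test changes only the statistic that is compared with the null distribution: the averaged rank of (5) replaces the single whole-data rank.

```python
def mr_test(X, y, rank_genes, L=100, k=200, rng=None):
    """mr-test: p*-values of Eq. (8), comparing averaged ranks with the PUG null distribution."""
    ranks = resample_ranks(X, y, rank_genes, L, rng)
    r_bar = ranks.mean(axis=0)                    # averaged position, Eq. (5)
    pugs = np.argsort(r_bar)[-k:]                 # bottom-k PUGs, as in the pr-test
    null_ranks = ranks[:, pugs].ravel()
    return np.array([(null_ranks <= r).mean() for r in r_bar])   # p*-values
```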
4.2
Independent Selection of PUGs
The ultimate reason that may cause biased estimation of the
null distribution is that the PUGs in the above r-test scheme
are estimated from the same ranking information as that
being used for the calculation of the test statistics. A
solution is to select a group of PUGs that are an unbiased
sample from $U_G$. This is a big challenge because estimating the rank position distribution of $U_G$ is the question itself.
When the ranking method RM is a multivariate one such
as SVM-based methods, the ranking of the genes will not
directly depend on the differences of single genes between
the classes. We therefore can use a univariate statistic such
as the t-test to select a group of nondifferentially expressed
genes as the PUGs since these genes will have a high
probability of not being informative as they are basically the
same in the two classes. This selection will be less correlated
with the ranking by RM. Applying a threshold $\alpha$ on the t-test p-value $p_t$, we select the PUG set $U_G'$ as
$U_G' \triangleq \{ g_j \mid g_j \in G,\ p_t(g_j) \ge \alpha \}, \quad (10)$
and estimate the null position distribution according to the ranking of the $U_G'$ genes by RM in the resampled data:
$\hat{P}_t(r \mid g \in U_G) = P_{\mathrm{histogram}}(r_j^l \mid g_j \in U_G'). \quad (11)$
The position p*-value of a gene ranked on average at $\bar{r}_i$ is calculated as
$\hat{p}^*(\bar{r}_i) = \hat{P}_t(r \le \bar{r}_i \mid g \in U_G) = P_{\mathrm{histogram}}(r_j^l \le \bar{r}_i \mid g_j \in U_G'). \quad (12)$
For the convenience of discussion, we call this strategy the
tr-test and call the primary r-test defined by (7) the pr-test.
We view the pr-test, mr-test, and tr-test as three specific
methods under the general r-test scheme.
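A corresponding sketch of the tr-test, again building on the `resample_ranks` helper above: the PUGs are now the genes whose two-sample t-test p-value (computed here with scipy's `ttest_ind`) exceeds a cut-off, so their selection is less correlated with the multivariate ranking. The `cutoff` argument plays the role of the threshold in (10); labels are assumed to be coded as +1/-1 as in the problem definition.

```python
from scipy.stats import ttest_ind

def tr_test(X, y, rank_genes, L=100, cutoff=0.5, rng=None):
    """tr-test: p*-values of Eq. (12) with t-test-selected PUGs, Eq. (10)."""
    ranks = resample_ranks(X, y, rank_genes, L, rng)
    r_bar = ranks.mean(axis=0)
    _, p_t = ttest_ind(X[y == 1], X[y == -1], axis=0)   # per-gene two-sample t-test
    pugs = np.flatnonzero(p_t >= cutoff)                 # nondifferentially expressed genes
    null_ranks = ranks[:, pugs].ravel()
    return np.array([(null_ranks <= r).mean() for r in r_bar])   # p*-values
```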
It should be noted that if the ranking produced by RM is
highly correlated with t-test ranking, the result of tr-test will
be close to that of the original pr-test. On the other hand,
since insignificant genes evaluated individually may not
necessarily be uninformative when combined with certain
other genes, the PUGs selected by (10) may include
informative genes for RM. Therefore, the estimated null
distribution may bias toward the left end in some situations,
making the results overconservative. However, in the
experiments described below, it is observed that the tr-test
results are not sensitive to changes in the p-value cut-offs
used for selecting PUGs in (10), which suggests that
the method is not very biased.
EXPERIMENTS WITH SVM ON REAL AND SIMULATED DATA
5.1
r-Test with SVM Gene Ranking
Due to the good generalization ability of support vector
machines (SVM) [15], they are regarded as one of the best
multivariate algorithms for classifying microarray data [9],
[10], [16]. In the experiments for r-test in this work, we
adopted linear SVM as the ranking machine RM. The linear
SVM is trained with all genes in the data set, producing the
discriminant function
$f(x) = w \cdot x + b, \quad (13)$
where $w = \sum_{i=1}^{m} \alpha_i y_i x_i$ and the $\alpha_i$ are the solutions of the following quadratic programming problem:
$L_p = \frac{1}{2}\|w\|^2 - \sum_{i=1}^{m} \alpha_i y_i (x_i \cdot w + b) + \sum_{i=1}^{m} \alpha_i. \quad (14)$
Following [10], the contribution of each gene in the classifier
can be evaluated by
$\Delta L_p = \frac{1}{2}\,\frac{\partial^2 L_p}{\partial w_i^2}\,(\Delta w_i)^2 = (w_i)^2, \quad (15)$
and, thus, the genes are ranked by $w_i^2$. There are other
ways of assessing the relative contribution of the genes in a
SVM classifier [17], but, since the scope of this paper is not
to discuss the ranking method, we adopt the ranking
criterion given in (15) here. The ranking only reflects the
relative importance of the genes in the classifier, but cannot
reveal how important each gene is. The r-test converts the
ranking to position p-values (or p*-values) to evaluate the
significance.
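As an illustration only (the paper does not prescribe a particular implementation), a ranking function of this kind can be written with scikit-learn's linear SVC and plugged into the r-test sketches above as `rank_genes`; the function name and parameters are assumptions of this sketch.

```python
import numpy as np
from sklearn.svm import SVC

def svm_rank_genes(X, y):
    """Rank genes by w_i^2 of a linear SVM trained on all genes, as in Eq. (15)."""
    clf = SVC(kernel="linear").fit(X, y)
    w = clf.coef_.ravel()                       # weight vector of the discriminant function
    order = np.argsort(-w ** 2)                 # gene indices sorted by decreasing w_i^2
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, w.size + 1)     # position 1 = most important gene
    return ranks
```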
5.2
Data Sets
Experiments were done on six microarray data sets: three
real data sets and three simulated data sets. The leukemia
data set [1] contains the expression of 7,129 genes (probe
sets) of 72 cases, 47 of them are of the ALL class and 25 are
of the AML class. The colon cancer data set [18] contains
2,000 genes of 62 cases, among which 40 are from colon
cancers and 22 from normal tissues. These two data sets
have been widely used as benchmark sets in many
methodology studies. Another data set used in this study
is a breast cancer data set [19] containing 12,625 genes
(probe sets) of 85 cases. The data set is used to study the
classification of two subclasses of breast cancer. Forty-two
of the cases are of class 1 and 43 are of class 2.
Simulated data sets were generated to investigate the
properties of the methods in different situations. The first
case is for an extreme situation where none of the genes are
informative. The simulated data set contains 1,000 genes
and 100 cases. The expression values of the genes are
independently generated from normal distributions with
randomized means and variations in a given range. The
100 cases are generated with the same model, but are
assigned arbitrarily to two fake classes (50 cases in each
class). So, the two classes are, in fact, not separable and all
the genes are uninformative. We refer to this data set as the
"fake-class" data set in the following description.
Each of the other two simulated data sets also contains
1,000 genes and 100 cases of two classes (50 cases in each
class). In one data set (we call it "simu-1"), 700 of the genes
follow N(0, 1) for both classes and the other 300 genes follow N(0.25, 1) for class 1 and N(-0.25, 1) for class 2. In the other data set (we call it "simu-2"), 700 of the genes follow N(0, 1) for both classes and the other 300 genes follow N(0.5, 1) for class 1
and N(-0.5, 1) for class 2. With these two simulated data
sets, we hope to mimic situations where there are weak and
strong classification signals in the data.
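For instance, data with the structure of simu-1 and simu-2 can be generated as follows. This is a sketch: the random seed, and the convention that the first 300 genes are the informative ones, are arbitrary assumptions of this example.

```python
import numpy as np

def make_simu(n_genes=1000, n_informative=300, n_per_class=50, shift=0.5, seed=0):
    """Uninformative genes ~ N(0,1) in both classes; informative genes shifted by +/- shift."""
    rng = np.random.default_rng(seed)
    y = np.repeat([1, -1], n_per_class)                      # 50 cases per class
    X = rng.normal(0.0, 1.0, size=(2 * n_per_class, n_genes))
    X[:, :n_informative] += np.outer(y, np.full(n_informative, shift))
    return X, y

X2, y2 = make_simu(shift=0.5)    # simu-2-like data; shift=0.25 gives the weaker simu-1 setting
```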
All the data sets except simu-1 and simu-2 were
standardized to 0-mean and standard deviation 1 first
across the cases and then across the genes. This is to prevent
possible bias in the ranking affected by the scaling. In
practical investigations, this step might not be needed or
might need to be done in some other way according to the
specific situation of the data and the specific ranking
methods to be adopted.
The six data sets used in our experiments represent
different levels of separability of the investigated classes.
For the leukemia data set, almost perfect classification
accuracy has been achieved [1], [9], [10], so it represents a
relatively easy classification task. For the colon cancer data
set, the samples can still be well separated, but with some
errors [10], [18]. The two subclasses studied in the breast
cancer data set are hardly separable as observed in this data
set, but it is believed that there could be some degree of
separability [20], [21]. The fake-class simulation represents a
situation where the two classes are completely nonseparable
and the simu-1 and simu-2 simulation represents an
ideal situation where separation is defined on a subset of
the genes and the uninformative genes are i.i.d. To check
the classification accuracy that can be achieved on these
data sets, we randomly split them into independent training
and test sets and applied linear SVM on them. These
experiments were done 200 times for each data set and the
classification accuracy obtained at different gene selection
levels is summarized in Table 1. It can be seen that the
accuracies are consistent with the reports in the literature
and with the design of the simulations. (Note that the error
rates reported here are independent test results based on
only half of the samples for training, so they are larger than
the cross-validation errors reported elsewhere. The scope of
this paper is not to improve or discuss classification
accuracy.)
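The kind of accuracy estimate summarized in Table 1 can be reproduced in outline with repeated stratified 50/50 splits and a linear SVM. This is a sketch assuming scikit-learn; the gene-selection levels reported in the table are omitted here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def mean_test_error(X, y, n_repeats=200, seed=0):
    """Average independent-test error of a linear SVM over repeated random 50/50 splits."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, stratify=y, random_state=int(rng.integers(1 << 31)))
        clf = SVC(kernel="linear").fit(X_tr, y_tr)
        errors.append(1.0 - clf.score(X_te, y_te))
    return float(np.mean(errors))
```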
TABLE 1
Separability of the Classes of the Six Data Sets
5.3
Number of Significant Genes According to the
mr-Test and tr-Test
We systematically experimented with the SVM-based
pr-test, mr-test, and tr-test methods on the six data sets
and studied the number of genes claimed as significantly
informative with each method at various significant levels.
The results of the pr-test are affected by different choices of
the number k of selected PUGs (data not shown), indicating
that the pr-test can be very biased unless we know the
accurate number of informative genes. Therefore, we focus
on the mr-test and tr-test in the following discussion.
Table 2 shows the number of significant genes according
to the mr-test at different p*-value levels, with different
choices of ks on the six data sets. Comparing with the pr-test
results, the mr-test results are less sensitive to changes in
the number k. This is especially true when there are ideal
classification signals, as in the simu-2 data, where we can
see a more than 10-fold change of k causes only little
variance in the estimated gene numbers. With p*-value
levels from 0.001 to 0.1, the estimated significant gene
numbers are all around the correct number (300). The
claimed significant genes are all those true informative
genes in the model when the estimated genes are less than
300. For the situations where the number of estimated
informative genes is larger than 300, all the true informative
genes are discovered. When the data are less ideal, we see
that the results are stable within a smaller variation of k.
More experiment results with larger variations in the choice
of k are provided in the supplemental material, which can
be found on the Computer Society Digital Library at http://
computer.org/tcbb/archives.htm. From Table 2, it can also
be observed that the number of significant genes is not
directly correlated with the classification accuracy. For
example, the breast cancer data and fake-class data both
look nonseparable according to the classification errors
(Table 1). However, for the breast cancer data, more than 200 genes are identified as significantly informative among the 12,625 genes (about 1.6 percent) at the p*-value = 0.01 level, but,
for the fake-class data, this number is only about 0.4 percent
of the 1,000 genes.
Results of the tr-test with different t-test p-value cut-offs
are shown in Table 3. It can be seen that different cut-offs
result in different numbers of PUGs, but the variation in
estimated position p*-values due to PUG number difference
is even smaller than in the mr-test. This implies that the
tr-test results are not biased by the selection of PUGs since,
if the PUG selection was biased, different numbers of PUGs
at t-test p-value cut-offs would have caused different
degrees of bias and the results would have varied greatly.
Comparing between Table 2 and Table 3, as well as the
results in the supplemental material, which can be found on
the Computer Society Digital Library at http://computer.
org/tcbb/archives.htm, we observe that, for the mr-test,
although there is a range of k for each data set in which the
results are not very sensitive to variations of k, this range
can be different with different data sets. On the other hand,
for the tr-test, within the same ranges of cut-off t-test p-values
, results on all the data sets show good consistency
with regard to variations in the cut-off value. This makes
the tr-test more applicable since users do not need to tune
the parameter specifically to each data set.
Comparing the number of genes selected by the tr-test
and mr-test (Table 2 and Table 3), it is obvious that the
tr-test is more stringent and selects much fewer genes than
TABLE 2
The Number of Genes Selected at Various r-Test p*-Value Levels with SVM
the mr-test on the real data sets. The differences are smaller
on the simulated data. Similarly to the results of the mr-test,
almost all the informative genes in simu-2 data can be
recovered at p*-value levels from 0.001 to 0.05 and there are
only a very few false-positive genes (e.g., the 307 genes
selected at p*-value = 0.05 contains all the 300 true-positive
genes and seven false-positive genes). This shows that the
SVM method is good in both sensitivity and specificity in
selecting the true informative genes in such an ideal case, and both r-test methods can detect the correct number of informative genes over a wide range of significance levels. For the simu-1 data, not all the informative genes can be recovered in the experiments. This reflects the large overlap of the two distributions in this weak model. Many of the original 300 "informative" genes are actually not
statistically significant in the contexts of both univariate
methods and multivariate methods.
For the real data sets, there are no known answers for the
"true" number of informative genes. The mr-test uses the tail
in the ranking list to estimate the null distribution for
assessing the significance of the genes on the top of the list,
therefore there is a higher possibility of the p*-values being
underestimated, although this has been partially compensated
by using the average rank positions. Thus, the number
of genes being claimed significant by mr-test might be
overestimated. In this sense, the tr-test scheme provides a
more unbiased estimation of the null distribution, which is
supported by the decreased sensitivity to PUG numbers.
With the tr-test, at the 0.05 p*-value level, we get about
410 significant genes from the 7,129 genes (5.75 percent) in the
leukemia data. On the other two real data sets, the results tend
to be too conservative: about 13 out of 2,000 genes
(0.65 percent) in the colon cancer data and 50 out of
12,625 genes (0.4 percent) in the breast cancer data are
claimed as significant at this level.
It should be noted that the PUGs selected according to
the t-test may contain informative genes for SVM, which
considers the combined effects of genes. This will cause the
number of genes called significant by the tr-test to be
underestimated. This is especially true for data sets in
which the major classification signal exists in the combinatorial
effects of genes instead of differences in single genes.
The correct answer may be somewhere between the two
estimations of the tr-test and mr-test. When the signal is
strong, the two estimations will be close as we see in the
simu-2 data. In practice, one can choose which one to use
according to whether the purpose is to discover more
possibly informative genes or to discover a more manageable
set of significant genes for follow-up investigations.
DISCUSSION
Statistical hypothesis testing is a fundamental framework for
scientific inference from observations. Unfortunately, existing
hypothesis testing methods are not sufficient to handle
high-dimensional multivariate analysis problems arising
from current high-throughput genomic and proteomic
studies. Many new data mining techniques have been
developed both in statistics and in the machine learning
field. These methods are powerful in analyzing complicated
high-dimensional data and helped greatly in functional
genomics and proteomics studies. However, the analysis of
the statistical significance of data mining results has not received enough attention. One reason might be that many
methods are rooted in techniques aimed at solving problems
in engineering and technological applications rather than in
scientific discoveries. As an example, many machine-learning
-based gene selection and classification methods may
achieve very good performance in solving the specific
classification problems, but the results are usually of a
TABLE 3
The Number of Genes Selected at Various Position p-Value Levels by tr-Test with SVM
"black-box" type and judging the significance of the features
being used for the classification was usually not deemed
important. This compromises their further contribution to helping biologists understand the mechanisms underlying
the investigated disease classification.
This paper raises the problem of the significance of gene
rankings in microarray classification study and proposes a
solution strategy called the r-test that converts the ranking
of genes obtained with any method to position p-values
(p*-values) that reflect the significance of the genes being
informative. The question itself is important, and its formulation and solution are challenging for several reasons, as addressed in the paper. First of all, the definition
of a gene being informative to the classification may not yet
be completely clear for many classification methods. Even
under the same criterion, there may not be a clear boundary
between informative and uninformative genes. A biological
status may be affected by several genes with different levels
of contribution and it may affect the expression of many
other genes. Differences between individuals and instrumental noise may make genes that have no relation to the studied biological process show some relevancy in
the limited samples. All these (and other) complexities
make it hard to mathematically model microarray data. We
propose the r-test methods based on intuitive reasoning
under certain assumptions about the nature of the data. As
shown in the experiments, the methods provide reasonable
solutions, but the decisions of the mr-test and the tr-test can be very different in some situations. Theoretically rigorous methods are still to be developed.
Under the proposed r-test framework, the key issue is
the choice of putative uninformative genes or PUGs. Since
the null distribution has to be estimated from the data
themselves, avoiding bias in the estimation is the most
challenging task. Besides the methods used by the mr-test
and tr-test, we have also tried several other ways to tackle
the problem, including selecting the PUGs according to the
distribution of the ranks of all genes in the resample
experiments, deciding the number of PUGs recursively
according to the rank with an EM-like strategy, selection of
an independent set of PUGs by fold-change, etc. Different
resampling strategies have also been experimented with.
Among these efforts, the reported mr-test and tr-test give
the most satisfactory results. They both perform perfectly
on ideal simulations. For practical cases, the mr-test has a
tendency to be overoptimistic by claiming more significant
genes and the tr-test has a tendency to be conservative by
approving only a small number of significant genes. Note
that both r-test schemes do not change the ranking itself;
therefore, it is the role of the classification and gene
selection method (the RM) to guarantee that the ranking
itself is reasonable for the biological investigation. The r-test
only helps to decide on the number of genes to be selected
from the list at given significance levels. Since there is
currently no theoretical solution to completely avoid
estimation bias, one can make a choice between mr-test
and tr-test results by balancing between the two opposite
trends of possible biases according to the particular
biological problem at hand.
ACKNOWLEDGMENTS
The authors would like to thank Drs. J.D. Iglehart and A.L.
Richardson for providing them with their microarray data
for the experiments. They would also like to thank the
editor and reviewers for their valuable suggestions that
contributed a lot to the work. They thank Dustin Schones
for helping to improve their writing. This work is supported
in part by NSFC projects 60275007, 60234020, and the
National Basic Research Program (2004CB518605) of China.
This work was performed while Chaolin Zhang was with
the MOE Key Laboratory of Bioinformatics/Department of
Automation, Tsinghua University, Beijing. Chaolin Zhang
and Xuesong Lu contributed equally in this work and
should be regarded as joint authors. The corresponding
author is Xuegong Zhang.
REFERENCES
[1]
T.R. Golub, D.K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek,
J.P. Mesirov, H. Coller, M.L. Loh, J.R. Downing, M.A. Caligiuri,
C.D. Bloomfield, and E.S. Lander, "Molecular Classification of
Cancer: Class Discovery and Class Prediction by Gene Expression
Monitoring," Science, vol. 286, no. 5439, pp. 531-537, 1999.
[2]
C.-A. Tsai, Y.-J. Chen, and J.J. Chen, "Testing for Differentially
Expressed Genes with Microarray Data," Nucleic Acids Research,
vol. 31, no. 9, p. e52, 2003.
[3]
M.S. Pepe, G. Longton, G.L. Anderson, and M. Schummer,
"Selecting Differentially Expressed Genes from Microarray Experiments
," Biometrics, vol. 59, no. 1, pp. 133-142, 2003.
[4]
P. Broberg, "Statistical Methods for Ranking Differentially
Expressed Genes," Genome Biology, vol. 4, no. 6, p. R41, 2003.
[5]
W. Pan, "A Comparative Review of Statistical Methods for
Discovering Differentially Expressed Genes in Replicated Microarray
Experiments," Bioinformatics, vol. 18, no. 4, pp. 546-554, 2002.
[6]
S. Ramaswamy, K.N. Ross, E.S. Lander, and T.R. Golub, "A
Molecular Signature of Metastasis in Primary Solid Tumors,"
Nature Genetics, vol. 33, no. 1, pp. 49-54, 2003.
[7]
L.J. van 't Veer, H. Dai, M.J. van de Vijver, Y.D. He, A.A.M. Hart,
M. Mao, H.L. Peterse, K. van der Kooy, M.J. Marton, A.T.
Witteveen, G.J. Schreiber, R.M. Kerkhoven, C. Roberts, P.S.
Linsley, R. Bernards, and S.H. Friend, "Gene Expression Profiling
Predicts Clinical Outcome of Breast Cancer," Nature, vol. 415,
no. 6871, pp. 530-536, 2002.
[8]
M. Xiong, X. Fang, and J. Zhao, "Biomarker Identification by
Feature Wrappers," Genome Research, vol. 11, no. 11, pp. 1878-1887,
2001.
[9]
X. Zhang and H. Ke, "ALL/AML Cancer Classification by Gene
Expression Data Using SVM and CSVM Approach," Proc. Conf.
Genome Informatics, pp. 237-239, 2000.
[10]
I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene Selection
for Cancer Classification Using Support Vector Machines,"
Machine Learning, vol. 46, pp. 389-422, 2002.
[11]
C. Furlanello, M. Serafini, S. Merler, and G. Jurman, "Entropy-Based
Gene Ranking without Selection Bias for the Predictive
Classification of Microarray Data," BMC Bioinformatics, vol. 4,
no. 1, p. 54, 2003.
[12]
H. Yu, J. Yang, W. Wang, and J. Han, "Discovering Compact and
Highly Discriminative Features or Feature Combinations of Drug
Activities Using Support Vector Machines," Proc. 2003 IEEE
Bioinformatics Conf. (CSB '03), 2003.
[13]
J.D. Storey and R. Tibshirani, "Statistical Significance for Genome
Wide Studies," Proc. Nat'l Academy of Science USA, vol. 100, no. 16,
pp. 9440-9445, 2003.
[14]
R.R. Sokal and F.J. Rohlf, Biometry. San Francisco: Freeman, 1995.
[15]
V. Vapnik, The Nature of Statistical Learning Theory. Springer-Verlag
, 1995.
[16]
S. Ramaswamy, P. Tamayo, R. Rifkin, S. Mukherjee, C.-H. Yeang,
M. Angelo, C. Ladd, M. Reich, E. Latulippe, J.P. Mesirov, T.
Poggio, W. Gerald, M. Loda, E.S. Lander, and T.R. Golub,
"Multiclass Cancer Diagnosis Using Tumor Gene Expression
Signatures," Proc. Nat'l Academy of Sciences, vol. 98, no. 26,
pp. 15149-15154, 2001.
[17]
X. Zhang and W.H. Wong, "Recursive Sample Classification and
Gene Selection Based on SVM: Method and Software Description
," technical report, Dept. of Biostatistics, Harvard School of
Public Health, 2001.
[18]
U. Alon, N. Barkai, D.A. Notterman, K. Gish, S. Ybarra, D. Mack,
and A.J. Levine, "Broad Patterns of Gene Expression Revealed by
Clustering Analysis of Tumor and Normal Colon Tissues Probed
by Oligonucleotide Arrays," Proc. Nat'l Academy of Sciences, vol. 96,
no. 12, pp. 6745-6750, 1999.
[19]
Z.C. Wang, M. Lin, L.-J. Wei, C. Li, A. Miron, G. Lodeiro, L.
Harris, S. Ramaswamy, D.M. Tanenbaum, M. Meyerson, J.D.
Iglehart, and A. Richardson, "Loss of Heterozygosity and Its
Correlation with Expression Profiles in Subclasses of Invasive
Breast Cancers," Cancer Research, vol. 64, no. 1, pp. 64-71, 2004.
[20]
E. Huang, S.H. Cheng, H. Dressman, J. Pittman, M.H. Tsou, C.F.
Horng, A. Bild, E.S. Iversen, M. Liao, and C.M. Chen, "Gene
Expression Predictors of Breast Cancer Outcomes," The Lancet,
vol. 361, no. 9369, pp. 1590-1596, 2003.
[21]
M. West, C. Blanchette, H. Dressman, E. Huang, S. Ishida, R.
Spang, H. Zuzan, J.A. Olson Jr., J.R. Marks, and J.R. Nevins,
"Predicting the Clinical Status of Human Breast Cancer by Using
Gene Expression Profiles," Proc. Nat'l Academy of Sciences, vol. 98,
no. 20, pp. 11462-11467, 2001.
Chaolin Zhang received the BE degree from
the Department of Automation at Tsinghua
University, Beijing, China, in 2002. From 2002
to 2004, he worked as a graduate student on
machine learning applications in microarray data
analysis and literature mining at the MOE Key
Laboratory of Bioinformatics, Tsinghua University
. He is now a PhD student at Cold Spring
Harbor Laboratory and the Department of
Biomedical Engineering, the State University of
New York at Stony Brook.
Xuesong Lu received the BE degree from the
Department of Automation, Tsinghua University,
Beijing, China, in 2001. He is currently a PhD
candidate in the Department of Automation and
the MOE Key Laboratory of Bioinformatics at
Tsinghua University, Beijing, China. His research
interests include microarray data mining,
gene network modeling, and literature mining.
Xuegong Zhang received the PhD degree in
pattern recognition and intelligent systems from
Tsinghua University, Beijing, China, in 1994. He
is currently a professor in the Department of
Automation and the MOE Key Laboratory of
Bioinformatics at Tsinghua University. His research
interests include machine learning and
pattern recognition, bioinformatics, computational genomics, and systems biology.
| Significance of gene ranking;gene selection;microarray data analysis;classification
179 | Simplifying Flexible Isosurfaces Using Local Geometric Measures | The contour tree, an abstraction of a scalar field that encodes the nesting relationships of isosurfaces, can be used to accelerate isosurface extraction, to identify important isovalues for volume-rendering transfer functions, and to guide exploratory visualization through a flexible isosurface interface. Many real-world data sets produce unmanageably large contour trees which require meaningful simplification. We define local geometric measures for individual contours, such as surface area and contained volume, and provide an algorithm to compute these measures in a contour tree. We then use these geometric measures to simplify the contour trees, suppressing minor topological features of the data. We combine this with a flexible isosurface interface to allow users to explore individual contours of a dataset interactively. | Introduction
Isosurfaces, slicing, and volume rendering are the three main techniques
for visualizing three-dimensional scalar fields on a two-dimensional
display. A recent survey [Brodlie and Wood 2001] describes
the maturation of these techniques since the mid 1980s. For
example, improved understanding of isosurfaces has produced robust
definitions of watertight surfaces and efficient extraction methods
. We believe that the same improved understanding and structuring
leads to new interfaces that give the user better methods to
select isosurfaces of interest and that provide a rich framework for
data-guided exploration of scalar fields.
Although key ideas in this paper apply to both isosurfaces and volume
rendering, the immediate application is to isosurface rendering
. An isosurface shows the surface for a fixed value (the isovalue
) of the scalar field and is the 3D analogue of equal-height
contour lines on a topographic map. The contour tree represents
the nesting relationships of connected components of isosurfaces,
which we call contours, and is thus a topological abstraction of a
scalar field. Since genus changes to surfaces do not affect the nesting
relationship, they are not represented in the contour tree. Our
contribution is to combine the flexible isosurface interface [Carr
and Snoeyink 2003] with online contour tree simplification guided
by geometric properties of contours to produce a tool for interactive
exploration of large noisy experimentally-sampled data sets.
An additional contribution is to draw attention to other potential
applications of simplified contour trees, such as detail-preserving
denoising, automated segmentation, and atlasing.
Figure 1 shows a comparison between a conventional isosurface
and a flexible isosurface extracted from the same data set after contour
tree simplification. On the left, the outermost surface (the
skull) occludes other surfaces, making it difficult to study structures
inside the head. Moreover, the contour tree for this data set has over
1 million edges, making it impractical as a visual representation.
Figure 2: The topographic map (2-d scalar field), surface rendering, and contour tree for a volcanic crater lake with a central island. A: a
maximum on the crater edge; B: maximum of island in the lake; F: lake surface; C and D: saddle points.
On the right is a flexible isosurface constructed using a simplified
contour tree, laid out and coloured to emphasize the structure of the
data set. Of particular interest is that there are no "magic numbers"
embedded in the code. Instead, the surfaces shown were chosen
directly from the simplified contour tree during exploration of this
data set, with the level of simplification being adjusted as needed.
The remainder of this paper is as follows. Section 2 reviews work
on contour trees in visualization. Section 3 shows how to simplify
the contour tree, and the effects on the data. Section 4 shows how to
compute local geometric measures efficiently to guide simplification
. Section 5 gives implementation details, and Section 6 reports
results. Finally, Section 7 gives possible future extensions.
Related Work
Most of the relevant work deals with a topological structure called
the contour tree that is becoming increasingly important in visualization
. Section 2.1 reviews the contour tree and algorithms to
compute it. Section 2.2 then reviews visualization tools that use the
contour tree, while Section 2.3 reviews work on topological simplification
and on efficient computation of geometric properties.
2.1
The Contour Tree
For a scalar field $f: \mathbb{R}^3 \to \mathbb{R}$, the level set of an isovalue $h$ is the set $L(h) = \{(x,y,z) \mid f(x,y,z) = h\}$. A contour is a connected component
of a level set. As h increases, contours appear at local minima
, join or split at saddles, and disappear at local maxima of f .
Shrinking each contour to a single point gives the contour tree,
which tracks this evolution. It is a tree because the domain $\mathbb{R}^3$ is simply-connected; in more general domains we obtain the Reeb
graph [Reeb 1946], which is used in Morse theory [Matsumoto
2002; Milnor 1963] to study the topology of manifolds.
Figure 2 shows a 2-dimensional scalar field describing a volcanic
crater lake with a central island. The contour tree of this field is
an abstract, but meaningful, depiction of the structure of all local
maxima, minima, and saddle points, and gives clues to interesting
contours. Individual contours are represented uniquely as points on
the contour tree. For example, the isolines c
1
, c
2
, and c
3
are all at
2000m, but each has a unique location on the contour tree.
The contour tree has been used for fast isosurface extraction [van
Kreveld et al. 1997; Carr and Snoeyink 2003], to guide mesh simplification
[Chiang and Lu 2003], to find important isovalues for
transfer function construction [Takahashi et al. 2004b], to compute
topological parameters of isosurfaces [Kettner et al. 2001], as an
abstract representation of scalar fields [Bajaj et al. 1997], and to
manipulate individual contours [Carr and Snoeyink 2003].
Algorithms to compute the contour tree efficiently in three or more
dimensions have been given for simplicial meshes [van Kreveld
et al. 1997; Tarasov and Vyalyi 1998; Carr et al. 2003; Chiang
et al. 2002; Takahashi et al. 2004b] and for trilinear meshes [Pascucci
and Cole-McLaughlin 2002]. Much of this work focusses
on "clean" data from analytic functions or numerical simulation
see for example [Bajaj et al. 1997; Takahashi et al. 2004b]. All
of the topology in this data is assumed to be important and significant
effort is expended on representing it accurately using trilinear
interpolants [Pascucci and Cole-McLaughlin 2002] and topology-preserving
simplifications [Chiang and Lu 2003].
In contrast, we are interested in noisy experimentally-acquired data
such as medical datasets. We expect to discard small-scale topological
features so that we can focus on large-scale features. We have
therefore chosen to work with the well-known Marching Cubes
cases [Lorenson and Cline 1987; Montani et al. 1994], and with approximate
geometric properties. This paper does not turn on these
choices, however, and can also be applied to trilinear interpolants
and exact geometric properties.
2.2
Flexible Isosurfaces
The contour spectrum [Bajaj et al. 1997] uses the contour tree to
represent the topology of a field, alongside global measures of level
sets such as surface area and enclosed volume. In contrast, the flexible
isosurface interface [Carr and Snoeyink 2003] uses the contour
tree actively instead of passively. The user selects an individual
contour from the contour tree or from the isosurface display, then
manipulates it. Operations include contour removal and contour
evolution as the isovalue is changed, using the contour tree to track
which contours to display. This interface depends on attaching isosurface
seeds called path seeds to each edge of the contour tree so
that individual contours can be extracted on demand.
A major disadvantage of both these interfaces is that contour trees
with more than a few tens of edges make poor visual abstractions.
A principal contribution of this paper is to simplify the contour tree
while preserving the exploratory capabilities of the flexible isosurface
. This requires that each point in a simplified contour tree represents
an extractable contour. Moreover, extracted contours must
evolve as smoothly as possible when the isovalue is adjusted.
We satisfy these constraints with simplifications that have predictable effects on the scalar field and geometric measures that identify unimportant contour tree edges for simplification.
2.3
Simplification and Geometric Measures
The distinction between this paper and other work that simplifies
contour trees or Reeb graphs is our emphasis on using tree structure
for local exploration. [Takahashi et al. 2004a] simplify the contour
tree by replacing three edges at a saddle point with a single new
edge, based on the height of the edge. [Takahashi et al. 2004b] use
the approximate volume of the region represented by the subtree
that is discarded. Saddles are processed until only a few remain,
then a transfer function is constructed that emphasizes the isovalues
of those saddles. Our simplification algorithm extends this work to
preserve local information such as isosurface seeds and to compute
arbitrary geometric measures of importance. We also describe the
effects of simplification on the scalar field.
Since removing a leaf of the contour tree cancels out a local extremum
with a saddle, this form of simplification can be shown to
be equivalent to topological persistence [Edelsbrunner et al. 2003;
Edelsbrunner et al. 2002; Bremer et al. 2003] if the geometric measure
used is height. For other measures, such as volume or hypervolume
, the method described in this paper is necessary to define
these properties, but thereafter, the process can optionally be described
in terms of persistence.
Moreover, work on persistence has focussed on the Morse complex
, which is difficult to compute and segments data according to
the gradient of the field. When the boundary of an object such as
an organ is better described by a contour than by drainage, contour
trees are more directly applicable than Morse complexes, and the
additional overhead of working with the Morse complex is unnecessary
.
[Hilaga et al. 2001] have shown how to simplify the Reeb graph by
progressive quantization of the isovalue to obtain a multi-resolution
Reeb graph. This suffers from several drawbacks, in particular that
it is strictly tied to a function value which is treated as height (or
persistence). Extension to geometric measures of importance such
as volume or hypervolume is therefore problematic. Moreover, the
quantization used imposes serious restrictions on isosurface generation
and the level of simplification, as well as generating artifacts
related to the quantization. In particular, we note that this quantization
process limits potential simplification to at most as many
levels as there are bits in each input sample. Finally, this method
is relatively slow: 15s is claimed for a 2-manifold input mesh with
10,000 vertices: extensions to 10,000,000+ sample volumetric data
have not yet been published.
Work also exists on computing geometric measures efficiently in
large data sets. [Bentley 1979] defined problems to be decomposable
if their solution could be assembled from the solutions of an
arbitrary decomposition into subproblems. Decomposability has
been used for a variety of problems, including computation of geometric
properties of level sets [Bajaj et al. 1997] and extraction of
isosurfaces [Lorenson and Cline 1987]. We use decomposability in
Section 4 to compute local geometric measures.
Contour Tree Simplification
Given a contour tree and a scalar field, we apply graph simplification
to the contour tree. This simplification can then be carried
back to simplify the input data. Alternately, we can use the simplified
contour tree to extract the reduced set of isosurfaces that would
result if we had simplified the data. In this section, we describe
the contour tree structure, the simplification operators, and the algorithms
for simplification and isosurface extraction.
3.1
Contour Tree Structure
A contour tree is the result of contracting every contour to a point.
We use a simple tree structure in which every vertex is assigned a
y-coordinate, and every edge is associated with the set of contours
between its vertices. We store path seeds for generating individual
contours, as in [Carr and Snoeyink 2003]. That is, we store
a pointer to a monotone path that intersects all contours along the
edge, which then serves as a seed to generate any given contour. In
this section, we assume that each edge has a simplification value
(weight) that indicates the edge's priority. Low priority edges are
good candidates for simplification.
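A minimal sketch of such a structure in Python follows; the field names are our own, not taken from the authors' code. The costs up(e) and down(e) used by the simplification algorithm below are stored on each edge.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    value: float                                 # the vertex's isovalue (y-coordinate)
    up: list = field(default_factory=list)       # edges to neighbours at higher isovalues
    down: list = field(default_factory=list)     # edges to neighbours at lower isovalues

@dataclass
class Edge:
    top: Node
    bottom: Node
    path_seed: object = None      # seed of a monotone path meeting every contour on the edge
    up_cost: float = 0.0          # cost of pruning the edge upwards (to its top vertex)
    down_cost: float = 0.0        # cost of pruning the edge downwards
    deleted: bool = False         # lazy-deletion flag used by the priority queue
```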
3.2
Basic Simplification Operations
We simplify the contour tree with two operations: leaf pruning and
vertex reduction. Leaf pruning removes a leaf of the tree, reducing
the complexity of the tree, as shown in Figure 3, where vertex 80 is
pruned from the tree on the left to produce the tree in the middle.
Vertex reduction chooses a vertex with one neighbor above and one
below, and deletes the vertex without changing the essential structure
of the contour tree. This is also illustrated in Figure 3, where
vertex 50 has been removed from the tree in the middle to produce
the tree on the right. Since vertex reductions do not change the essential
structure of the contour tree, we prefer them to leaf prunes.
Also, pruning the only up- or down- edge at a saddle is prohibited
to preserve the edge for a later vertex reduction. It is clear that these
operations can simplify the tree to any desired size.
We can also think of these operations as having well-defined effects
on the underlying scalar field: pruning a leaf corresponds to levelling
off a maximum or minimum, while vertex reduction requires
no changes.
As an example, in Figure 3 we show the result of leaf-pruning vertex
80 and edge 80
- 50 from the tree. Since 80 - 50 represents the
left-hand maximum, pruning it flattens out the maximum, as shown
in the middle terrain. Similarly, the right-hand image shows the
results of reducing vertex 50 after the leaf prune. The edges incident
to vertex 50 in the tree correspond to the regions above and
below the contour through vertex 50. Removing vertex 50 merely
combines these two regions into one.
The fact that simplification operations can be interpreted as modifying
the scalar field suggests that one way to assess the cost of
an operation is to measure geometric properties of the change. We
show how this can be done efficiently in Section 4.
3.3
Simplification Algorithm
To simplify the contour tree, we apply the following rules:
1. Always perform vertex reduction where possible.
2. Always choose the least important leaf to prune.
3. Never prune the last up- or down- leaf at an interior vertex.
We implement this with a priority queue to keep track of the leaves
of the tree with their associated pruning cost. We assume that for
each edge e of the tree, we know two costs: up(e) for pruning the edge from the bottom up, i.e., collapsing the edge to its upper vertex, and down(e) for the cost of pruning the edge from the top downwards.
Figure 3: Leaf Pruning Levels Extrema; Vertex Reduction Leaves Scalar Field Unchanged. (Leaf 80 is pruned; then vertex 50 is reduced.)
We add each leaf to the priority queue, with priority of up(e) for a lower leaf and down(e) for an upper leaf. We then
repeatedly remove the lowest cost leaf edge from the priority queue
and prune it. If this pruning causes a vertex to become reducible,
we do so immediately.
When a vertex is reduced, two edges $e_1$ and $e_2$ are merged into a simplified edge d. The cost of pruning d is based on the costs of the two reduced edges. Since up(d) is the cost of pruning d upwards, we set it to up($e_1$), the cost of pruning the upper edge upwards. Similarly, we set down(d) to down($e_2$), the cost of pruning
the lower edge downwards. If d is a leaf edge, we add it to the priority
queue. To simplify queue handling, we mark the reduced edges
for lazy deletion. When a marked edge reaches the front of the priority
queue, we discard it immediately. Similarly, when the edge
removed from the queue is the last up- or down- edge at its interior
vertex, we discard it, preserving it for a later vertex reduction.
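The rules above can be outlined in Python with a standard binary heap, using the Node and Edge sketch from Section 3.1. This is an illustration of the stated rules, not the authors' code; a full implementation would also concatenate the monotone paths of merged edges for later contour extraction.

```python
import heapq
from itertools import count

def is_lower_leaf(e):
    return not e.bottom.down and len(e.bottom.up) == 1    # bottom vertex is a minimum

def is_upper_leaf(e):
    return not e.top.up and len(e.top.down) == 1          # top vertex is a maximum

def simplify(edges, target_size):
    """Prune cheapest leaves (rule 2), reduce vertices eagerly (rule 1), keep last edges (rule 3)."""
    tick = count()                                        # tie-breaker for the heap
    heap = []

    def enqueue(e):
        if is_lower_leaf(e):
            heapq.heappush(heap, (e.up_cost, next(tick), e, "up"))
        elif is_upper_leaf(e):
            heapq.heappush(heap, (e.down_cost, next(tick), e, "down"))

    def reduce_if_possible(v):
        nonlocal alive
        if len(v.up) == 1 and len(v.down) == 1:           # rule 1: vertex reduction
            e1, e2 = v.up[0], v.down[0]                   # upper and lower incident edges
            d = Edge(top=e1.top, bottom=e2.bottom,
                     up_cost=e1.up_cost, down_cost=e2.down_cost)
            e1.deleted = e2.deleted = True
            e1.top.down[e1.top.down.index(e1)] = d
            e2.bottom.up[e2.bottom.up.index(e2)] = d
            alive -= 1
            enqueue(d)                                    # d may itself be a prunable leaf

    for e in edges:
        enqueue(e)
    alive = len(edges)
    while heap and alive > target_size:
        _, _, e, direction = heapq.heappop(heap)          # rule 2: cheapest leaf first
        if e.deleted:
            continue                                      # lazily discard merged edges
        v = e.top if direction == "up" else e.bottom
        sibling = v.down if direction == "up" else v.up
        if len(sibling) == 1:
            continue                                      # rule 3: keep the last up-/down-edge
        sibling.remove(e)
        e.deleted = True
        alive -= 1
        reduce_if_possible(v)
```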
A few observations on this algorithm: First, any desired level of
simplification of the tree can be achieved in a number of queue
operations linear in t, the size of the original tree. Since at least half
the nodes are leaves, this bound is tight. And if the contour tree is
stored as nodes with circular linked lists of upwards and downwards
edges, every operation except (de)queueing takes constant time. As
a result, the asymptotic cost of this algorithm is dominated by the
$O(t \log t)$ cost of maintaining the priority queue.
Second, the simplified contour tree can still be used to extract isosurface
contours. Vertex reductions build monotone paths corresponding
to the simplified edges, while leaf prunes discard entire
monotone paths. Thus, any edge in a simplified contour tree corresponds
to a monotone path through the original contour tree. To
generate the contour at a given isovalue on a simplified edge, we
perform a binary search along the contour tree edges that make up
the monotone path for that simplified edge. This search identifies
the unique contour tree edge that spans the desired isovalue, and we
use the path seed associated with that edge to generate the contour.
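For example, using the Edge and Node sketch above, the search for the seed edge on a simplified edge's monotone path might look like this, assuming the original edges of the path are stored bottom-to-top:

```python
from bisect import bisect_right

def find_seed(path_edges, isovalue):
    """Binary search for the original edge on a monotone path that spans the isovalue."""
    bottoms = [e.bottom.value for e in path_edges]        # increasing along the path
    i = max(bisect_right(bottoms, isovalue) - 1, 0)       # last edge starting at or below it
    e = path_edges[i]
    assert e.bottom.value <= isovalue <= e.top.value      # the unique spanning edge
    return e.path_seed                                    # seed used to trace the contour
```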
Third, we extract contours from seeds as before. Instead of simplifying
individual contours, we reduce the set of contours that can be
extracted. Surface simplification of contours is a separate task.
Finally, up(e) and down(e) actually need not be set except at leaves
of the tree. As a leaf is pruned and vertex reduced, new values can
be computed using information from the old nodes and edges. It is
not hard to show by induction that any desired level of simplification
of the tree can be achieved. And, since leaf pruning and vertex
reduction are the only two operations, the net result can also be a
meaningful simplification of the underlying scalar field, assuming
that a reasonable geometric measure is used to guide the simplification
. We therefore next discuss geometric measures.
Local Geometric Measures
[Bajaj et al. 1997] compute global geometric properties, and display
them alongside the contour tree in the contour spectrum. [Pascucci
and Cole-McLaughlin 2002] propagate topological indices called
the Betti numbers along branches of the contour tree, based on previous
work by [Pascucci 2001]. We bring these two ideas together
to compute local geometric measures for individual contours.
In 2D scalar fields, the geometric properties we could compute
include the following contour properties: line length (perimeter),
cross-sectional area (area of region enclosed by the contour), volume
(of the region enclosed), and surface area (of the function over
the region). In 3D scalar fields, there are analogous properties that
include isosurface area, cross-sectional volume (the volume of the
region enclosed by the isosurface), and hypervolume (the integral
of the scalar field over the enclosed volume).
Figure 4: Contours Sweeping Past a Saddle Point
Consider a plane sweeping through the field in Figure 2 from high
to low isovalues. At any isovalue h, the plane divides the field into
regions above and below the plane. As the isovalue decreases, the
region above the plane grows, sweeping past the vertices of the
mesh one at a time. Geometric properties of this region can be
written as functions of the isovalue h. Such properties are decomposable
over the cells of the input data: for each cell we compute
a piecewise polynomial function, and sum them to obtain a piecewise
polynomial function for the entire region. [Bajaj et al. 1997]
compute these functions by sweeping through the isovalues, altering
the function as each vertex is passed. Figure 4 illustrates this
process, showing the contours immediately above and below a vertex
s. As the plane sweeps past s, the function is unchanged in cells
outside the neighbourhood of s, but changes inside the neighbourhood
of s. This sweep computes global geometric properties for
the region above the sweep plane. Reversing the direction of the
sweep computes global geometric properties for the region below
the sweep plane.
In Figure 2, the region above the sweep plane at 2000m consists
of two connected components, one defined by contours $c_1$ and $c_2$, the other by $c_3$. To compute properties for these components, we sweep along an edge of the contour tree, representing a single contour sweeping through the data. This lets us compute functions for the central maximum at B. For the crater rim defined by contours $c_1$ and $c_2$, we use inclusion/exclusion. We sweep one contour at a time, computing properties for the region inside the contour, including regions above and below the isovalue of the contour. The area of the crater rim can then be computed by subtracting the area inside contour $c_2$ from the area inside contour $c_1$.
We define local geometric measures to be geometric properties of
regions bounded by a contour. We compute these measures in a
manner similar to the global sweep of [Bajaj et al. 1997], but by
sweeping contours along contour tree edges.
4.1
Local Geometric Measures
To define local geometric measures attached to contour tree edges,
we must be careful with terminology. Above and below do not apply
to the region inside c
1
in Figure 2, since part of the region is above
the contour and part is below. Nor do inside and outside, which lose
their meaning for contours that intersect the boundary. We therefore
define upstart and downstart regions of a contour. An upstart region
is a region reachable from the contour by paths that initially ascend
from the contour and never return to it. For contour c
1
, there is
one upstart region (inside) and one downstart region (outside). At
saddles such as D, there may be several upstart regions. Since each
such region corresponds to an edge in the contour tree, we refer, for
example, to the upstart region at D for arc CD.
We now define upstart and downstart functions: functions computed
for upstart or downstart regions. Note that the upstart and
downstart functions do not have to be the same. For example, the
length of a contour line is independent of sweep direction, so the
upstart and downstart functions for contour length in 2D are identical.
But the area enclosed by a contour depends on sweep direction,
so the upstart and downstart functions will be different.
Since upstart and downstart functions describe geometric properties
local to a contour, we refer to them collectively as local geometric
measures. These measures are piecewise polynomial since they
are piecewise polynomial in each cell. Because we need to track
connectivity for inclusion/exclusion, they are not strictly decomposable.
Stated another way, in order to make them decomposable,
we need to know the connectivity during the local sweep. We are
fortunate that the contour tree encodes this connectivity.
For regular data, we approximate region size with vertex count as
in [Takahashi et al. 2004b]. For the integral of f over region R,
we sum the sample values to get \sum_{x \in R} f(x): the correct integral is
the limit of this sum as sample spacing approaches zero. When
we prune a leaf to a saddle at height h, the integral over the region
flattened is \sum_{x \in R} (f(x) - h) = (\sum_{x \in R} f(x)) - Ah, where A is the area
of region R.
In three dimensions, vertex counting measures volume, and summing
the samples gives hypervolume. This geometric measure is
quite effective on the data sets we have tested in Section 6.
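The following minimal sketch spells out these approximations, assuming the region is supplied simply as the list of sample values at its vertices; names are illustrative.

```python
# Region size is approximated by a vertex count, and the integral of f over a
# region (the hypervolume) by the sum of its sample values.  When a leaf is
# pruned to a saddle at height h, the measure of the flattened region R is
# sum(f(x) - h) = sum(f(x)) - A*h, where A is the size of R.

def region_measures(samples, h):
    """samples: scalar values f(x) at the vertices of region R."""
    area = len(samples)                 # vertex count approximates region size
    hypervolume = sum(samples)          # Riemann-sum approximation of the integral
    flattened = hypervolume - area * h  # integral of (f - h) over R
    return area, hypervolume, flattened

print(region_measures([4.0, 5.0, 3.5], h=3.0))   # (3, 12.5, 3.5)
```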
4.2 Combining Local Geometric Measures
To compute local geometric measures, we must be able to combine
upstart functions as we sweep a set of contours past a vertex. In
Figure 4, we must combine the upstart functions for contours c_1, c_2
and c_3 before sweeping past s. We must then update the combined
upstart function as we sweep past the vertex.
After sweeping past s, we know the combined upstart function d for
contours d_1, d_2 and d_3. We remove the upstart functions for d_1 and
d_2 from d to obtain the upstart function for d_3.
We assume that we have recursively computed the upstart functions
for d_1 and d_2 by computing the downstart functions and then inverting
them. Let us illustrate inversion, combination and removal for
two local geometric measures in two dimensions.
Contour Length: Contour length is independent of sweep direction,
so these operations are simple: Inversion is the identity operation,
combination sums the lengths of the individual contours, and
contours are removed by subtracting their lengths.
Area: Area depends on sweep direction, so inversion subtracts the
function from the area of the entire field. Combining upstart functions
at a saddle depends on whether the corresponding edges ascend
or descend from the saddle. For ascending edges the upstart
regions are disjoint, and the upstart functions are summed. For descending
edges the upstart regions overlap, and the upstart functions
are combined by inverting to downstart functions, summing,
and re-inverting. Removing upstart functions reverses combination.
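The following sketch spells out these rules for the 2D area measure, reducing each upstart function to a single number (the enclosed area at the current isovalue) rather than a full piecewise polynomial; TOTAL_AREA and the function names are assumptions for illustration only.

```python
# Upstart-function operations for the 2D area measure.

TOTAL_AREA = 100.0   # area of the entire field (assumed known)

def invert(area):
    # Area depends on sweep direction: invert by taking the complement.
    return TOTAL_AREA - area

def combine_ascending(upstart_areas):
    # Upstart regions of ascending edges at a saddle are disjoint: sum them.
    return sum(upstart_areas)

def combine_descending(upstart_areas):
    # Upstart regions of descending edges overlap: invert to downstart
    # functions, sum those (they are disjoint), and re-invert.
    return invert(sum(invert(a) for a in upstart_areas))

def remove(combined, parts):
    # Removal reverses combination (shown here for the disjoint case).
    return combined - sum(parts)
```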
Consider Figure 4 once more. The upstart region of d_1 contains
s, as well as contours c_1, c_2 and c_3. Similarly, the upstart regions
of d_2 and d_3 contain s and contours c_1, c_2 and c_3. However, the
downstart regions of d_1, d_2 and d_3 are disjoint, and can be summed,
then inverted to obtain the combination of the upstart regions.
In general, measures of contour size are independent of sweep direction
and their computation follows the pattern of 2D contour
length. Such measures include surface area in three dimensions,
and hypersurface volume in four dimensions. Measures of region
size depend on sweep direction and their computation follows the
pattern of 2D cross-sectional area. Such measures include surface
area and volume in two dimensions, and isosurface cross-sectional
volume and hypervolume in three dimensions.
Input: Fully Augmented Contour Tree C
       A local geometric measure f with operations:
         Combine(f_1, ..., f_m) of local geometric measures
         Update(f, v) that updates f for sweep past v
         Remove(f, f_1, ..., f_m) of local geometric measures
         Invert() from down(e) to up(e) or vice versa
Output: down(e) and up(e) for each edge e in C
1   Make a copy C' of C
2   for each vertex v do
3       If v is a leaf of C', enqueue v
4   while NumberOfArcs(C') > 0 do
5       Dequeue v and retrieve edge e = (u, v) from C'
6       Without loss of generality, assume e ascends from v
7       Let d_1, ..., d_k be downward arcs at v in C
8       Let upBelow = Combine(down(d_1), ..., down(d_k))
9       Let upAbove = Update(upBelow, v)
10      Let e_1, ..., e_m be upward arcs at v in C, with e_1 = e
11      Let f_i = Invert(down(e_i)) for i = 2, ..., m
12      Let up(e) = Remove(upAbove, f_2, ..., f_m)
13      Let down(e) = Invert(up(e))
14      Delete e from C'
15      If u is now a leaf of C', enqueue u
Algorithm 1: Computing Local Geometric Measures
(a) Reduced by Height (Persistence)
(b) Reduced by Volume (Vertex Count)
(c) Reduced by Hypervolume (Riemann Sum)
Figure 5: Comparison of Simplification Using Three Local Geometric Measures. In each case, the UNC Head data set has been simplified to
92 edges using the specified measure. Each tree was laid out using the dot tool, with no manual adjustment.
4.3 Computing Local Geometric Measures
Algorithm 1 shows how to compute edge priorities up(e) and down(e)
for a given local geometric measure. This algorithm relies
on Combine(), Update(), Invert(), and Remove() having been
suitably defined, and can be integrated into the merge phase of the
contour tree algorithm in [Carr et al. 2003].
The algorithm builds a queue of leaf edges in Step 2, then works
inwards, pruning edges as it goes. At each vertex, including regular
points, the computation described in Section 4.2 is performed, and
the edge is deleted from the tree. In this way, an edge is processed
only when one of its vertices is reduced to a leaf: i.e. when all other
edges at that vertex have already been processed.
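A compact Python sketch of this pruning loop is given below. It assumes the fully augmented tree is supplied as a vertex set and a set of arcs (lo, hi), and that the measure's operations are passed in as callables measure.combine(list), measure.update(f, v), measure.remove(f, list) and measure.invert(f); the paper's Update is direction-aware, which this sketch glosses over. It illustrates the structure of Algorithm 1, not the authors' implementation.

```python
from collections import deque

def compute_local_measures(nodes, edges, measure):
    """edges: set of arcs (lo, hi) with lo the lower endpoint. Returns up, down dicts."""
    up, down = {}, {}
    adj = {v: set() for v in nodes}                      # working copy C' of the tree
    for lo, hi in edges:
        adj[lo].add((lo, hi)); adj[hi].add((lo, hi))
    queue = deque(v for v in nodes if len(adj[v]) == 1)  # leaves of C'
    while queue:
        v = queue.popleft()
        if len(adj[v]) != 1:                             # stale queue entry
            continue
        e = next(iter(adj[v]))
        lo, hi = e
        if v == lo:                                      # e ascends from v
            seen = [down[d] for d in edges if d[1] == v]           # downward arcs at v
            swept = measure.update(measure.combine(seen), v)
            others = [measure.invert(down[d]) for d in edges if d[0] == v and d != e]
            up[e] = measure.remove(swept, others)
            down[e] = measure.invert(up[e])
        else:                                            # e descends from v (symmetric)
            seen = [up[d] for d in edges if d[0] == v]             # upward arcs at v
            swept = measure.update(measure.combine(seen), v)
            others = [measure.invert(up[d]) for d in edges if d[1] == v and d != e]
            down[e] = measure.remove(swept, others)
            up[e] = measure.invert(down[e])
        adj[lo].discard(e); adj[hi].discard(e)           # delete e from C'
        u = hi if v == lo else lo
        if len(adj[u]) == 1:
            queue.append(u)                              # u has become a leaf
        # The scans over 'edges' are O(E) each; a real implementation would
        # index arcs by endpoint, and combine([]) must return the identity.
    return up, down
```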
Unlike simplification, Algorithm 1 requires the fully augmented
contour tree, which is obtained by adding every vertex in the input
mesh to the contour tree. This makes the algorithm linear in the
input size n rather than the tree size t: it cannot be used with the
algorithms of [Pascucci and Cole-McLaughlin 2002] and [Chiang
et al. 2002], which reduce running time by ignoring regular points.
4.4 Comparison of Local Geometric Measures
In Figure 5, we show the results of simplifying the UNC Head data
set with three different geometric measures: height (persistence),
volume, and hypervolume. In each case, the contour tree has been
reduced to 92 edges and laid out using dot with no manual intervention.
In the left-hand image, height (persistence) is used as the geometric
measure. All of the edges shown are tall as a result, but on inspection
, many of these edges are caused by high-intensity voxels
in the skull or in blood vessels. Most of the corresponding objects
are quite small, while genuine objects of interest such as the eyes,
ventricular cavities and nasal cavity have already been suppressed,
because they are defined by limited ranges of voxel intensity. Also,
on the corresponding simplification curve, we observe that there
are a relatively large number of objects with large intensity ranges:
again, on further inspection, these tended to be fragments of larger
objects, particularly the skull.
In comparison, the middle image shows the results of using volume
(i.e. vertex count) as the geometric measure. Not only does this
focus attention on a few objects of relatively large spatial extent,
but the simplification curve shows a much more rapid drop-off, implying
that there are fewer objects of large volume than there are of
large height. Objects such as the eyeballs are represented, as they
have relatively large regions despite having small height. However,
we note that there are a large number of small-height edges at the
bottom of the contour tree. These edges turn out to be caused by
noise and artifacts outside the skull in the original CT scan, in which
large regions are either slightly higher or lower in isovalue than the
surrounding regions.
Finally, the right-hand image shows the results of using hypervolume
(the sum of sample values, as discussed above). In this case,
we see a very rapid dropoff of importance in the simplification
curve, with only 100 or so regions having significance. We note that
this measure preserves small-height features such as the eyeballs,
while eliminating most of the apparent noise edges at the bottom
of the tree, although at the expense of representing more skull fragments
than the volume measure. In general we have found that this
measure is better for data exploration than either height or volume,
since it balances representation of tall objects with representation
of large objects.
We do not claim that this measure is universally ideal: the choice
of simplification measure should be driven by domain-dependent
information. However, no matter what measure is chosen, the basic
mechanism of simplification remains.
Implementation
We have combined simplification with the flexible isosurface interface
of [Carr and Snoeyink 2003], which uses the contour tree
as a visual index to contours. The interface window, shown in Figures
1, 6, and 7, is divided into data, contour tree, and simplification
curve panels. The data panel displays the set of contours marked in
the contour tree panel. Contours can be selected in either panel,
then deleted, isolated, or have their isovalue adjusted. The simplification
curve panel shows a log-log plot of contour tree size against
"feature size": the highest cost of any edge pruned to reach the
given level of simplification. Selecting a point on this curve determines
the detail shown in the contour tree panel.
For efficiency, we compute contour trees for the surfaces given by
the Marching Cubes cases of [Montani et al. 1994] instead of a simplicial
or trilinear mesh, because these surfaces generate roughly
60% fewer triangles than even a minimal simplicial subdivision
of the voxels, with none of the directional biases identified by
[Carr et al. 2001], and because they are significantly simpler to
compute than the trilinear interpolant used by [Pascucci and Cole-McLaughlin
2002]. There is a loss of accuracy, but since our simplification
discards small-scale details of the topology anyway, little
would be gained from more complex interpolants.
Finally, as in [Carr et al. 2003; Pascucci and Cole-McLaughlin
2002; Chiang et al. 2002], we use simulation of simplicity [Edelsbrunner
and Mucke 1990] to guarantee uniqueness of isovalues,
then collapse zero-height edges in the tree. Implementation details
can be found in [Carr 2004].
Results and Discussion
We used a variety of data sets to test these methods, including results
from numerical simulations (Nucleon, Silicium, Fuel, Neghip,
Hydrogen), analytical methods (ML, Shockwave), CT-scans (Lobster
, Engine, Statue, Teapot, Bonsai), and X-rays (Aneurysm, Foot,
Skull). Table 1 lists the size of each data set, the size of the unsimplified
contour tree, the time for constructing the unsimplified contour
tree, and the simplification time. Times were obtained using a
3 GHz Pentium 4 with 2 GB RAM, and the hypervolume measure.
Data Set     Data Size          Tree Size    CT (s)    ST (s)
Nucleon      41 x 41 x 41       49           0.28      0.01
ML           41 x 41 x 41       695          0.25      0.01
Silicium     98 x 34 x 34       225          0.41      0.01
Fuel         64 x 64 x 64       129          0.72      0.01
Neghip       64 x 64 x 64       248          0.90      0.01
Shockwave    64 x 64 x 512      31           5.07      0.01
Hydrogen     128 x 128 x 128    8            5.60      0.01
Lobster      301 x 324 x 56     77,349       19.22     0.10
Engine       256 x 256 x 128    134,642      31.51     0.18
Statue       341 x 341 x 93     120,668      32.20     0.15
Teapot       256 x 256 x 178    20,777       33.14     0.02
Aneurysm     256 x 256 x 256    36,667       41.83     0.04
Bonsai       256 x 256 x 256    82,876       49.71     0.11
Foot         256 x 256 x 256    508,854      67.20     0.74
Skull        256 x 256 x 256    931,348      109.73    1.47
CT Head      106 x 256 x 256    92,434       21.30     0.12
UNC Head     109 x 256 x 256    1,573,373    91.23     2.48
Tooth        161 x 256 x 256    338,300      39.65     0.48
Rat          240 x 256 x 256    2,943,748    233.33    4.97

Table 1: Data sets, unsimplified contour tree sizes, and contour tree
construction time (CT) and simplification time (ST) in seconds.
The size of the contour tree is proportional to the number of local
extrema in the input data. For analytic and simulated data sets, such
as the ones shown in the upper half of Table 1, this is much smaller
than the input size. For noisy experimentally acquired data, such as
the ones shown in the lower half of Table 1, the size of the contour
tree is roughly proportional to the input size. The time required to
simplify the contour tree using local geometric measures is typically
less than one percent of the time of constructing the original
contour tree, plus the additional cost of pre-computing these measures
during contour tree construction.
6.1 Examples of Data Exploration
Figure 1 shows the result of exploring the UNC Head data set
using simplified contour trees. An appropriate level of simplification
was chosen on the simplification curve and individual contours
explored until the image shown was produced. Surfaces identifiable
as part of the skull were not chosen because they occluded the view
of internal organs, although two contours for the ventricular system
were chosen despite being occluded by the brain surrounding them.
The flexible isosurface interface is particularly useful in this context
because it lets one manipulate a single contour at a time, as shown
in the video submitted with this paper.
Figure 6: A Pregnant Rat MRI (240 x 256 x 256). Despite low-quality
data, simplifying the contour tree from 2,943,748 to 125 edges
allows identification of several anatomical features (labelled in the
figure: embryo, gut?, lungs, eyes, brain, windpipe?, shoulder blades,
breastbone).
Figure 7: CT of a Skull (256 x 256 x 106). Simplification of the
contour tree from 92,434 to 20 edges isolates the ventricular cavity,
spinal cord and spinal column.
Figure 6 shows the result of a similar exploration of a
240 x 256 x 256, low-quality MRI scan of a rat from the Whole
Frog Project at http://www-itg.lbl.gov/ITG.hm.pg.docs/
Whole.Frog/Whole.Frog.html. Again, simplification reduces
the contour tree to a useful size. Figure 7 shows a spinal column,
spinal cord and ventricular cavity identified in a 256 x 256 x 106
CT data set from the University of Erlangen-Nuremberg. Other examples
may be seen on the accompanying video.
Each of these images took less than 10 minutes to produce after
all pre-processing, using the dot tool from the graphviz package
(http://www.research.att.com/sw/tools/graphviz/)
to lay out the contour tree: we generally then made a few adjustments
to the node positions for clarity. Although dot produces reasonable
layouts for trees with 100-200 nodes, it is slow, sometimes
taking several minutes, and the layout computed usually becomes
unsatisfactory as edges are added or subtracted from the tree.
Note that in none of these cases was any special constant embedded
in the code: the result is purely a function of the topology of the
isosurfaces of the input data.
Conclusions and Future Work
We have presented a novel algorithm for the simplification of contour
trees based on local geometric measures. The algorithm is online,
meaning that simplifications can be done and undone at any
time. This addresses the scalability problems of the contour tree
in exploratory visualization of 3D scalar fields. The simplification
can also be reflected back onto the input data to produce an on-line
simplified scalar field. The algorithm is driven by local geometric
measures such as area and volume, which make the simplifications
meaningful. Moreover, the simplifications can be tailored to a particular
application or data set.
We intend to explore several future directions. We could compute a
multi-dimensional feature vector of local geometric measures, and
allow user-directed simplification of the contour tree, with different
measures being applied in different regions of the function.
The simplified contour tree also provides a data structure for
queries. With local feature vectors one could efficiently answer
queries such as "Find all contours with volume of at least 10 units
and an approximate surface-area-to-volume ratio of 5." If information
about spatial extents (e.g., bounding boxes) is computed, then
spatial constraints can also be included. Inverse problems could
also be posed: given examples of a feature (e.g., a tumor), what
should the query constraints be to find such features?
Some interface issues still need resolution, such as finding a fast
contour tree layout that is clear over a wide range of levels of simplification
but which also respects the convention that the y-position
depends on the isovalue. We would also like to annotate contours
using the flexible isosurface interface, rather than after the fact as
we have done in Figure 1 and Figures 6-7, and to enable local
simplification of the contour tree rather than the single-parameter
simplification presented here.
Isosurfaces are not the only way of visualizing volumetric data.
Other methods include boundary propagation using level set methods
or T-snakes. We believe that simplified contour trees can provide
seeds for these methods, either automatically or through user
interaction. We are adapting the flexible isosurface interface to generate
transfer functions for volume rendering. These transfer functions
would add spatial locality to volume rendering, based on the
regions corresponding to edges of the simplified contour tree.
Another possible direction is to develop more local geometric measures
for multilinear interpolants. Lastly, the algorithms we describe
work in arbitrary dimensions, but special consideration should be
given to simplification of contour trees for time-varying data.
Acknowledgements
Acknowledgements are due to the National Science and Engineering
Research Council of Canada (NSERC) for support in the
form of post-graduate fellowships and research grants, and to
the U.S. National Science Foundation (NSF) and the Institute for
Robotics and Intelligent Systems (IRIS) for research grants. Acknowledgements
are also due to those who made volumetric data
available at volvis.org and other sites.
References
Bajaj, C. L., Pascucci, V., and Schikore, D. R. 1997. The Contour Spectrum. In Proceedings of IEEE Visualization 1997, 167-173.
Bentley, J. L. 1979. Decomposable searching problems. Inform. Process. Lett. 8, 244-251.
Bremer, P.-T., Edelsbrunner, H., Hamann, B., and Pascucci, V. 2003. A Multi-resolution Data Structure for Two-dimensional Morse-Smale Functions. In Proceedings of IEEE Visualization 2003, 139-146.
Brodlie, K., and Wood, J. 2001. Recent advances in volume visualization. Computer Graphics Forum 20, 2 (June), 125-148.
Carr, H., and Snoeyink, J. 2003. Path Seeds and Flexible Isosurfaces: Using Topology for Exploratory Visualization. In Proceedings of Eurographics Visualization Symposium 2003, 49-58, 285.
Carr, H., Möller, T., and Snoeyink, J. 2001. Simplicial Subdivisions and Sampling Artifacts. In Proceedings of IEEE Visualization 2001, 99-106.
Carr, H., Snoeyink, J., and Axen, U. 2003. Computing Contour Trees in All Dimensions. Computational Geometry: Theory and Applications 24, 2, 75-94.
Carr, H. 2004. Topological Manipulation of Isosurfaces. PhD thesis, University of British Columbia, Vancouver, BC, Canada.
Chiang, Y.-J., and Lu, X. 2003. Progressive Simplification of Tetrahedral Meshes Preserving All Isosurface Topologies. Computer Graphics Forum 22, 3, to appear.
Chiang, Y.-J., Lenz, T., Lu, X., and Rote, G. 2002. Simple and Output-Sensitive Construction of Contour Trees Using Monotone Paths. Tech. Rep. ECG-TR-244300-01, Institut für Informatik, Freie Universität Berlin.
Edelsbrunner, H., and Mücke, E. P. 1990. Simulation of Simplicity: A technique to cope with degenerate cases in geometric algorithms. ACM Transactions on Graphics 9, 1, 66-104.
Edelsbrunner, H., Letscher, D., and Zomorodian, A. 2002. Topological persistence and simplification. Discrete Comput. Geom. 28, 511-533.
Edelsbrunner, H., Harer, J., and Zomorodian, A. 2003. Hierarchical Morse-Smale complexes for piecewise linear 2-manifolds. Discrete Comput. Geom. 30, 87-107.
Hilaga, M., Shinagawa, Y., Kohmura, T., and Kunii, T. L. 2001. Topology matching for fully automatic similarity estimation of 3D shapes. In SIGGRAPH 2001, 203-212.
Kettner, L., Rossignac, J., and Snoeyink, J. 2001. The Safari Interface for Visualizing Time-Dependent Volume Data Using Iso-surfaces and Contour Spectra. Computational Geometry: Theory and Applications 25, 1-2, 97-116.
Lorensen, W. E., and Cline, H. E. 1987. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. Computer Graphics 21, 4, 163-169.
Matsumoto, Y. 2002. An Introduction to Morse Theory. AMS.
Milnor, J. 1963. Morse Theory. Princeton University Press, Princeton, NJ.
Montani, C., Scateni, R., and Scopigno, R. 1994. A modified look-up table for implicit disambiguation of Marching Cubes. Visual Computer 10, 353-355.
Pascucci, V., and Cole-McLaughlin, K. 2002. Efficient Computation of the Topology of Level Sets. In Proceedings of IEEE Visualization 2002, 187-194.
Pascucci, V. 2001. On the Topology of the Level Sets of a Scalar Field. In Abstracts of the 13th Canadian Conference on Computational Geometry, 141-144.
Reeb, G. 1946. Sur les points singuliers d'une forme de Pfaff complètement intégrable ou d'une fonction numérique. Comptes Rendus de l'Académie des Sciences de Paris 222, 847-849.
Takahashi, S., Fujishiro, I., and Takeshima, Y. 2004. Topological volume skeletonization and its application to transfer function design. Graphical Models 66, 1, 24-49.
Takahashi, S., Nielson, G. M., Takeshima, Y., and Fujishiro, I. 2004. Topological Volume Skeletonization Using Adaptive Tetrahedralization. In Geometric Modelling and Processing 2004.
Tarasov, S. P., and Vyalyi, M. N. 1998. Construction of Contour Trees in 3D in O(n log n) steps. In Proceedings of the 14th ACM Symposium on Computational Geometry, 68-75.
van Kreveld, M., van Oostrum, R., Bajaj, C. L., Pascucci, V., and Schikore, D. R. 1997. Contour Trees and Small Seed Sets for Isosurface Traversal. In Proceedings of the 13th ACM Symposium on Computational Geometry, 212-220.
 | Isosurfaces;topological simplification;contour trees
18 | A Resilient Packet-Forwarding Scheme against Maliciously Packet-Dropping Nodes in Sensor Networks | This paper focuses on defending against compromised nodes' dropping of legitimate reports and investigates the misbehavior of a maliciously packet-dropping node in sensor networks. We present a resilient packet-forwarding scheme using Neighbor Watch System (NWS), specifically designed for hop-by-hop reliable delivery in face of malicious nodes that drop relaying packets, as well as faulty nodes that fail to relay packets. Unlike previous work with multipath data forwarding, our scheme basically employs single-path data forwarding, which consumes less power than multipath schemes. As the packet is forwarded along the single-path toward the base station, our scheme, however, converts into multipath data forwarding at the location where NWS detects relaying nodes' misbehavior. Simulation experiments show that, with the help of NWS, our forwarding scheme achieves a high success ratio in face of a large number of packet-dropping nodes, and effectively adjusts its forwarding style, depending on the number of packet-dropping nodes en-route to the base station. | INTRODUCTION
Wireless sensor networks consist of hundreds or even thousands
of small devices each with sensing, processing, and
communicating capabilities to monitor the real-world environment
. They are envisioned to play an important role
in a wide variety of areas ranging from critical military-surveillance
applications to forest fire monitoring and the
building security monitoring in the near future. In such a
network, a large number of sensor nodes are distributed to
monitor a vast field where the operational conditions are
harsh or even hostile. To operate in such environments, security
is an important aspect for sensor networks and security
mechanisms should be provided against various attacks
such as node capture, physical tampering, eavesdropping,
denial of service, etc [23, 33, 38].
Previous research efforts against outsider attacks in key-management
schemes [4, 13, 32] and secure node-to-node
communication mechanisms [24, 32] in sensor networks are
well-defined.
Those security protections, however, break
down when even a single legitimate node is compromised.
It turns out to be relatively easy to compromise a legitimate
node [14], which is to extract all the security information
from the captured node and to make malicious code
running for the attacker's purpose.
Even a small number of compromised nodes can pose
severe security threats on the entire part of the network,
launching several attacks such as dropping legitimate reports
, injecting bogus sensing reports, advertising inconsistent
routing information, eavesdropping in-network communication
using exposed keys, etc. Such disruption by the
insider attacks can be devastating unless proper security
countermeasures against each type of attacks are provided.
In reality, detecting all of the compromised nodes in the
network is not always possible, so we should pursue graceful
degradation [35] in the presence of a small number of compromised
nodes. The fundamental principle for defense against the
insider attacks is to restrict the security impact of a node
compromise as close to the vicinity of the compromised node
as possible.
When the attacker compromises a legitimate node, it may
first try to replicate the captured node indefinitely with the
same ID and spread them over the network. Against such
attacks, a distributed detection mechanism (based on emergent
properties [11]) has been proposed by Parno et al. [31].
In addition, Newsome et al. [30] have presented the techniques
that prevent the adversary from arbitrarily creating
new IDs for nodes.
Using cryptographic information obtained from a captured
node, attackers can establish pairwise keys with any
legitimate nodes in order to eavesdrop communication anywhere
in the network. Localized key-establishment scheme
by Zhu et al. [46] is a good solution against such an insider
attack. Since the scheme does not allow a cloned node
(by inside-attackers) to establish pairwise keys with any legitimate
nodes except the neighbors of the compromised
nodes, the cryptographic keys extracted from the compromised
node are of no use for attackers.
Compromised nodes can also inject false sensing reports
to the network (i.e. report fabrication attacks [39]), which
causes false alarms at the base station or the aggregation
result to far deviate from the true measurement. Proposed
en-route filtering mechanisms [8, 39, 41, 44, 47] that detect
and drop such false reports effectively limit the impact
of this type of attacks. Also, proposed secure aggregation
protocols [34, 40] have addressed the problem of false data
injection, and they ensure that the aggregated result is a
good approximation to the true value in the presence of a
small number of compromised nodes.
Advertising inconsistent routing information by compromised
nodes can disrupt the whole network topology. Hu et
al. [19, 20] have proposed SEAD, a secure ad-hoc network
routing protocol that uses efficient one-way hash functions
to prevent any inside attackers from injecting inconsistent
route updates. A few secure routing protocols [6, 27] in sensor
networks have been proposed to detect and exclude the
compromised nodes injecting inconsistent route updates.
Compromised nodes also can silently drop legitimate reports
(i.e. selective forwarding attacks [23]), instead of forwarding
them to the next-hop toward the base station. Since
data reports are delivered over multihop wireless paths to
the base station, even a small number of strategically-placed
packet-dropping nodes can deteriorate the network throughput
significantly. In order to bypass such nodes, most work
on secure routing and reliable delivery in sensor networks relies
on multipath forwarding scheme [5, 6, 7, 10], or interleaved-mesh
forwarding scheme [26, 29, 39, 42].
Among the insider attacks described above, this paper focuses
on defense against compromised nodes' dropping of legitimate
reports and we present a resilient packet-forwarding
scheme using Neighbor Watch System (NWS) against maliciously
packet-dropping nodes in sensor networks. We investigate
the misbehavior of a maliciously packet-dropping
node and show that an acknowledgement (ACK) that its
packets were correctly received at the next-hop node does
not guarantee reliable delivery from the security perspective.
NWS is specifically designed for hop-by-hop reliable delivery
in face of malicious nodes that drop relaying packets,
as well as faulty nodes that fail to relay packets. Unlike previous
work [10, 29, 42] with multipath data forwarding, our
scheme basically employs single-path data forwarding, which
consumes less power than multipath schemes. As the packet
is forwarded along the single-path toward the base station,
our scheme, however, converts into multipath data forwarding
at the location where NWS detects relaying nodes' misbehavior
.
NWS exploits the dense deployment of large-scale static
sensor networks and the broadcast nature of communication
pattern to overhear neighbors' communication for free.
The contribution of this paper is two-fold. First, we investigate
the misbehavior of a maliciously packet-dropping
node and propose a resilient packet-forwarding scheme, which
basically employs single-path data forwarding, in face of
such nodes, as well as faulty nodes. Second, our scheme
can work with any existing routing protocols. Since it is
designed not for securing specific protocols but for universal
protocols, it can be applied to any existing routing protocols
as a security complement.
The rest of the paper is organized as follows. Background is
given in Section 2. We present our resilient packet-forwarding
scheme in Section 3. An evaluation of the scheme is given
and discussed in Section 4. We present conclusions and future
work in Section 5.
BACKGROUND
Sensor networks typically comprise one or multiple base
stations and hundreds or thousands of inexpensive, small,
static, and resource-constrained nodes scattered over a wide
area.
An inexpensive sensor node cannot afford tamper-resistant
packaging. We assume that a large number of sensor
nodes are deployed in high density over a vast field, such
that the expected degree of a node is high; each sensor has
multiple neighbors within its communication range. Sensing
data or aggregated data are sent along the multihop route
to the base station. We assume that each sensor node has
a constant transmission range, and communication links are
bidirectional.
Our sensor network model employs a key-establishment
scheme that extends the one in LEAP [46], where the impact
of a node compromise is localized in the immediate
neighborhood of the compromised node. Since our scheme
builds on LEAP, we describe it briefly in Section 2.4.
2.2 Threat Model
Attacks launched from outsiders hardly cause much
damage to the network, since a rogue node, which does not
possess the legitimate credentials (e.g. the predistributed
key ring from the key pool [13]), fails to participate in the
network. On the other hand, there may be multiple attacks
from insiders (e.g.
dropping legitimate reports, injecting
false sensing reports, advertising inconsistent route information
, and eavesdropping in-network communication using
exposed keys, etc), and the combination of such attacks
can lead to disruption of the whole network. Thus, proper
security countermeasures (specifically designed to protect
against each type of the attacks) should be provided.
Among them, in this paper, we focus on defending against
compromised nodes' dropping of legitimate reports; Other
attacks mentioned above are effectively dealt with by several
proposed schemes as described in the previous section.
We consider a packet-dropping node as not merely a faulty
node, but also an arbitrarily malicious node. Some previous
work [3, 29, 36] on reliable delivery uses an acknowledgement
(ACK) that its packets were correctly received at the
next-hop node, in order to find out unreliable links. However
, in the presence of maliciously packet-dropping nodes,
simply receiving ACK from a next-hop node does not guarantee
that the packet will be really forwarded by the next-hop
node. For example, node u forwards a packet to compromised
node v, and node u waits for ACK from node v.
Node v sends back ACK to node u, and then node v silently
drops the packet. This simple example shows that receiving
ACK is not enough for reliable delivery in face of maliciously
packet-dropping nodes.
For more reliability, we should check whether the next-hop
node really forwards the relaying packet to its proper
next-hop node. Fortunately, due to the broadcast nature of
communication pattern in sensor networks, we can overhear
neighbors' communication for free (for now per-link encryption
is ignored). After forwarding a packet to next-hop node
v and buffering recently-sent packets, by listening in on node
v's traffic, we can tell whether node v really transmits the
packet. The Watchdog [28] mechanism (an extension to DSR [22]),
implicit ACK in M^2RC [29], and local monitoring in DICAS
[25] detect misbehaving nodes in this way. However,
this kind of simple overhearing scheme does not guarantee
reliable delivery, either.
With arbitrarily malicious nodes, we should be assured
that the node, to which the next-hop node forwards the
relaying packet, is really a neighbor of the next-hop node.
For example, node u forwards a packet to compromised node
v, and node u listens in on node v's traffic to compare each
overheard packet with the packet in the buffer.
Node v transmits the relaying packet with its intended next-hop id
set to any id in the network, such as x, that is not a
neighbor of v. Then node u overhears this packet from node
v, and considers it forwarded correctly despite the fact that
no node actually receives the packet. The packet is eventually
dropped without being detected. We refer to this attack as
the blind letter attack.
The packet-dropping attacks addressed in this paper range
from the naive case (e.g. a faulty node) to the most malicious
one (e.g. a node launching the blind letter attack). We focus
on developing a solution to such attacks.
2.3 Notation
We use the following notation throughout the paper:
u, v are principals, such as communicating nodes.
R_u is a random number generated by u.
f_K is a family of pseudo-random functions [12].
MAC(K, M_1|M_2) denotes the message authentication code (MAC) of the message formed by concatenating M_1 and M_2, with MAC key K.
2.4 Key-Establishment Scheme in LEAP
LEAP supports the establishment of four types of keys for
each sensor node: an individual key shared with the base
station, a pairwise key shared with its neighbor, a cluster
key shared with its surrounding neighbors, and a group key
shared by all the nodes in the network.
It assumes that the time interval T_est for a newly deployed
sensor node to complete the neighbor discovery phase (e.g.
tens of seconds) is smaller than the time interval T_min that is
necessary for the attacker to compromise a legitimate node
(i.e. T_min > T_est). Some existing work [1, 39] has made
similar assumptions, which are believed to be reasonable.
The four steps for a newly added node u to establish a
pairwise key with each of its neighbors are as follows:
1. Key Pre-distribution. Each node u is loaded with
a common initial key K_I, and derives its master key
K_u = f_{K_I}(u).
2. Neighbor Discovery. Once deployed, node u sets
up a timer to fire after time T_min, broadcasts its id,
and waits for each neighbor v's ACK. The ACK from
v is authenticated using the master key K_v of node v.
Since node u knows K_I, it can derive K_v = f_{K_I}(v).
   u -> * : u, R_u.
   v -> u : v, MAC(K_v, R_u|v).
3. Pairwise Key Establishment. Node u computes its
pairwise key with v, K_uv, as K_uv = f_{K_v}(u). Node v
also computes K_uv in the same way. K_uv serves as
their pairwise key.
4. Key Erasure. When its timer expires, node u erases
K_I and all the master keys of its neighbors. Every
node, however, keeps its own master key, in order to
establish pairwise keys with later-deployed nodes.
Once it erases K_I, a node will not be able to establish a
pairwise key with any other node that has also erased K_I.
Without K_I, a cloned node (created by an attacker compromising a
legitimate node after T_min) fails to establish pairwise keys
with any nodes except the neighbors of the compromised
node. In such a way, LEAP localizes the security impact of
a node compromise.
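A minimal sketch of these key derivations follows, assuming the pseudo-random function family f_K is instantiated with HMAC-SHA256 (the paper does not fix a concrete primitive); the identifiers and the value of K_I are illustrative.

```python
import hmac, hashlib

def f(key: bytes, data: bytes) -> bytes:
    """Pseudo-random function f_K(data), modelled here as HMAC-SHA256."""
    return hmac.new(key, data, hashlib.sha256).digest()

K_I = b"common initial key loaded before deployment"   # illustrative value

def master_key(node_id: bytes) -> bytes:
    """K_u = f_{K_I}(u); computable only while K_I is still held."""
    return f(K_I, node_id)

# u's side (while still a pure node): derive K_v from K_I, then K_uv = f_{K_v}(u).
K_uv_at_u = f(master_key(b"v"), b"u")

# v's side: v holds its own master key K_v, so it does not need K_I here.
K_v = master_key(b"v")
K_uv_at_v = f(K_v, b"u")

assert K_uv_at_u == K_uv_at_v   # both ends agree on the pairwise key K_uv
```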
A RESILIENT PACKET-FORWARDING SCHEME USING NEIGHBOR WATCH SYSTEM
In this section, we present our resilient packet-forwarding
scheme using Neighbor Watch System (NWS). NWS works
with the information provided by Neighbor List Verification
(NLV) to be described in Section 3.2.
3.1 Neighbor Watch System
Our scheme seeks to achieve hop-by-hop reliable delivery
in face of maliciously packet-dropping nodes, basically employing
single-path forwarding. To the best of our knowledge,
the schemes proposed so far rely on multipath forwarding
or diffusion-based forwarding, exploiting a large number of
nodes in order to deliver a single packet. ACK-based techniques
are not a proper solution at all, as explained in the
previous section.
With NWS, we can check whether the next-hop node really
forwards the relaying packet to an actual neighbor of
the next-hop node. The basic idea of our scheme is as follows:
1. Neighbor List Verification. After deployment, during
neighbor discovery phase, every node u gets to
know of not only its immediate neighbors, but also the
neighbors' respective neighbor lists (i.e. u's neighbors'
neighbor lists). The lists are verified using Neighbor
List Verification to be described in Section 3.2. Every
node stores its neighbors' neighbor lists in the neighbor
table.
2. Packet Forwarding to Next-hop. If node u has
a packet to be relayed, it buffers the packet and forwards
the packet (encrypted with cluster key of node
u so that neighbors of node u can overhear it) to its
next-hop node v. As in LEAP, a cluster key is a key
shared by a node and all its neighbors, for passive participation.
Figure 1: Neighbor Watch System. Sub-watch nodes w and y,
as well as primary-watch node u, listen in on v's traffic.
3. Designation of Watch Nodes.
Overhearing the
packet from node u to node v, among neighbors of
node u, the nodes that are also neighbors of node v (in
Figure 1, nodes w and y) are designated as sub-watch
nodes and store the packet in the buffer. Other nodes
(that are not neighbors of node v) discard the packet.
Node u itself is a primary-watch node. A primary-watch
node knows which nodes are sub-watch nodes,
since every node has the knowledge of not only its
neighbors but also their respective neighbor lists.
4. Neighbor Watch by Sub-Watch Node. Sub-watch
nodes w and y listen in on node v's traffic to compare
each overheard packet with the packet in the buffer.
To defend against blind letter attack, each of them
also checks whether the packet's intended next-hop is
a verified neighbor of node v, by looking up the neighbor
table. If all correct, the packet in the buffer is
removed and the role of the sub-watch node is over.
If the packet has remained in the buffer for longer
than a certain timeout, sub-watch nodes w and y forward
the packet (encrypted with their respective cluster
keys) to their respective next-hop nodes other than
node v. Then the role of a sub-watch node is over (each
of them is now designated as a primary-watch node for
the packet it has forwarded).
5. Neighbor Watch by Primary-Watch Node. Primary-watch
node u does the same job as sub-watch nodes.
The only difference, however, is that it listens in on
not only node v's traffic, but also sub-watch nodes w's
and y's. If the packet is correctly forwarded on by at
least one of them (nodes v, w, or y), primary-watch
node u removes the packet in the buffer and the role
of the primary-watch node is over.
Otherwise, after a certain timeout, primary-watch node
u forwards the packet (encrypted with its cluster key)
to its next-hop other than node v. A sketch of this watch
logic is given below.
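The sketch below summarizes the watch logic of steps 3-5. The neighbor table maps each neighbor to the set of its verified neighbors; the buffer holds packets currently being watched; forward_alt() stands in for the routing layer's choice of an alternative next-hop. All names and the timeout value are assumptions for illustration, not part of the paper's specification.

```python
import time

WATCH_TIMEOUT = 2.0   # seconds (assumed)

def start_watch(buffer, packet, forwarder):
    """Become a (sub- or primary-) watch node for `packet` sent to `forwarder`."""
    buffer[packet] = (forwarder, time.time() + WATCH_TIMEOUT)

def on_overhear(buffer, neighbor_table, packet, forwarder, claimed_next_hop):
    """The watched forwarder was overheard retransmitting `packet`."""
    if packet not in buffer or buffer[packet][0] != forwarder:
        return
    # Blind-letter defense: the claimed next-hop must be a verified neighbor
    # of the forwarder according to the neighbor table.
    if claimed_next_hop in neighbor_table.get(forwarder, set()):
        del buffer[packet]            # forwarded correctly; the watch is over

def check_timeouts(buffer, forward_alt):
    """Re-forward any packet whose forwarder failed to relay it in time."""
    now = time.time()
    for packet, (forwarder, deadline) in list(buffer.items()):
        if now >= deadline:
            del buffer[packet]
            forward_alt(packet, exclude=forwarder)   # multipath kicks in here
```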
As the packet is forwarded on, this procedure (except for
Neighbor List Verification) of NWS is performed at each
hop, so that hop-by-hop reliable delivery can be achieved
while depending mainly on single-path forwarding. On the
other hand, in the previous approaches [29, 39, 42], when
forwarding a packet, a node broadcasts the packet with no
designated next-hop, and all neighbors with smaller costs
(the cost at a node is the minimum energy overhead to
forward a packet from this node to the base station) or
within a specific geographic region continue forwarding
the packet anyway. For example, in Figure 1, if nodes v,
w, and y have smaller costs than node u in the previous
approaches, they all forward the packet from node u (a
broadcast transmission with no designated next-hop; if
needed, the packet is encrypted with a cluster key in order
for all neighbors to overhear it). In our scheme, however,
sub-watch nodes w and y just keep watch on the designated
next-hop node v, instead of unconditionally forwarding the
packet. If no packet-dropping occurs en-route to the base
station, the packet may be forwarded along a single path
all the way through.

Figure 2: An example of our packet-forwarding scheme. Only
the nodes that relay the packet are presented. With the help of
sub-watch nodes (grey ones), our scheme bypasses two
packet-dropping nodes en-route to the base station.

However, a packet-dropping triggers multipath forwarding
for the dropped packet. If the designated next-hop
node v in Figure 1 has not forwarded the relaying packet to
its certified neighbor by a certain timeout, sub-watch nodes
w and y forward the packet to their respective next-hops.
At that point, the packet is sent over multiple paths. Since
the location where the packet-dropping occurs is likely in
an unreliable region, this prompt conversion to multipath
forwarding augments the robustness of our scheme. The
degree of multipath depends on the number of sub-watch
nodes. Figure 2 shows an example of our packet-forwarding
scheme, bypassing two packet-dropping nodes en-route to
the base station. If a node utilizes a cache [16, 21] for
recently-received packets, it can suppress the same copy of
a previously-received one within a certain timeout, as nodes
u and v do in Figure 2.
Our scheme requires that a relaying packet be encrypted
with the cluster key of the forwarding node, in order
that all its neighbors can decrypt and overhear it. In fact,
per-link encryption provides better robustness to a node
compromise, since a compromised node can decrypt only
the packets addressed to it. Thus, there exists a tradeoff
between resiliency against packet-dropping and robustness
to a node compromise. However, encryption with a cluster
key provides an intermediate level of robustness to a node
compromise [24] (a compromised node can overhear only
its immediate neighborhood), and also supports local broadcast
(i.e. resiliency against packet-dropping), so that we can
achieve graceful degradation in face of compromised nodes.
To make our scheme work (against the blind letter attack), we
must address the problem of how a node proves that it really
has the claimed neighbors. This is the same problem as
how a node verifies the existence of its neighbors' neighbors.
Clearly, a node gains knowledge of its direct neighbors
through the neighbor discovery and pairwise key establishment
phases. However, in the case of two-hop-away neighbors,
as in Figure 1, malicious node v can inform its neighbor u
that it also has neighbor node x (any possible id in the network)
which in fact is not a neighbor of node v. Node u has
to believe it, since node x is not a direct neighbor of node
u, and only node v itself knows its actual surrounding
neighbors. Then, how do we verify the neighbors' neighbors?
The answer to this critical question is described in
the next subsection.
3.2 Neighbor List Verification
To verify neighbors' neighbors, we present Neighbor List
Verification (NLV) which extends the pairwise key establishment
in LEAP. During neighbor discovery in LEAP, two
messages are exchanged between neighbors to identify each
other. On the other hand, NLV adopts three-way handshaking
neighbor discovery, in order to identify not only communicating
parties but also their respective neighbors.
NLV has two cases of neighbor discovery. One is neighbor
discovery between two nodes that are both still within their
initial T_min (referred to as pure nodes); here T_min is, as in
LEAP [46], the time interval necessary for the attacker to
compromise a legitimate node. The other is neighbor discovery
between a newly-deployed node within its initial T_min and
an existing node past its initial T_min (referred to as an adult
node).
Neighbor Discovery between Pure Nodes. The neighbor
list verification process between pure nodes is quite simple.
If a pure node broadcasts its neighbor list before the elapse of
its initial T_min, we can accept the list as verifiable. Thus, the
key point here is to keep track of each other's T_min, and to
make sure that both broadcast their respective neighbor lists
before their respective T_min. The following shows the three-way
handshaking neighbor discovery between pure nodes u
and v:
u -> * : u, R_u.
v -> u : M_v, MAC(K_v, R_u|K_u|M_v), where M_v = (v, T_v, R_v).
u -> v : M_u, MAC(K_uv, R_v|M_u), where M_u = (u, T_u).
where T_v and T_u are the amounts of time remaining until
T_min of v and T_min of u, respectively. Once deployed, node
u sets up a timer to fire after time T_min. Then, it broadcasts
its id, and waits for each neighbor v's ACK. The ACK from
every neighbor v is authenticated using the master key K_v of
node v. Since node u knows K_I, it can derive K_v = f_{K_I}(v)
(recall from Section 2.4 that each node is loaded with the
common initial key K_I, derives its master key from it, and
after time T_min erases K_I and all the master keys of its
neighbors). The ACK from node v contains T_v, the amount
of time remaining until T_min of node v. If T_v is a non-zero
value, node v claims to be a pure node. K_u in the MAC
proves node v to be a pure node, since pure node v should
know K_I and derive K_u = f_{K_I}(u). Node u records the
resulting deadline (T_v added to the current time of node u)
in the entry for node v in the neighbor table. Node u computes
its pairwise key with v, K_uv = f_{K_v}(u); node v computes
K_uv in the same way, and K_uv serves as their pairwise key.
Node u also generates MAC(K_v, v|u) (which means that v
certifies u as an immediate neighbor), and stores it as a
certificate.
The ACK from node u also contains T_u, the amount of
time remaining until T_min of u. This ACK is authenticated
using their pairwise key K_uv, which proves that node u is a
pure node and proves u's identity. Node v then records the
corresponding deadline (T_u added to the current time of v)
in the entry for u in the neighbor table. It also generates
MAC(K_u, u|v) and stores it as a certificate. Then, the
three-way handshaking is done.
Every pure node u broadcasts its neighbor list just prior
to T_min of u. Each receiving neighbor v checks whether the
receiving time at v is prior to the deadline recorded for u in
the neighbor table. If yes, the neighbor list of u is now
certified by each neighbor v.
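For concreteness, the following sketch builds the three message bodies of this handshake, again instantiating both f_K and MAC with HMAC-SHA256 (an assumption); timers, radio I/O, and the neighbor table are omitted, and the byte encodings are illustrative.

```python
import hmac, hashlib, os

def prf(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

K_I = b"common initial key"                    # illustrative
K_u, K_v = prf(K_I, b"u"), prf(K_I, b"v")      # master keys
K_uv = prf(K_v, b"u")                          # pairwise key K_uv = f_{K_v}(u)

# Message 1: u broadcasts its id and a nonce R_u.
R_u = os.urandom(8)

# Message 2: v answers with M_v = (v, T_v, R_v) and MAC(K_v, R_u | K_u | M_v).
# Including K_u proves v is still a pure node (it can derive K_u from K_I).
R_v, T_v = os.urandom(8), b"remaining-time-v"
M_v = b"v" + T_v + R_v
mac2 = prf(K_v, R_u + K_u + M_v)

# Message 3: u answers with M_u = (u, T_u) and MAC(K_uv, R_v | M_u),
# authenticated under the freshly derived pairwise key.
T_u = b"remaining-time-u"
M_u = b"u" + T_u
mac3 = prf(K_uv, R_v + M_u)

# u also stores the certificate MAC(K_v, v|u): v certifies u as its neighbor.
cert_u = prf(K_v, b"v" + b"u")
```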
Figure 3: Neighbor Discovery between Pure Node x and Adult
Node u. Grey and white nodes represent adult and pure nodes,
respectively.

Neighbor Discovery between a Pure Node and an
Adult Node. After most nodes have completed the bootstrapping
phase, new nodes can be added to the network. Consider
Figure 3. The issue here is how adult node u can assure
its existing neighbors (v and w) of the existence of its
newly-added neighbor x. This is a different situation from
the above neighbor list verification case between two pure
nodes. Thus, the messages exchanged during the three-way
handshaking are somewhat different in this case. The following
shows the three-way handshaking neighbor discovery
between pure node x and adult node u:
x -> * : x, R_x.
u -> x : M_u, MAC(K_u, R_x|M_u), where M_u = (u, T_u, R_u, v, MAC(K_v, v|u), w, MAC(K_w, w|u));
         MAC(K_v, v|u) and MAC(K_w, w|u) are u's certificates for its neighbors v and w.
x -> u : M_x, MAC(K_xu, R_u|M_x), where M_x = (x, T_x, MAC(K_x, x|u), v, MAC(K_v, x|u), w, MAC(K_w, x|u));
         MAC(K_x, x|u) is a certificate, and MAC(K_v, x|u), MAC(K_w, x|u) are one-time certificates.
Newly-added node x sets up a timer to fire after time T_min.
Then, it broadcasts its id, and waits for each neighbor u's
ACK. The ACK from every neighbor u is authenticated using
the master key K_u of node u. Since node x knows K_I,
it can derive K_u = f_{K_I}(u). The ACK from node u contains
T_u, the amount of time remaining until T_min of u. If T_u is
zero, node u is an adult node that may already have multiple
neighbors as in Figure 3. Node u reports its certified
neighbor list (v and w) to x by including their respective
certificates in the ACK. Node x verifies u's neighbor list by
examining each certificate, since x can generate any certificate
with K_I. If all correct, x computes its pairwise key with
u, K_xu = f_{K_u}(x). Node x also generates MAC(K_u, u|x) and
stores it as a certificate.
The ACK from x also contains T_x, the amount of time
remaining until T_min of x. This ACK is authenticated using
their pairwise key K_xu, which proves that node x is a pure
node and proves x's identity. Node u then records the
resulting deadline (T_x added to the current time of u) in the
entry for x in the neighbor table. Since adult node u cannot
generate MAC(K_x, x|u) by itself, pure node x provides the
certificate for u in the ACK. Node x also provides one-time
certificates for each of u's certified neighbors (v and w); a
one-time certificate, for instance MAC(K_v, x|u), assures v
that x is an immediate neighbor of u, and is generated by
pure node x with the master key of v. Then, the three-way
handshaking is done.
After that, adult node u broadcasts the one-time certificates
(from newly-discovered pure node x), in order to assure u's
existing neighbors (v and w) of the discovery of new neighbor
x. The packet containing one-time certificates is as follows:
u -> * : M_u, MAC(K^c_u, M_u), where M_u = (u, x, v, MAC(K_v, x|u), w, MAC(K_w, x|u), K^A_u);
         MAC(K_v, x|u) and MAC(K_w, x|u) are one-time certificates.
where x is a new neighbor of u, K^A_u is a local broadcast
authentication key in u's one-way key chain, and K^c_u is the
cluster key of u. Each receiving neighbor v of u verifies u's
new neighbor x by examining the one-time certificate designated
for v, MAC(K_v, x|u). If it is valid, node x is now certified
by each neighbor v of u. Then, the one-time certificates can
be erased, since they are of no use any more.
Broadcast authentication only with symmetric keys such
as the cluster key K^c_u fails to prevent an impersonation attack,
since every neighbor of u shares the cluster key of u. Thus,
we employ the reverse disclosure of a one-way key chain K^A_u
as in LEAP.
Just prior to T_min of x, pure node x broadcasts its neighbor
list. Each receiving neighbor u of x checks whether the
receiving time at u is prior to the deadline recorded for x in
the neighbor table. If yes, the neighbor list of x is now
certified by each neighbor u.
In summary, through the proposed three-way handshaking
neighbor discovery process, pure node u identifies each
immediate neighbor v and v's certified neighbor list (if v is
an adult node), and keeps track of T_min of v. Just prior
to T_min of u, node u broadcasts its direct neighbor list so
that every neighbor of u accepts the list as verifiable. Then,
node u becomes an adult node. After that, if newly-added
node x initiates neighbor discovery with adult node u, node
u identifies pure node x, keeps track of T_min of x, provides
u's certified neighbor list to x, and, in return, takes one-time
certificates from x. Node u then broadcasts these one-time
certificates, in order to assure u's existing neighbors of the
discovery of new neighbor x. Thus, every time adult node u
discovers newly-added node x through three-way handshaking,
node u informs (by broadcasting) its existing neighbors
of the discovery of new neighbor x. Also, whenever receiving
neighbor list information from pure neighbor x, node u
checks whether the receiving time at u is prior to the deadline
recorded for x in the neighbor table. If yes, u now accepts
the neighbor list of x as verifiable.

Table 1: An example of the Neighbor Table of u.

Neighbor ID    Certificate       Verified Neighbor List
v              MAC(K_v, v|u)     u, w, t
w              MAC(K_w, w|u)     u, v, z
x              MAC(K_x, x|u)     u, r, q
Through the above neighbor list verification in the bootstrapping
phase, every node gets the knowledge of its neighbors'
certified neighbors. Our Neighbor Watch System makes
use of this information to prevent blind letter attack. With
this knowledge, watch nodes are able to check whether the
relaying packet's intended next-hop is a verified neighbor of
the forwarding node.
3.3 Neighbor Table Maintenance
The information obtained through neighbor list verification
(e.g. its direct neighbors, corresponding certificates,
neighbors' neighbor lists, etc) is stored in the neighbor table
of each node. Table 1 shows an example of the neighbor
table of node u. In densely-deployed sensor networks, the
expected degree of a node is high. However, in this example,
for simplicity, node u has only three neighbors v, w, and x
as in Figure 3.
The entries in the neighbor table are accessed and maintained
with immediate neighbor IDs. For example, if node
u overhears the packet sent from w to v, node u begins to
listen in on v's traffic as a sub-watch node (since the neighbor
table of u has both v's and w's entries in it). Unless v
forwards the packet to a node of the Verified Neighbor List
in v's entry by a certain timeout, sub-watch node u will forward
the packet to its next-hop other than v; many existing
routing protocols [5, 18, 21, 27, 37, 43] enable each node to
maintain multiple potential next-hop. Once forwarding the
packet, sub-watch node u becomes a primary-watch node
and begins to listen in on its next-hop's traffic as described
above.
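A minimal sketch of such a neighbor table follows, with illustrative field and method names; it shows only the two lookups the watch logic needs.

```python
from dataclasses import dataclass, field

@dataclass
class NeighborEntry:
    certificate: bytes                       # e.g. MAC(K_v, v|u)
    verified_neighbors: set = field(default_factory=set)

class NeighborTable:
    def __init__(self):
        self.entries = {}                    # neighbor id -> NeighborEntry

    def should_watch(self, sender, receiver):
        # u becomes a sub-watch node for a packet sent from `sender` to
        # `receiver` only if both are immediate neighbors of u.
        return sender in self.entries and receiver in self.entries

    def is_valid_next_hop(self, forwarder, claimed_next_hop):
        # The blind-letter check: the claimed next-hop must appear in the
        # forwarder's verified neighbor list.
        entry = self.entries.get(forwarder)
        return entry is not None and claimed_next_hop in entry.verified_neighbors

# Example mirroring Table 1: u's entry for neighbor v.
table = NeighborTable()
table.entries["v"] = NeighborEntry(certificate=b"MAC(K_v, v|u)",
                                   verified_neighbors={"u", "w", "t"})
print(table.is_valid_next_hop("v", "w"))     # True
```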
If newly-added node y initiates the three-way handshaking
with u, node u provides its neighbor list to y by sending
certificates in the neighbor table. Node u, in return from
node y, takes the certificate for y and one-time certificates
for u's existing neighbors. Then, node u stores the certificate
in the new entry for y. However, node u does not store the
one-time certificates but broadcasts them to its neighbors.
If new neighbor y broadcasts its neighbor list within T_min,
node u stores the list in the entry for y.
If node u is compromised, not only cryptographic key
information but also certificates in the neighbor table are
exposed. However, the attacker cannot misuse these certificates
for other purposes. Since a certificate only attests
neighborship between two specific nodes, it cannot be applied
to any other nodes. In fact, it can even be made public.
However, colluding nodes can deceive a pure node anyway,
by fabricating a bogus certificate. We will describe this limitation
in Section 4.4.
EVALUATION
In this section, we evaluate the communication and storage
cost, and analyze the security of our resilient forwarding
scheme (Neighbor Watch System) as well as Neighbor List
Verification. We then present the simulation results of our
forwarding scheme.
4.1 Communication Cost
Unlike the previously proposed diffusion-based reliable-forwarding
schemes [21, 29, 39, 42] that exploit a large number
of nodes to deliver a single packet, our scheme requires
only the designated next-hop node to relay the packet, under
the supervision of watch nodes. We note that, like overhearing
by watch nodes in our scheme, those diffusion-based
schemes require each node to listen to all its neighbors, since
they forward a packet by broadcasting with no designated
next-hop. With a smaller number of relaying nodes, our
scheme makes a report successfully reach the base station.
Thus, the average communication cost of our forwarding
scheme for delivery of a single packet is smaller than those
of the previous schemes.
Our neighbor list verification during the bootstrapping
phase requires the three-way handshaking neighbor discovery
. Unlike the neighbor discovery between two pure nodes,
the size of the messages exchanged between a pure and an
adult node varies with the degree of the adult node. A large
number of certificates caused by the high degree can be overburdensome
to a single TinyOS packet which provides 29
bytes for data. Considering 8-byte certificates and a 4-byte
message authentication code (MAC) (see footnote 7), the adult node is able
to include at most two neighbors' information in a single
TinyOS packet. Thus, when the entire neighbor list cannot
be accommodated within a single packet, the node should
allot the list to several packets and send them serially. In a
network of size N with the expected degree d of each node,
the average number of packets invoked by a newly-added
node per each node is nearly (d - 1)^2 / (2(N - 1)).
Therefore, as node density d grows, the total number
of packets transmitted from adult nodes to a newly-added
node increases. However, neighbor discovery between a pure
and an adult node occurs much less than between two pure
nodes, since most neighbor discoveries throughout the network
are between two pure nodes in the early stage of the
network. Neighbor discovery between a pure and an adult
node occurs generally when a new node is added to the network
.
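As a small illustration of this cost estimate, the sketch below simply evaluates the approximation above for a few densities; the function name and the chosen parameter values are only for illustration.

```python
def avg_packets_per_node(N, d):
    # The paper's estimate of packets triggered per existing node when one
    # new node joins the network: (d - 1)^2 / (2 (N - 1)).
    return (d - 1) ** 2 / (2 * (N - 1))

for d in (10, 20, 30):
    # the per-node cost grows with the expected degree d, as noted in the text
    print(d, round(avg_packets_per_node(N=300, d=d), 3))
```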
4.2
Storage Overhead
In LEAP, each node keeps four types of keys and a manageable
length of hash chain, which is found to be scalable.
In our scheme, each node needs to additionally store its direct
neighbors' certificates and their respective neighbor lists
as in Table 1. Thus, for a network of the expected degree
d and the byte size l of node ID, the additional storage requirement
for each node is d (8 + ld) bytes.
Although our storage requirement for these neighbor lists
is O(d^2), for a reasonable degree d, memory overhead does
not exceed 1 KB (a Berkeley MICA2 Mote with 128 KB
flash memory and 4 KB SRAM). For example, when d = 20
and l = 2, a node needs 960 bytes of memory to store such
information.
Footnote 7: A 4-byte MAC is found to be not detrimental in sensor networks,
as in TinySec [24], which employs a 4-byte MAC.
Figure 4: Examples of critical area C_1 and C_2.
If node density of a network is so high that the required
space for those neighbor lists significantly increases and the
storage utilization becomes an issue, we can employ a storage-reduction
technique such as Bloom filter [2]. For example,
when d = 30 and l = 2, a node requires 2,040 bytes of additional
space mainly for the neighbor lists. Instead of storing
neighbors' neighbor lists, applying each of the neighbor lists
(480 bits) to a Bloom filter (of 5 hash functions mapping to
a 256 bit vector), a node needs the reduced space of 1,200
bytes for such information (with the false positive probability
= 0.02).
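The storage figures quoted above can be reproduced with a short calculation. The false-positive value uses the classical Bloom filter approximation (1 - e^(-kn/m))^k, which is an assumption about how the 0.02 figure in the text was obtained.

```python
from math import exp

def plain_storage_bytes(d, l, cert_bytes=8):
    # Certificates plus neighbors' neighbor lists: d * (8 + l*d) bytes
    return d * (cert_bytes + l * d)

def bloom_storage_bytes(d, cert_bytes=8, filter_bits=256):
    # Store one 256-bit Bloom filter per neighbor instead of its raw neighbor list
    return d * (cert_bytes + filter_bits // 8)

def bloom_false_positive(n_items, m_bits, k_hashes):
    # Classical approximation (1 - e^{-kn/m})^k
    return (1 - exp(-k_hashes * n_items / m_bits)) ** k_hashes

print(plain_storage_bytes(d=20, l=2))               # 960 bytes, as in the example above
print(plain_storage_bytes(d=30, l=2))               # 2040 bytes
print(bloom_storage_bytes(d=30))                    # 1200 bytes with Bloom filters
print(round(bloom_false_positive(30, 256, 5), 3))   # close to the 0.02 quoted in the text
```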
4.3
Resilience to Packet-Dropping Attacks
In face of maliciously packet-dropping nodes, the higher
degree of multipath we provide, the more resiliency our
scheme achieves against such attacks. The average degree
of multipath depends on the number of sub-watch nodes
around a packet-dropping node. Sub-watch nodes should
be located in the region within the communication range of
both forwarding node u and designated next-hop v. We refer
to such a region as the critical area. As in Figure 4, if nodes
u and v are located farther away, the size of critical area C_2
gets smaller than that of C_1, and the probability p_c that
at least one sub-watch node exists in the critical area goes
down. The probability p_c is
p_c = 1 - (1 - c)^(d-1),
where c is the ratio of the critical area size to the node's
communication range, and d is the expected degree of the node.
To determine the appropriate degree d, we set the smallest
critical area C_2 in Figure 4 as a lower bound case (c = 0.4).
Figure 5 shows that, even in the lower bound critical area,
with d = 6 and d = 10, probability p_c is above 0.9 and above
0.99, respectively.
Since, in a network of degree d, the probability that there
exist m sub-watch nodes in the critical area of ratio c is
p(m) = C(d-1, m) c^m (1 - c)^(d-m-1),
the expected number of sub-watch nodes, m, in the critical
area is given by
E[m] = (d - 1)c.
Figure 5: Probability (p_c) that at least one sub-watch
node exists in the lower bound (c = 0.4) critical area.
Thus, in the lower bound (c = 0.4) critical area, when d =
10, 15, 20, the number of sub-watch nodes (i.e., the degree
of multipath) is 3.6, 5.6, and 7.6 on average, respectively. This
shows that the higher the degree of each node, the higher the
degree of multipath and the more resiliency our scheme has
against packet-dropping nodes.
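The probabilities and expected multipath degrees quoted in this subsection follow directly from the formulas above; a minimal sketch:

```python
from math import comb

def p_c(c, d):
    # Probability that at least one sub-watch node lies in the critical area
    return 1 - (1 - c) ** (d - 1)

def p_m(m, c, d):
    # Probability of exactly m sub-watch nodes in the critical area
    return comb(d - 1, m) * c ** m * (1 - c) ** (d - 1 - m)

def expected_subwatch(c, d):
    # E[m] = (d - 1) c
    return (d - 1) * c

for d in (6, 10, 15, 20):
    # reproduces p_c > 0.9 at d = 6, p_c about 0.99 at d = 10,
    # and E[m] = 3.6, 5.6, 7.6 for d = 10, 15, 20 with c = 0.4
    print(d, round(p_c(0.4, d), 3), expected_subwatch(0.4, d))
```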
4.4
The Security of Neighbor List Verification
Our Neighbor List Verification(NLV) keeps the nice properties
of LEAP. Adult nodes fail to establish pairwise keys
with any adult nodes in arbitrary locations, so that the impact
of a node compromise is localized. NLV performs the
three-way handshaking neighbor discovery, instead of two-message
exchange in LEAP. The three-way handshaking enables
each node to verify not only its direct neighbors but
also their respective neighbor lists.
Moreover, this three-way handshaking can be a potential
solution to deal with the irregularity of radio range [15,
37, 45]. In reality, due to noise and some environmental
factors, the radio range of each node is not exactly circular.
So, communication links among nodes can be asymmetric:
node u can hear node v, which is unable to hear u. With
a two-message exchange, only the node initiating the neighbor
discovery is assured of the link's bidirectionality. By the
three-way handshaking, both neighbors can be assured of
their symmetric connectivity.
With NLV, only the verified lists are stored and utilized
for our packet-forwarding scheme. NLV verifies the neighbor
list of an adult node with certificates.
These certificates
merely attest neighborship between two specific nodes. Even
if a node is compromised, the attacker fails to abuse the
certificates of the captured node for other purpose.
However, collusion among compromised nodes can fabricate
bogus certificates in order to deceive a newly-added
node. For example, consider two colluding nodes u and v at
the different locations. When compromised node u discovers
newly-added node x, node u provides x with u's neighbor
list (maliciously including v in it). Even though node v is
not an actual neighbor of u, colluding node v can generate
the bogus certificate for u, MAC(K_v, v|u). Then, x falsely
believes that v is a direct neighbor of u. This attack, however,
affects only the one newly-added node x. Thus, when
compromised node u tries to launch the blind letter attack
(compromised node u transmits the relaying packet with its
next-hop id as v, so that x considers it forwarded correctly),
other surrounding adult neighbors of u can still detect it
anyway.
The more serious case is that colluding nodes exploit a
newly-added node to generate bogus one-time certificates.
For example, consider two colluding nodes u and v that
share all their secret information as well as all their certificates
. When newly-added node x initiates the three-way
handshaking with u, compromised node u pretends to be
v and provides x with v's neighbor list. Then, x in return
provides u with one-time certificates for each neighbor of
v; these one-time certificates falsely attest that v has new
neighbor x. Node u sends this information to v over the
covert channel. Then, v broadcasts these one-time certificates
, and neighbors of v falsely believe that x is a direct
neighbor of v.
Unfortunately, we do not provide a proper countermeasure
to defend against this type of man-in-the-middle attack.
However, we point out that this type of attack has
to be launched in a passive manner. The adversary has
to get the chance of discovery of a newly-added node. In
other words, compromised nodes wait for the initiation of
the three-way handshaking from a newly-added node. Since
the attacker does not know where the new nodes will be
added, it has to compromise a sufficient number of legitimate
nodes in order to increase the probability of discovery
of newly-added nodes.
As an active defense against such man-in-the-middle attacks,
we can apply a node replication detection mechanism
such as Randomized or Line-Selected Multicast [31], which
revokes a node whose ID is claimed at two different locations.
To successfully launch such man-in-the-middle attacks, two
colluding nodes should pretend to be each other so that each
of them claims to be at two different locations with the same
ID. The location-binding key-assignment scheme by Yang et al.
[39], with a little modification, can also be a good solution
to such attacks. Since it binds secret keys with nodes' geographic
locations, the key bound to the particular location
cannot be used at any arbitrary locations. Adopting this,
NLV can check whether the claimed neighbors are really located
within geographically two hops away.
4.5
Simulations
To further evaluate the performance of our resilient forwarding
scheme, we run simulations of our scheme in the
presence of packet-dropping nodes on a network simulator,
ns-2 [9].
4.5.1
Simulation Model
In our simulations, we deploy N sensor nodes uniformly at
random within a 500 × 500 m^2 target field, with N = 300 and
600. Each sensor node has a constant transmission range of
30m, so that the degree of each node is approximately 10
(N = 300) and 20 (N = 600) on average. We position a base
station and a source node in opposite corners of the field, at
a fixed point (50, 50) and (450, 450), respectively. They are
located approximately 18 hops away from each other.
We distribute compromised nodes over an inner square
area with 200m each side (from 150m to 350m of each side
of the 500 × 500 m^2 target area). Thus, compromised nodes
are strategically-placed in between the base station and the
source node. In the simulations, those compromised nodes
drop all the relaying packets.
Figure 6: Simulation Results (averaged over 100 runs).
(a) Success ratio (N = 300, x = 0 to 50). (b) Success ratio (N = 600, x = 0 to 100).
(c) The number of relaying nodes with N = 300. (d) The number of relaying nodes with N = 600.
Each panel compares Single Path Forwarding with NWS; the x-axis is the number of packet-dropping nodes.
We use the typical TinyOS beaconing [17] with a little
modification as a base routing protocol in our simulations.
We add a hop count value in a beacon message (see footnote 9). To have
multiple potential next-hops, when receiving a beacon with
the same or better hop count than the parent node's, each
node marks the node sending the beacon as a potential next-hop
.
Each simulation experiment is conducted on 100 different
network topologies, and each result is averaged over these
100 runs.
4.5.2
Simulation Results
In the presence of compromised nodes dropping all the relaying
packets, we measure the success ratio (i.e. the percentage
of the packets that successfully reach the base station
from the source) and the number of relaying nodes by
the primitive single-path forwarding and with NWS in a
network of size N, with N = 300 and 600.
Footnote 9: The base station initiates the beacon-broadcasting, which
floods through the network, in order to set up a routing tree.
Figure 6(a) shows the success ratio in face of x packet-dropping
nodes (varying x = 0 to 50) in a 300-sensor-node
network with the approximate degree d = 10. Although
the success ratio gently decreases with x, it stays above
0.8 even with x = 30, with the help of NWS. This tendency
of decreasing success ratio can be attributed to the
degree d = 10 (3.6 sub-watch nodes on average) as well as
an increasing number of packet-dropping nodes. Due to the
strategic placement of compromised nodes in our simulations,
as x increases, it is likely that all of a forwarding
node's potential sub-watch nodes are themselves packet-dropping
nodes.
Figure 6(c) shows the number of nodes
that relay the packet from the source to the base station
in the same experiments. Since the source is located about
18 hops away from the base station, the number of relaying
nodes only with the single-path forwarding remains at 18.
With NWS, the number of relaying nodes increases with x,
in order to bypass an increasing number of packet-dropping
nodes. In face of such nodes, our scheme converts single-path
forwarding into multipath data forwarding, with the
help of sub-watch nodes around such packet-dropping nodes.
Utilizing a cache for recently-received packets can suppress
the same copy within a certain timeout, which reduces the
number of relaying nodes.
Figure 6(b) shows the success ratio in a 600-sensor-node
network with the approximate degree d = 20 with x packet-dropping
nodes (varying x=0 to 100). Unlike that with N =
300, the success ratio stays constantly at around 0.99 even
with x = 100, with the help of NWS. This tendency of high
success ratio can be mainly attributed to the degree d = 20
(7.6 sub-watch nodes on average in the lower bound case),
which is found to be high enough to bypass a large number
of packet-dropping nodes. Figure 6(d) shows the number
of relaying nodes from the source to the base station in the
same experiments. With NWS, the increase in the number
of relaying nodes with x is more conspicuous than that with
N = 300, since more than twice as many sub-watch nodes
help forward the packets, so that they can bypass a large number
of packet-dropping nodes anyway.
In the simulation results, we note that our forwarding
scheme dynamically adjusts its forwarding style, depending
on the number of packet-dropping nodes en-route to the base
station. As in Figures 6(c) and 6(d), while there exist none
or a small number of packet-dropping nodes on the way, our
scheme works almost like the single-path forwarding with
the help of a few additional relaying nodes. On the other
hand, when confronting a large number of packet-dropping
nodes, our scheme makes full use of the help from additional
relaying nodes, in order to successfully deliver the packet to
the base station at any cost to the best efforts.
CONCLUSIONS AND FUTURE WORK
In this paper we focus on defending against compromised
nodes' dropping of legitimate reports. We have presented
a resilient packet-forwarding scheme using Neighbor Watch
System (NWS) against maliciously packet-dropping nodes in
sensor networks. In face of such nodes, NWS is specifically
designed for hop-by-hop reliable delivery, and the prompt
reaction of the conversion from single-path to multipath forwarding
augments the robustness of our scheme so that the
packet successfully reaches the base station.
In future work, we plan on further improving NLV to defend
against the man-in-the-middle attacks based on collusion among
compromised nodes. Such attacks can be prevented by using
a master key derived from not only a node ID but also its
geographic information. We will also seek to address the O(d^2)
storage requirement for the neighbors' neighbor lists. Finally
, we would like to perform an intensive experimental
evaluation to compare our scheme with other reliable delivery
protocols [10, 29, 42].
ACKNOWLEDGMENTS
This work was supported by grant No.R01-2006-000-10073-0
from the Basic Research Program of the Korea Science and
Engineering Foundation.
REFERENCES
[1] R. Anderson, H. Chan, and A. Perrig, Key Infection:
Smart Trust for Smart Dust, IEEE ICNP 2004
[2] Burton H. Bloom, Space/Time Trade-offs in Hash
Coding with Allowable Errors, Communication of the
ACM, vol. 13, 422-426, 1970
[3] B. Carbunar, I. Ioannidis, and C. Nita-Rotaru,
JANUS: Towards Robust and Malicious Resilient
Routing in Hybrid Wireless Networks, ACM workshop
on Wireless security (WiSe'04), Oct. 2004
[4] H. Chan, A. Perrig, and D. Song, Random Key
Predistribution Schemes for Sensor Networks, IEEE
Symposium on Security and Privacy, pp. 197-213, May
2003.
[5] B. Deb, S. Bhatnagar, and B. Nath, ReInForM:
Reliable Information Forwarding Using Multiple Paths
in Sensor Networks, IEEE Local Computer Networks
(LCN 2003), pp. 406-415, Oct. 2003.
[6] J. Deng, R. Han, and S. Mishra, A Performance
Evaluation of Intrusion- Tolerant Routing in Wireless
Sensor Networks, 2nd International Workshop on
Information Processing in Sensor Networks (IPSN 03),
pp. 349-364, Apr. 2003.
[7] J. Deng, R. Han, and S. Mishra, Intrusion Tolerance
and Anti-Traffic Analysis Strategies for Wireless
Sensor Networks, IEEE International Conference on
Dependable Systems and Networks (DSN), pp.
594-603, 2004.
[8] J. Deng, R. Han, and S. Mishra, Defending against
Path-based DoS Attacks in Wireless Sensor Networks,
ACM Workshop on Security of Ad-Hoc and Sensor
Networks (SASN'05) , Nov, 2005.
[9] K. Fall and K. Varadhan (editors), NS notes and
documentation, The VINT project, LBL, Feb 2000,
http://www.isi.edu/nsnam/ns/
[10] D. Ganesan, R. Govindan, S. Shenker, and D. Estrin,
Highly Resilient, Energy-Efficient Multipath Routing
in Wireless Sensor Networks, Computing and
Communications Review (MC2R) Vol 1., pp. 11-25,
2002.
[11] V. D. Gligor, Security of Emergent Properties in
Ad-Hoc Networks, International Workshop on Security
Protocols, Apr. 2004.
[12] O. Goldreich, S. Goldwasser, and S. Micali, How to
Construct Random Functions, Journal of the ACM,
Vol. 33, No. 4, 210-217, 1986
[13] L. Eschenauer and V. D. Gligor, A Key-Management
Scheme for Distributed Sensor Networks, 9th ACM
Conference on Computer and Communication
Security (CCS), pp. 41-47, Nov. 2002.
[14] C. Hartung, J. Balasalle, and R. Han, Node
Compromise in Sensor Networks: The Need for Secure
Systems, Technical Report CU-CS-990-05,
Department of Computer Science University of
Colorado at Boulder, Jan. 2005
[15] T. He, S. Krishnamurthy, J. A. Stankovic, T. F.
Abdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui,
and B. Krogh, An Energy-Efficient Surveillance
System Using Wireless Sensor Networks, ACM
MobiSys'04, June, 2004
[16] W.R. Heinzelman, J. Kulik, H. Balakrishnan, Adaptive
Protocols for Information Dissemination in Wireless
Sensor Networks, ACM MobiCom'99, pp. 174-185,
1999.
[17] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and
K. Pister, System Architecture Directions for
Networked Sensors, ACM ASPLOS IX, November
2000.
[18] X. Hong, M. Gerla, W. Hanbiao, and L. Clare, Load
Balanced, Energy-Aware Communications for Mars
Sensor Networks, IEEE Aerospace Conference, vol.3,
1109-1115, 2002.
[19] Y.-C. Hu, D. B. Johnson, and A. Perrig, SEAD:
Secure Efficient Distance Vector Routing for Mobile
Wireless Ad Hoc Networks, IEEE Workshop on Mobile
Computing Systems and Applications, pp. 3-13, Jun.
2002.
[20] Y.-C. Hu, A. Perrig, and D. B. Johnson, Efficient
Security Mechanisms for Routing Protocols, NDSS
2003, pp. 57-73, Feb. 2003.
[21] C. Intanagonwiwat, R. Govindan and D. Estrin,
Directed Diffusion: A Scalable and Robust
Communication Paradigm for Sensor Networks,
MobiCom'00, Aug. 2000.
[22] D. Johnson, D.A. Maltz, and J. Broch, The Dynamic
Source Routing Protocol for Mobile Ad Hoc Networks
(Internet-Draft), Mobile Ad-hoc Network (MANET)
Working Group, IETF, Oct. 1999.
[23] C. Karlof and D. Wagner, Secure Routing in Wireless
Sensor Networks: Attacks and Countermeasures, The
First IEEE International Workshop on Sensor Network
Protocols and Applications, pp. 113-127, May 2003
[24] C. Karlof, N. Sastry, and D. Wagner, TinySec: A Link
Layer Security Architecture for Wireless Sensor
Networks, ACM SensSys'04, pp. 162-175, Nov. 2004.
[25] I. Khalil, S. Bagchi, and C. Nina-Rotaru, DICAS:
Detection, Diagnosis and Isolation of Control Attacks
in Sensor Networks, IEEE SecureComm 2005, pp. 89-100, Sep. 2005
[26] Y. Liu and W. K.G. Seah, A Priority-Based
Multi-Path Routing Protocol for Sensor Networks,
15th IEEE International Symposium on Volume 1, 216
- 220, 2004
[27] S.-B. Lee and Y.-H. Choi, A Secure Alternate Path
Routing in Sensor Networks, Computer
Communications (2006),
doi:10.1016/j.comcom.2006.08.006.
[28] S. Marti, T.J. Giuli, K. Lai, and M. Baker, Mitigating
Routing Misbehavior in Mobile Ad Hoc Networks,
ACM/IEEE International Conference on Mobile
Computing and Networking, pp. 255-265, 2000
[29] H. Morcos, I. Matta, and A. Bestavros, M^2RC:
Multiplicative-Increase/Additive-Decrease Multipath
Routing Control for Wireless Sensor Networks, ACM
SIGBED Review, Vol. 2, Jan 2005.
[30] J. Newsome, E. Shi, D. Song, and A. Perrig, The Sybil
Attack in Sensor Networks: Analysis and Defenses,
IEEE IPSN'04, pp. 259-268, Apr. 2004.
[31] B. Parno, A. Perrig, and V. D. Gligor, Distributed
Detection of Node Replication Attacks in Sensor
Networks, the 2005 IEEE Symposium on Security and
Privacy, pp. 49-63, May 2005.
[32] A. Perrig, R. Szewczyk, V. Wen, D. Culler, and
J. Tygar, SPINS: Security Protocols for Sensor
Networks, ACM MobiCom'01, pp. 189-199, 2001.
[33] A. Perrig, J. Stankovic, and D. Wagner, Security in
Wireless Sensor Networks, Communications of the
ACM, 47(6), Special Issue on Wireless sensor
networks, pp.53- 57, Jun. 2004
[34] B. Przydatek, D. Song, and A. Perrig, SIA: Secure
Information Aggregation in Sensor Networks, 1st
International Conference on Embedded Networked
Sensor Systems, 255-256, 2003
[35] E. Shi and A. Perrig, Designing Secure Sensor
Networks, Wireless Communications, IEEE Volume
11, Issue 6, pp. 38-43, Dec. 2004.
[36] D. Tian and N.D. Georganas, Energy Efficient
Routing with Guaranteed Delivery in Wireless Sensor
Networks, IEEE Wireless Communications and
Networking (WCNC 2003), IEEE Volume 3, pp. 1923-1929, March 2003
[37] A. Woo, T. Tong, and D. Culler, Taming the
Underlying Challenges of Reliable Multihop Routing in
Sensor Networks, ACM SenSys03, Nov, 2003
[38] A. Wood and J. Stankovic, Denial of Service in Sensor
Networks, IEEE Computer, Vol.35, 54-62, Oct. 2002
[39] H.Yang, F. Ye, Y. Yuan, S. Lu and W. Arbough,
Toward Resilient Security in Wireless Sensor
Networks, ACM MobiHoc'05, 34-45, May 2005
[40] Y. Yang, X. Wang, S. Zhu, and G. Cao SDAP: A
Secure Hop-by-Hop Data Aggregation Protocol for
Sensor Networks, ACM MobiHoc'06 May 2006
[41] F. Ye, H. Luo, S. Lu and L. Zhang, Statictial En-route
Filtering of Injected False Data in Sensor Networks,
IEEE INFOCOM, 2004
[42] F. Ye, G. Zhong, S. Lu and L. Zhang, GRAdient
Broadcast: A Robust Data Delivery Protocol for Large
Scale Sensor Networks, ACM Wireless Networks
(WINET), March 2005
[43] Y. Yu, R. Govindan, and D. Estrin, Geographical and
Energy Aware Routing: a recursive data dissemination
protocol for wireless sensor networks, UCLA
Computer Science Department Technical Report
UCLA/CSD-TR-01-0023, May 2001.
[44] W. Zhang and G. Cao, Group Rekeying for Filtering
False Data in Sensor Networks: A Predistribution and
Local Collaboration-Based Approach, IEEE
INFOCOM'05. Vol. 1, 503-514, March 2005
[45] G. Zhou, T. He, S. Krishnamurthy, and J. A.
Stankovic, Impact of radio irregularity on wireless
sensor networks, the 2nd International Conference on
Mobile Systems, Applications, and Services
(MobiSys04), June, 2004
[46] S. Zhu, S. Setia, and S. Jajodia, LEAP: Efficient
Security Mechanisms for Large-Scale Distributed
Sensor Networks, The 10th ACM Conference on
Computer and Communications Security (CCS '03),
62-72, 2003
[47] S.Zhu, S. Setia, S. Jajodia, and P. Ning, An
Interleaved Hop-by-Hop Authentication Scheme for
Filtering False Data in Sensor Networks, IEEE
Symposium on Security and Privacy, 2004
| Neighbor Watch System;legitimate node;Reliable Delivery;Packet-dropping Attacks;aggregation protocols;malicious node;robustness;critical area;single-path forwarding;Sensor Network Security;cluster key;secure ad-hoc network routing protocol;Secure Routing;degree of multipath
180 | SIMULATING OPTION PRICES AND SENSITIVITIES BY HIGHER RANK LATTICE RULES | In this paper we introduce the intermediate rank or higher rank lattice rule for the general case when the number of quadrature points is n t m, where m is a composite integer, t is the rank of the rule, n is an integer such that (n, m) = 1. Our emphasis is the applications of higher rank lattice rules to a class of option pricing problems. The higher rank lattice rules are good candidates for applications to finance based on the following reasons: the higher rank lattice rule has better asymptotic convergence rate than the conventional good lattice rule does and searching higher rank lattice points is much faster than that of good lattice points for the same number of quadrature points; furthermore, numerical tests for application to option pricing problems showed that the higher rank lattice rules are not worse than the conventional good lattice rule on average. | Introduction
It is well known in scientific computation that Monte Carlo
(MC) simulation method is the main method to deal with
high dimensional ( 4) problems. The main drawback for
this method is that it converges slowly with convergence
rate O(
1
N
), where N is the number of points (or samples
or simulations), even after using various variance reduction
methods. To speed it up, researchers use quasi-random
or low-discrepancy point sets, instead of using pseudo-random
point sets. This is the so called quasi-Monte Carlo
(QMC) method.
There are two classes of low-discrepancy sequences
(LDS). The first one is constructive LDS, such as Halton's
sequence, Sobol's sequence, Faure's sequence, and Niederreiter's
(t, m, s)-nets and (t, s)-sequences. This kind of
LDS has convergence rate O((log N)^s / N), where s is the dimension
of the problem and N is, again, the number of points.
The second class is the integration lattice points, for example,
good lattice points (GLP). This type of LDS has
convergence rate O((log N)^{sα} / N^α), where α > 1 is a parameter
related to the smoothness of the integrand, and s and N are the
same as above. The monograph by Niederreiter [1] gives
very detailed information on constructive LDS and good
lattice points, while the monograph by Hua and Wang [2]
and Sloan and Joe [3] describe good lattice rules in detail.
Unlike the constructive sequences, the construction of
good lattice points is not constructive in the sense that they
could be found only by computer searches (except in the
2- dimensional case, where good lattice points can be constructed
by using the Fibonacci numbers). Such searches
are usually very time consuming, especially when the number
of points is large or the dimension is high, or both.
Therefore, to develop algorithms which can be used in finding
good lattice points fast is of practical importance.
This paper discusses the applications of the intermediate
rank or higher rank lattice rules (HRLR) to option
pricing problems. The motivations of using higher rank
lattice points are as follows. For a class of finance problems
, we found that using the randomized good lattice
points (GLP) can reach much better convergence than the
randomized constructive quasi-random sequences (such as,
Sobol sequence), let alone the pseudo-random point sets,
see [4] about this (in that paper, the lattice points were
taken from [2]). The theory given in Section 2 shows that
the error bound of a higher rank lattice rule is smaller than
that of a good lattice rule, at least asymptotically. And
searching higher rank lattice points is much faster than
searching good lattice points. Our extensive numerical results
confirmed this fact. Some results are listed in Section
2. Furthermore, the results in Section 3 showed that
the standard errors of the randomized higher rank lattice
points are smaller than those of the randomized good lattice
points (most of the times), which are much smaller than the
standard errors of the randomized Sobol sequence, when
these quasi-random point sets are applied to some financial
derivative pricing problems in simulating option values and
sensitivities.
Higher Rank Lattice Rules
Detailed information about lattice rules can be found in the
literature, such as [1], [2] and [3]. We start to introduce
lattice rules briefly by considering an integral
If = ∫_{C^s} f(x) dx,   (1)
where C^s = [0, 1]^s is the s-dimensional unit hypercube,
f(x) is one-periodic in each component of x, i.e.
f(x) = f(x + z), z ∈ Z^s (the set of s-dimensional
integer points), x ∈ R^s (the s-dimensional real space).
An s-dimensional integration lattice L is a discrete subset
of R^s that is closed under addition and subtraction and
contains Z^s as a subset. A lattice rule for (1) is a rule of the
form
Qf = (1/N) Σ_{j=0}^{N-1} f(x_j),   (2)
where {x_0, ..., x_{N-1}} = L ∩ U^s with U^s = [0, 1)^s, and N is
called the order of the rule.
Now we consider the intermediate rank or higher rank
lattice rules, i.e. rules of the form
Q_t f = (1/(n^t m)) Σ_{k_t=0}^{n-1} ··· Σ_{k_1=0}^{n-1} Σ_{j=0}^{m-1} f({ (j/m) g + (k_1/n) y_1 + ··· + (k_t/n) y_t })   (3)
for 1 ≤ t ≤ s, where (m, n) = 1 and g, y_1, ..., y_t ∈ Z^s.
Notice that t = 0, or t = 1 and n = 1, in (3) is just
the conventional good lattice points rule (we refer to it as
the rank-1 rule in this paper). Under some conditions (see, for
example, Theorem 7.1, [3]) on g, y_1, ..., y_t, the points in
(3) are distinct, so that Q_t is a lattice rule of order N =
n^t m, and it has rank t.
Korobov (1959) gave the first existence result for good lattice
points in the case where N is a prime number. Niederreiter
(1978) extended the existence to general N. Disney
and Sloan proved the existence and obtained the best
asymptotic convergence rate for general N in the good lattice
points case. The existence of good rank-t rules can be established,
but it is much more complicated. We introduce
Definition 1. For any integer N ≥ 2, let G = G(N) =
{g = (g_1, ..., g_s) ∈ Z^s, (g_j, N) = 1 and -N/2 < g_j ≤ N/2, 1 ≤ j ≤ s}.
Let y_1, ..., y_t ∈ Z^s be fixed. The mean
of P_α(Q_t) over G is
M^{(n)}_{α,t}(m) = (1/Card(G)) Σ_{g∈G} P_α(Q_t),   α > 1.   (4)
For the sake of simplicity, Sloan et al chose the special
form of y_j with all the components 0 except the jth, which
is 1 - the so-called copying rule. Thus (3) becomes
Q_t f = (1/(n^t m)) Σ_{k_t=0}^{n-1} ··· Σ_{k_1=0}^{n-1} Σ_{j=0}^{m-1} f({ (j/m) g + (k_1, ..., k_t, 0, ..., 0)/n }).   (5)
With this choice, P_α(Q_t) is easily calculated as follows.
For α > 1, 1 ≤ t ≤ s and n ≥ 2, define
f^{(n)}_{α,t}(x) = (∏_{j=1}^{t} F^{(n)}_α(x_j)) ∏_{k=t+1}^{s} F_α(x_k),   (6)
where
F^{(n)}_α(x) = 1 + (1/n^α) Σ_{h∈Z, h≠0} |h|^{-α} e(hx),
and
F_α(x) = 1 + Σ_{h∈Z, h≠0} |h|^{-α} e(hx).
If Q^{(n)}_t f is the m-point lattice rule defined by
Q^{(n)}_t f = (1/m) Σ_{j=0}^{m-1} f({ (jn/m) g_1, ..., (jn/m) g_t, (j/m) g_{t+1}, ..., (j/m) g_s }),   (7)
then
P_α(Q_t) = Q^{(n)}_t f^{(n)}_{α,t} - 1.   (8)
For 1 ≤ t ≤ s and g = (g_1, ..., g_s) ∈ G, denote
w = (n g_1, ..., n g_t, g_{t+1}, ..., g_s),
and
r_t(h) = (∏_{j=1}^{t} r(n h_j)) ∏_{k=t+1}^{s} r(h_k),
h = (h_1, ..., h_s). Then, applying the rank-1 lattice rule with
generating vector w, we have
P_α(Q_t) = Σ_{h·w ≡ 0 (mod m)} (r_t(h))^{-α} - 1.   (9)
The existence of good rank-t rules and the error
bounds for prime m were established by Joe and Sloan (Theorem
7.4, [3]). The corresponding results for general m
were discovered and proved in [5], and are stated below.
Theorem 1. For α > 1, 1 ≤ t ≤ s, n ≥ 1 an integer, and m > 0
any integer with (n, m) = 1, we have
M^{(n)}_{α,t}(m) = (1/m) Σ_{k=0}^{t} Σ_{l=0}^{s-t} C(t, k) C(s-t, l) ((2ζ(α))^{k+l} / n^{αk}) ∏_{p|m} F_{α,k+l}(p^τ) - 1,   (10)
where the product is over all prime factors p of m, p^τ is the
highest power of p dividing m, ∏_{p|m} F_{α,0}(p^τ) = m, and
for k ≥ 1, F_{α,k}(p^τ) is given by
F_{α,k}(p^τ) = 1 + (-1)^k (1 - 1/p^{α-1})^k (1 - 1/p^{(k-1)}) / [(p - 1)^{k-1} (1 - 1/p^{k-1})].   (11)
Remark: Using the Binomial Theorem, we can obtain
the result of Theorem 7.4 in [3] (the case when m is
prime) from Theorem 1, since the assumption that m is
prime and n is not a multiple of m implies that (n, m) = 1.
Moreover, the result of Theorem 1 also holds for n = 1
or t = 0. In either case, the right hand side of (10) is just
the case of rank-1 in [3], and if n ≥ 2 and t = s, then we
obtain the result of the maximal rank case in [3].
Now, as in the case of rank-1, we give an upper bound
for M^{(n)}_{α,t}(m) and hence P_α(Q_t).
Corollary 1. Under the conditions of Theorem 1, we have
M^{(n)}_{α,t}(m) ≤ (4ζ(α)^2 / m) [ C(s-t, 2) + (1/n^{2α}) C(t, 2) + (s-t)t / n^α ]
 + (1/m) { [a(1 + 2ζ(α))^{s-t} + b(1 - 2ζ(α))^{s-t}]
 + [a(1 + 2ζ(α)/n^α)^t + b(1 - 2ζ(α)/n^α)^t]
 + (1/n^{tα}) [a(1 + 2ζ(α))^s + b(1 - 2ζ(α))^s] },   (12)
where C(s-t, 2) = 1 for s - t < 2, C(t, 2) = 1 for t < 2, a =
ζ(3)/ζ(6) + 1/2 ≈ 1.68, and b = a - 1. Hence
M^{(n)}_{α,t}(m) = O((log log m) / m), as m → ∞.   (13)
Theorem 2. Let α(m) = (1 - s/log m)^{-1}. If m >
e^{s/(α-1)}, then there is a g ∈ G such that
P_α(Q_t) ≤ [M^{(n)}_{α(m),t}(m)]^{α/α(m)}.   (14)
If s ≥ 3, then
[M^{(n)}_{α(m),t}(m)]^{α/α(m)} ~ [ (1/n^t) (2e/s)^s (log m)^s / m ]^α   (15)
as m → ∞, where f(x) ~ h(x) as x → ∞ means
lim_{x→∞} f(x)/h(x) = 1.
It is hard to obtain a precise comparison result between
the mean for the case of t ≠ 0, i.e., M^{(n)}_{α,t}(m), and
the corresponding mean for the case of the t = 0 rule, i.e.,
M_α(n^t m), even when m is prime, as pointed out in [3].
Notice that the number of points for the rank-t rule is n^t m, and
we should use the same number of points when comparing
efficiency or convergence rate among different methods.
We give an approximate result in this direction based
on (15) and a result in [3] similar to (15).
Corollary 2. For α > 1, 1 ≤ t ≤ s, n ≥ 1 an integer,
and m > 0 any integer with (n, m) = 1, let α_1(m) =
(1 - s/log m)^{-1} and α_2(n^t m) = (1 - s/log(n^t m))^{-1}. If
m > e^{s/(α-1)}, then
[M^{(n)}_{α_1(m),t}(m)]^{α/α_1(m)} / [M_{α_2(n^t m)}(n^t m)]^{α/α_2(n^t m)}
~ ( log m / (log m + t log n) )^{sα} < 1.   (16)
From Corollary 2, we can roughly see that P_α(Q_t) <
P_α(Q_1) for t > 1 with the same order (number of points)
for both rules, at least asymptotically. Our numerical tests
showed that it is true even for small numbers of points.
Furthermore, higher rank good lattice points can also be
found by computer search via minimizing P_α(Q_t) based
on (8), but using m points instead of n^t m points (as in the case
of good lattice points). Therefore, searching higher rank
lattice points is much faster than searching rank-1 lattice
points.
Usually, the good lattice points were found by searching
Korobov type vectors g = (1, b, b^2, ..., b^{s-1}) mod m (componentwise)
with (b, m) = 1. Sloan and Reztsov proposed
a new searching algorithm - the component-by-component
method. We searched extensively for both types of points.
Based on the search results we found that these two types
of lattice points are comparable in both errors and searching
times for the same rank and the same number of points.
We only report the Korobov type lattice points here due to
limited space.
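As an illustration of the copying construction in (5) and of the search criterion, the following sketch generates a rank-t Korobov point set and evaluates P_2 directly over all n^t m points, using f_2(x) = ∏_k (1 + 2π^2 B_2(x_k)) with B_2(x) = x^2 - x + 1/6, which is valid for any lattice rule. The parameters in the example are taken from the 1024-point row of Table 1; the faster m-point evaluation of (7)-(8) is not used, and the printed values are not claimed to reproduce the table exactly.

```python
import numpy as np

def korobov_vector(b, m, s):
    # Korobov-type generating vector g = (1, b, b^2, ..., b^(s-1)) mod m
    return np.array([pow(b, j, m) for j in range(s)])

def rank_t_points(g, m, n, t):
    # Point set of the copying rule (5): {(j/m) g + (k_1,...,k_t,0,...,0)/n},
    # j = 0..m-1, k_i = 0..n-1, giving n^t * m points in [0,1)^s.
    s = len(g)
    pts = (np.arange(m)[:, None] * g[None, :] / m) % 1.0
    for i in range(t):
        shift = np.zeros((n, s))
        shift[:, i] = np.arange(n) / n
        pts = (pts[:, None, :] + shift[None, :, :]).reshape(-1, s) % 1.0
    return pts

def P2(points):
    # P_2 = Q f_2 - 1 with f_2(x) = prod_k (1 + 2*pi^2*B_2(x_k)), B_2(x) = x^2 - x + 1/6
    b2 = points * points - points + 1.0 / 6.0
    return np.prod(1.0 + 2.0 * np.pi ** 2 * b2, axis=1).mean() - 1.0

# Parameters from the 1024-point row of Table 1 (n = 2, dimension 5)
print(P2(rank_t_points(korobov_vector(189, 1024, 5), m=1024, n=2, t=0)))   # rank-1, b = 189
print(P2(rank_t_points(korobov_vector(5, 64, 5), m=64, n=2, t=4)))         # rank-4, b = 5
```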
So far as we know, the theory of the copying higher rank
lattice rule is valid under the assumption that (n, m) = 1.
We conjecture that this restriction can be relaxed. We have
been unable to prove this so far, but our numerical results
strongly support our conjecture; see Table 1 (only partial
results are listed). In this table, the comparison is based on
the same number of points, where Kor t0 stands for the Korobov
type lattice points with t = 0 (i.e., the rank-1 case), and similarly
for Kor t4. Time is measured in seconds. Whenever
the time is zero, it just means that the time used in searching
is less than 0.5 seconds. The CPU times used in searching
may be machine dependent. Dev-C++ was used as our
programming environment (run on a laptop under Windows).
In order to measure CPU time as precisely as possible,
all the programs were run on the same machine and
only one program was run at a time on the machine.
Our searching results showed that, within the same type of
lattice points, the higher the rank, the smaller the P_2, and
the faster the search. The search time for rank = 4 in the
case of 32768 points is about 1 second; those for all the other
cases are less than 0.5 seconds.
0.5 seconds.
Table 1
: Computer search results of t = 0 and t = 4, with
(n, m) = 1, n = 2, m = a power of 2, dimension = 5.
Kor t0
Kor t4
2
t
m
b
P
2
Time
b
P
2
1024
189
0.735
0
5
0.373
2048
453
0.264
1
27
0.164
4096
1595
0.121
3
21
0.067
8192
2099
0.048
10
61
0.026
16384
2959
0.018
43
35
0.010
32768
1975
0.007
169
131
0.004
Applications to Option Pricing
Under the Black-Scholes framework, many European options
can be expressed in terms of multivariate normal distributions
. Examples are options on maximum and minimum
of n assets, discrete lookback options, discrete shout
options, discrete partial barrier options, reset options, etc.,
see [6] and the references therein.
In this section, we apply both the Monte Carlo and
the quasi-Monte Carlo methods to an applied finance area
(option pricing), and compare the efficiencies among different
methods. For the quasi-Monte Carlo methods, we
use Sobol sequence and both rank-1 and higher rank lattice
points. The Sobol sequence is usually the best among the
constructive LDS based on our tests.
To compare the efficiencies of different methods, we
need a benchmark for fair comparisons. If the exact value
of the quantity to be estimated can be found, then we use
the absolute error or relative error for comparison. Otherwise
, we use the standard error (stderr) for comparison.
Here stderr = σ/√N, where σ^2 is the unbiased sample variance
and N is the sample size. For LDS sequences, we define
the standard error by introducing random shifts as follows.
Assume that we estimate θ = E[f(x)], where x
is an s-dimensional random vector. Let {x_i}_{i=1}^{m} ⊂ C^s
be a finite LDS sequence and {r_j}_{j=1}^{n} ⊂ C^s be a finite
sequence of random vectors. For each fixed j, we have a
sequence {y_i^{(j)}}_{i=1}^{m} with y_i^{(j)} = x_i + r_j. It can be shown
that such a sequence still has the same convergence rate
as the original one. Denote θ_j = (1/m) Σ_{i=1}^{m} f(y_i^{(j)}) and
θ̄ = (1/n) Σ_{j=1}^{n} θ_j. The unbiased sample variance is
σ^2 = Σ_{j=1}^{n} (θ_j - θ̄)^2 / (n-1) = [n Σ_{j=1}^{n} θ_j^2 - (Σ_{j=1}^{n} θ_j)^2] / ((n-1)n).
Then the standard error is defined by stderr = σ/√n. The efficiency of a
. The efficiency of a
QMC method (after randomization) over the MC method is
defined as the ratio of the standard error of the MC method
to the standard error of a QMC method (both methods have
the same number of points, otherwise the comparison is not
fair).
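The randomization and the standard-error/efficiency computation described above can be sketched as follows. The generating value b = 76 and the toy integrand are arbitrary choices for illustration; they are not searched lattice parameters and not the option-pricing integrand used in the tables below.

```python
import numpy as np

rng = np.random.default_rng(0)

def korobov_points(b, m, s):
    # Rank-1 Korobov lattice: x_j = {(j/m) * (1, b, b^2, ..., b^(s-1))}
    g = np.array([pow(b, k, m) for k in range(s)])
    return (np.arange(m)[:, None] * g[None, :] / m) % 1.0

def randomized_estimates(points, f, n_shifts):
    # One estimate theta_j per independent random shift r_j (shift modulo 1)
    thetas = []
    for _ in range(n_shifts):
        r = rng.random(points.shape[1])
        thetas.append(f((points + r) % 1.0).mean())
    return np.array(thetas)

def stderr(thetas):
    # Unbiased sample variance over the shifts, stderr = sigma / sqrt(n)
    n = len(thetas)
    return thetas.std(ddof=1) / np.sqrt(n)

# Toy integrand standing in for the option-pricing integrand (illustrative only)
f = lambda x: np.prod(1.0 + (x - 0.5), axis=1)

qmc = randomized_estimates(korobov_points(b=76, m=1021, s=6), f, n_shifts=10)
mc = randomized_estimates(rng.random((1021, 6)), f, n_shifts=10)   # MC with the same number of points
print("QMC estimate", qmc.mean(), "stderr", stderr(qmc))
print("efficiency of QMC over MC:", stderr(mc) / stderr(qmc))
```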
As an example, let us consider the computation of call
options on the maximum of s assets. Using the martingale method,
Dufresne et al. derived in [7] that the value of a call option
can be expressed in terms of multivariate normal distributions:
V = V^{max}_s({S_i}, {σ_i}, ρ_0, r, q, T)
  = Σ_{i=1}^{s} S_i e^{-q_i T} N_s(e_{i1}, ..., e_{i,i-1}, d^{(i)}_i(K, T), e_{i,i+1}, ..., e_{is}; ρ_i)
    - K e^{-rT} [1 - N_s(-d^Q_1(K, T), ..., -d^Q_s(K, T); ρ_0)]   (17)
where
e_{ik} = [log(S_i/S_k) + T σ^2_{ik}/2] / (σ_{ik} √T),
σ_{ik} = (σ^2_i - 2 ρ_{ik} σ_i σ_k + σ^2_k)^{1/2},
d^{(i)}_i(K, T) = [log(S_i/K) + (r + σ^2_i/2) T] / (σ_i √T),
d^Q_i(K, T) = [log(S_i/K) + (r - σ^2_i/2) T] / (σ_i √T),
ρ_0 = (ρ_{jk})_{s×s} and, for i = 1, ..., s, ρ_i = (ρ^{(i)}_{jk})_{s×s} with
ρ^{(i)}_{jk} = (σ^2_i + ρ_{jk} σ_j σ_k - ρ_{ij} σ_i σ_j - ρ_{ik} σ_i σ_k) / (σ_{ij} σ_{ik}),   j, k ≠ i;
ρ^{(i)}_{ik} = (σ_i - ρ_{ik} σ_k) / σ_{ik},   i ≠ k;
ρ^{(i)}_{ii} = 1.
Thus, in order to estimate the option values,
we need to estimate the following s-variate normal
distribution:
H(a, Σ) = (1/√(det(Σ)(2π)^s)) ∫_{-∞}^{a_1} ··· ∫_{-∞}^{a_s} exp(-(1/2) x^t Σ^{-1} x) dx,
where a = (a_1, a_2, ..., a_s), -∞ < a_i ≤ +∞ (i = 1, 2, ..., s), x ∈ R^s,
dx = dx_1 ... dx_s, and Σ = (ρ_{ij})_{s×s} is a positive definite correlation
matrix. Details about the computation of multivariate
normal distributions can be found in [8]. Notice
that after the transformation, the s-dimensional integral
for H(a, Σ) is transformed into an (s - 1)-dimensional integral.
For the numerical demonstration, we consider a call
option on the maximum of 6 stocks. In our simulations, each
method was randomly shifted, including the MC method,
so that each method has the same number of points. We
took the number of random shifts to be 10; the other parameters
are s = 6, K ∈ {$90, $100, $110}, r = 10%, S_i = $100,
σ_i = 0.2, ρ_{ij} = 0.5 for i ≠ j, i, j = 1, ..., 6. Besides the
option values, the option sensitivities or Greek letters
Δ_i = ∂V/∂S_i, Γ_{ij} = ∂^2 V/(∂S_i ∂S_j), V_i = ∂V/∂σ_i, Θ = ∂V/∂T and ρ = ∂V/∂r are
very important quantities in financial risk management and
trading. They are usually harder to obtain than the option
values themselves. The results for K = $100 are listed
in the following Tables 2, 3 and 4, and the results for
K = $90 and K = $110 are similar and are omitted here.
In these tables, column 1 contains the numbers of
points, the numbers in the MC column are the standard errors,
and those in the columns of the quasi-Monte Carlo methods are
the efficiencies of the corresponding methods over the Monte
Carlo method. Here we do not include the CPU times for
the different methods, since these programs were run on a
mainframe using a UNIX system and many other programs
were also running at the time I ran these programs, so I think
that the CPU times measured in this way are not precise.
Table 2: Comparison of estimated call option values and
efficiencies. The option value is $28.81 with standard error
1.5099e-06, obtained by the higher rank lattice rule (rank = 4)
using 2^14 = 16384 points with 10 random shifts. The standard
error is zero in my simulation by the same rule using
2^15 = 32768 points with 10 random shifts.
N        MC        Sobol     Kor t0       Kor t4
2^10     0.6832     10.8       219.9        200.5
2^11     0.4834     27.8       666.8       1194.9
2^12     0.3413     39.6      2104.7        371.8
2^13     0.2409     41.2     18129.2      22758.4
2^14     0.1703    110.2     30156.0     112804.8
2^15     0.1206    101.3    253540.7            *
From Table 2, we observe that the randomized lattice
rules achieve much better results than the randomized
Sobol sequence does; the latter is about 10 to 110 times
more efficient than the MC method. The randomized Korobov-type
higher rank lattice points beat the randomized
rank-1 lattice points, except when N = 2^10 = 1024 and
N = 2^12 = 4096.
Table 3: Comparison of estimated option sensitivity
(Greek letter Δ_1 in this table) values and efficiencies.
The value of Δ_1 is 0.1898 with standard error 4.4427E-09,
obtained by the higher rank lattice rule (rank = 4) using
2^15 = 32768 points with 10 random shifts.
N        MC        Sobol     Kor t0       Kor t4
2^10     0.0151      7.3       135.5         98.1
2^11     0.0107     10.7       330.8       1226.6
2^12     0.0077      9.9       900.7        110.7
2^13     0.0054     19.7      8861.8       9833.5
2^14     0.0038     30.7     24742.9      70692.9
2^15     0.0027     42.8    137025.0     605473.8
Again, the randomized lattice rules are much more
efficient than the randomized Sobol sequence; the latter
is about 8 to 43 times more efficient than the MC method.
Kor t4 is more efficient than Kor t0 except when N =
2^10 = 1024 and N = 2^12 = 4096.
Table 4: Comparison of estimated gamma (Γ_11 = ∂^2 V/∂S_1^2)
values and efficiencies. The value of Γ_11 is 0.01631 with
standard error 1.2418e-09, obtained by the higher rank lattice
rule (rank = 4) using 2^15 = 32768 points with 10 random
shifts.
N        MC        Sobol     Kor t0       Kor t4
2^10     4.6E-4      8.0       107.6         12.8
2^11     3.3E-4     14.0       118.6        223.7
2^12     2.3E-4     19.6       484.3         30.3
2^13     1.7E-4     22.6      4217.8       4457.5
2^14     1.2E-4     24.1      3108.7      28713.5
2^15     8.2E-5     40.0    108047.3      66385.4
The conclusion is similar to that of Table 3. The
randomized Kor t4 is more efficient than the randomized
Kor t0 except when N = 2^10 = 1024, 2^12 = 4096 and
2^15 = 32768.
In our simulations, the pseudo-random number generator
we used is ran2() in [9]. The periodizing function
used is φ(x) = (1/(2π)) (2πx - sin(2πx)).
In this paper, We introduced the higher rank lattice rules
and gave a general expression for the average of P
(Q
t
) for
higher rank lattice rule over a subset of Z
s
, an upper bound
and an asymptotic rate for higher rank lattice rule. The results
recovered the cases of good lattice rule and maximal
rank rule. Computer search results showed that P
2
s by the
higher rank lattice rule were smaller than those by good
lattice rule, while searching higher rank lattice points was
much faster than that of good lattice points for the same
number of quadrature points. Numerical tests for applications
to an option pricing problem showed that the higher
rank lattice rules (t > 0) usually beat the conventional good
lattice rule (t = 0 case). Both of these rules showed significant
superiority over the Sobol sequence. Our tests (not
listed here) on other types of options showed similar efficiency
gains of higher rank lattice rules over good lattice
rules, though the gains may vary.
Since searching higher rank lattice points is much
faster than that of rank - 1 lattice points (say the rank is
larger than 2); the search algorithm is simple; and the values
of P
2
for higher rank lattice points are smaller than that
for the rank - 1 points; furthermore, (standard) errors obtained
by higher rank lattice rules to practical problems are
not worse than those by the rank - 1 rules on average, the
higher rank lattice rules are good candidates for applications
. One unsolved problem in lattice rules (whether high
rank or not) is the periodizing seems not work well in high
dimensions. It needs futher exploration.
Acknowledgements
This research was partially supported by a Natural
Sciences and Engineering Research Council of Canada
(NSERC) grant.
References
[1] H. Niederreiter, Random Number Generation and
Quasi-Monte Carlo Methods, SIAM, Philadelphia,
1992.
[2] L. Hua and Y. Wang, Applications of Number Theory
in Numerical Analysis, Springer-Verlag, 1980.
[3] I. H. Sloan and S. Joe, Lattice Methods for Multiple
Integration, Oxford University Press, New York, 1994.
[4] P. Boyle, Y. Lai and K. S. Tan, Pricing Options Using
lattice rules, North American Actuarial Journal, 9(3),
2005, 50-76.
[5] Y. Lai, Monte Carlo and Quasi-Monte Carlo Methods
and Their Applications, Ph. D Dissertation, Department
of Mathematics, Claremont Graduate University,
California, USA, 2000.
[6] P. Zhang, Exotic Options, 2nd edition, World Scientific
, 1998.
[7] P.C. Dufresne, W. Keirstead and M. P. Ross, Pricing
Derivatives the Martingale Way, working paper, 1996.
[8] Y. Lai, Efficient Computations of Multivariate Normal
Distributions with Applications to Finance, working
paper, Department of Mathematics, Wilfrid Laurier
University, Waterloo, Ontario, Canada, 2005.
[9] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P.
Flannery, Numerical recipes in C: The Art of Scientific
Computing, Cambridge University Press, 1992.
| Monte Carlo and Quasi-Monte Carlo methods;Simulation of multivariate integrations;Lattice rules;Option Pricing
181 | SmartCrawl: A New Strategy for the Exploration of the Hidden Web | The way current search engines work leaves a large amount of information available in the World Wide Web outside their catalogues. This is due to the fact that crawlers work by following hyperlinks and a few other references and ignore HTML forms. In this paper, we propose a search engine prototype that can retrieve information behind HTML forms by automatically generating queries for them. We describe the architecture, some implementation details and an experiment that proves that the information is not in fact indexed by current search engines. | INTRODUCTION
The gigantic growth in content present in the World Wide
Web has turned search engines into fundamental tools when
the objective is searching for information. A study in 2000
[11] discovered that they are the most used source for finding
answers to questions, positioning themselves above books,
for example.
However, a great deal of relevant information is still hidden
from general-purpose search engines like AlltheWeb.com
or Google. This part of the Web, known as the Hidden Web
[7], the Invisible Web [6, 9] or the Deep Web [1] is growing
constantly, even more than the visible Web, to which we are
accustomed [6].
This happens because the crawler (the program that is
responsible for autonomous navigating the web, fetching
pages) used by current search engines cannot reach this information
.
There are many reasons for this to occur. The Internet's
own dynamics, for example, ends up making the index of
search engines obsolete, because even the quickest crawlers
manage to access each day only a small fraction of the
total information available on the Web.
The cost of interpretation of some types of files, as for example
Macromedia Flash animations, compressed files, and
programs (executable files) could be high, not compensating
for the indexing of the little, or frequently absent, textual
content. For this reason, that content is also not indexed
for the majority of search engines.
Dynamic pages also cause some problems for indexing.
There are no technical problems, since this type of page generates
ordinary HTML as responses for its requests. However
, they can cause some challenges for the crawlers, called
spider traps [8], which can cause, for example, the crawler to
visit the same page an infinite number of times. Therefore,
some search engines opt not to index this type of content.
Finally, there are some sites that store their content in
databases and utilize HTML forms as an access interface.
This is certainly the major barrier in the exploration of the
hidden Web and the problem that has fewer implemented
solutions. Nowadays none of the commercial search engines
that we use explore this content, which is called the Truly
Invisible Web [9].
Two fundamental reasons make crawling the hidden Web
a non-trivial task [7]. The first is the issue of scale: another
study shows that the hidden content is actually much greater
than what is currently publicly indexed [1]. In addition,
the interfaces that give access to this information, served through
HTML forms, are designed to be manipulated and filled in
by humans, which creates a huge problem for crawlers.
In this paper, we propose a search engine prototype called
SmartCrawl, which is capable of automatically obtaining
pages that are not currently retrievable by search engines
because they are hidden behind HTML forms.
The rest of this article is organised in the following manner.
Section 2 presents related work; next, we explain
the construction of HTML forms and how they can
be represented. In Sections 4 and 5, we describe the prototype.
In Section 6 the experimental results are highlighted
and, finally, in Section 7, we conclude the paper.
RELATED WORK
There are some proposals for the automatic exploration
of this hidden content. Lin and Chen's solution [6] aims to
build up a catalogue of small search engines located in sites
and, given the user's search terms, choose which ones are
more likely to answer them. Once the search engines are
chosen by a module called Search Engine Selector, the user
query is redirected by filling the text field of the form. The
system submits the keywords and waits for the results, which
are subsequently combined and sent to the user's interface.
The HiWE [7] is a different strategy which aims to test
combinations of values for the HTML forms at the moment
of the crawling (autonomous navigation), making the indexing
of the hidden pages possible. Once a form in an HTML
page is found, the crawler makes several filling attempts,
analyses and indexes the results of the obtained pages.
Moreover, the HiWE has a strategy to extract the labels
of HTML forms by rendering the page. This is very useful
to obtain information and classify forms and helps to fill in
its fields.
There are other approaches that focus on data extraction.
Lage et al. [4] claim to automatically generate agents
to collect hidden Web pages by filling HTML forms.
In addition to this, Liddle et al. [5] perform a more comprehensive
study about form submissions and results processing.
This study focuses on how valuable information can
be obtained from behind Web forms, but does not include a
crawler to fetch it.
EXTRACTING DATA FROM BEHIND THE FORM
HTML forms are frequently found on the web and are
generally used for filtering a large amount of information.
As shown in Figure 1, from a page with one form the user
can provide several pieces of data which will be passed on
to a process in the server, which generates the answer page.
The current crawlers do not fill in form fields with values,
making them the major barrier for exploration of the hidden
Web. In order to achieve this, it is vital to extract several
pieces of information from the form.
An HTML form can be built in different manners, including
various types of fields such as comboboxes, radio buttons,
checkboxes, text fields, hidden fields and so on. However,
the data sent to the server through the Common Gateway
Interface (CGI) is represented by properly encoded
(name, value) pairs. This way, we can characterise a form
which has n fields as a tuple:
F = {U, (N_1, V_1), (N_2, V_2), ..., (N_n, V_n)}   (1)
where U is the URL to which the data is submitted, and
(N_n, V_n) are the (name, value) pairs [5].
However, this is a simplification, since there is much
more information associated with HTML forms. An example
is the method by which the form data will be sent to the
server, that is, by HTTP GET or POST. Moreover, some
fields possess domain limitations (e.g.
text fields with a
maximum size, comboboxes).
Analysing the form and extracting the relevant information
is not an easy task, but the most difficult step
surely is to extract the fields' labels. This is because there is
generally no formal relationship between them in the
HTML code. For example, the label for a text field can be
placed above it, separated by a BR tag, it can be beside it,
or it can be inserted inside table cells.
All these pieces of data must be extracted in order to get
past HTML forms and fetch the results pages.
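A minimal sketch of such a form representation and of its submission via GET or POST is shown below; the class and field names, the example URL and the example values are illustrative assumptions, not SmartCrawl's actual code.

```python
from dataclasses import dataclass, field
from urllib.parse import urlencode
from urllib.request import Request, urlopen   # a real crawler would add error handling

@dataclass
class FormField:
    name: str
    value: str
    label: str = ""          # label extracted from the surrounding HTML, often heuristically

@dataclass
class HTMLForm:
    action_url: str          # U in equation (1)
    method: str = "GET"      # HTTP GET or POST
    fields: list = field(default_factory=list)   # the (N_i, V_i) pairs

    def submit(self):
        data = urlencode([(f.name, f.value) for f in self.fields])
        if self.method.upper() == "GET":
            req = Request(self.action_url + "?" + data)
        else:
            req = Request(self.action_url, data=data.encode())
        return urlopen(req).read()

# Hypothetical form mirroring Figure 1 (Media/Artist fields); submit() is not called here
form = HTMLForm("http://example.com/search", "GET",
                [FormField("media", "CD", "Media"), FormField("artist", "Kinks", "Artist")])
print(urlencode([(f.name, f.value) for f in form.fields]))   # media=CD&artist=Kinks
```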
THE SMARTCRAWL
The aim of SmartCrawl is to provide a strategy that allows a
more complete exploration of the hidden content. To achieve
this, it manages to generate values for a larger number of
forms.
Furthermore, it has an architecture very similar to that of current
commercial search engines, which means it permits an easier
implementation of strategies vastly used to gain performance
and scalability as presented by [10] or [2].
4.1
Execution of the Prototype
SmartCrawl is above all a search engine and, therefore,
contains all its essential components. The difference is in the
fact that each component has adaptations and some extra
features which enable them to explore the hidden content
of the Web. The main goal is to index only the pages that
potentially are in the non-explorable part of the Web.
To extract the content from behind these forms, SmartCrawl
generates values for its fields and submits them. These values
are chosen at two different moments: during indexing
and when a user performs a search.
In the indexing phase, once it finds a form, the SmartCrawl
extracts a set of pieces of information from it that allow
queries (combinations of possible values for the form) to be
created.
New queries are also generated when the user performs a
search. For this, the forms that are more likely to answer to
the search receive the supplied keywords. However, contrary
to the implementation of Lin and Chen [6] the obtained
results are also scheduled for indexing and not only returned
to the user interface.
The process of execution of SmartCrawl is constituted in
the following steps: (1) finding the forms, (2) generating
queries for them, (3) going to the results and (4) searching
created indexes.
4.1.1
Finding forms
The first step in the execution process of the SmartCrawl
is the creation of a number of crawlers that work in parallel
searching for pages that include HTML forms. Every page
found is then compressed and stored for further analysis. At
this moment, it acts like a common crawler, following only
links and references to frames.
The pages stored by the crawler are decompressed afterwards
by an indexing software component which extracts
pieces of information from each of the forms found and catalogues
them. Beyond this, every page is indexed and associated
with the forms. If the same form is found in distinct
pages, all of them are indexed. Nevertheless, there will be
only one representation of the form.
4.1.2
Generating queries for the forms
Another component of the indexing software is in charge of
generating values for the encountered forms. The generation
of queries is based on the collected information about the
form and its fields.
Figure 1: A form processing (a form with Media, Title and Artist
fields is filled in with "Kinks" and processed by the server into a
list of search results).
The first generated query is always the default one, that is, the one which uses all the values defined in the HTML code of the form. Next, a pre-defined number k of other possible combinations is generated. To generate values for the text fields (which have an infinite domain), a table that stores a list of values for each data category is consulted, based on the field label.
For every generated query, a further visit is scheduled for
the crawler. The parameters (set of field names and values)
are stored and a new item is added to the queue of URLs
that must be visited by crawlers.
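As a concrete illustration, the sketch below shows one way such a bounded set of queries could be assembled from a parsed form. It is only a simplified reading of the description above, not SmartCrawl's actual code: the tuple layout of form_fields, the category_values table and the function name are assumptions introduced here for illustration.

```python
import itertools

def generate_queries(form_fields, category_values, k=10):
    """Generate the default query plus up to k other value combinations.

    form_fields: list of (name, label, default_value, domain) tuples, where
    domain is a list of allowed values or None for free-text fields.
    category_values: maps a field label to a list of candidate values.
    """
    # The first query is always the default: the values defined in the HTML.
    default_query = {name: value for name, _label, value, _domain in form_fields}
    queries = [default_query]

    # Candidate values per field: finite domains are used as-is; text fields
    # are filled from the category table, looked up by the field's label.
    per_field = []
    for name, label, value, domain in form_fields:
        values = domain if domain else category_values.get(label, [value])
        per_field.append([(name, v) for v in values])

    # Enumerate a bounded number of additional combinations.
    for combo in itertools.islice(itertools.product(*per_field), k + 1):
        query = dict(combo)
        if query not in queries:
            queries.append(query)
    return queries[:k + 1]
```

Each resulting query (a set of field name/value pairs) would then be stored and scheduled as a new item in the queue of URLs, as described above.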
4.1.3
Visiting the results
The crawler is in charge of executing its second big goal, which is to submit the scheduled queries. To accomplish this it needs an extra feature: the capacity to send parameters in HTTP requests using both the GET and POST methods. If it perceives that an item in the queue of URLs is a query, it submits the parameters and analyses the HTML code obtained as a result in the same way that it does for other pages. The page is then compressed, stored and associated with the information of the original query.
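The submission step amounts to replaying the form's action URL with the stored parameters and the declared HTTP method. The sketch below illustrates this in Python using the third-party requests library; it is not part of the original prototype, and the function name and timeout value are assumptions made here for illustration.

```python
import requests

def submit_query(action_url, params, method="GET"):
    """Submit a scheduled query and return the resulting HTML page.

    action_url, params and method come from the stored description of the
    original form; the caller then compresses and stores the returned HTML.
    """
    if method.upper() == "POST":
        response = requests.post(action_url, data=params, timeout=30)
    else:
        response = requests.get(action_url, params=params, timeout=30)
    response.raise_for_status()
    return response.text
```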
The indexing software decompresses pages that contain results of form submissions and indexes them. From this index, the classification and search software finds results that contain all the search terms formulated by the user.
4.1.4
Searching for stored pages
As soon as the user performs a search, two steps are executed by the classification and search software. Firstly, the indexes created by the indexing software are consulted to find the keywords formulated by the user, and the results are returned in an organised form (the most relevant come first) in an HTML page.
Subsequently, based on the indexed pages which are associated with the forms, SmartCrawl selects the forms that are more likely to answer the user's search and generates new queries, which will afterwards be visited by the crawlers and indexed in the same way by the indexing software.
ARCHITECTURE AND IMPLEMENTATION
The high-level architecture of this application is divided into: crawler, indexing software, ranking and search software, and storage components.
As we have seen, the crawler is responsible for obtaining Web pages, submitting queries and storing the results. The indexing software, on the other hand, indexes the obtained pages and generates form queries. The ranking and search software uses the indexes to answer searches made by the user or to redirect queries to other forms, while the storage components take care of storing all the information used by the other components.
Figure 2 shows how the main components are arranged and how they interact with the storage components, which are represented by the rounded boxes. The main components are the Form Parser, Form Inquirer and Form Result Indexer of the indexing software, the Document Seeker of the ranking and search software, and the Crawler Downloader of the crawler.
Two storage components support the crawling: the URL Queue and the URL List. The first is responsible for storing the queue of URLs that the Crawler Downloaders need to visit; the URLs which serve as seeds for the autonomous crawling are also added to it. As soon as a Crawler Downloader extracts links from an HTML page, the new URLs are also inserted into this component for a later visit. The URL List, on the other hand, stores the URLs that have already been visited, allowing the Crawler Downloader to keep track of them.
When the page includes a form, or a result to a query,
it needs to be stored for subsequent indexing. The crawler
stores the compressed content in a storage component named
Warehouse, where it is given a number called storeId.
Two components of the indexing software are responsible
for decompressing these pages that have been stored in the
Warehouse. The first, called the Form Parser, extracts information
from all the forms contained within the page, and
sends them to the storage component Form List, where a
number, called formId, is associated with every form. The
Form Parser is also responsible for indexing the page which
contains forms and associating it to each formId of the forms
contained within.
To index these documents, SmartCrawl uses a technique called an inverted index, or inverted file. The Document List, Wordmatch and Lexicon are the three components that carry out the storage of an indexed page. The Document List stores the title, a brief description of each indexed page and its docId. The Lexicon and Wordmatch store the inverted index itself. The first contains a list of pairs (wordId, word), one for each of the words used in the indexed documents. The second contains a list of occurrences of the words in the indexed documents and their position (offset) in the text. Wordmatch is therefore formed by the values of docId, wordId and offset.
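The relationship between these three structures can be summarized by the following minimal in-memory sketch. It is only an illustration of the data layout just described, not SmartCrawl's storage components, and the class and method names are assumptions.

```python
from collections import defaultdict

class InvertedIndex:
    """Minimal in-memory analogue of the Document List, Lexicon and Wordmatch."""

    def __init__(self):
        self.documents = {}                 # docId -> (title, description)
        self.lexicon = {}                   # word  -> wordId
        self.wordmatch = defaultdict(list)  # wordId -> [(docId, offset), ...]

    def index(self, doc_id, title, description, text):
        self.documents[doc_id] = (title, description)
        for offset, word in enumerate(text.lower().split()):
            word_id = self.lexicon.setdefault(word, len(self.lexicon))
            self.wordmatch[word_id].append((doc_id, offset))

    def search(self, terms):
        """Return the docIds of documents containing all the search terms."""
        postings = []
        for term in terms:
            word_id = self.lexicon.get(term.lower())
            if word_id is None:
                return set()
            postings.append({doc for doc, _ in self.wordmatch[word_id]})
        return set.intersection(*postings) if postings else set()
```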
The Form Inquirer is the indexing software component whose objective is to generate queries for the forms stored in the Form List. To generate values for the text fields, the Form Inquirer consults the list of categories and values through the Categories component. Each generated query is sent to the Query List, where a queryId is associated with it and a new URL is added to the URL Queue.
Figure 2: Architecture at a high level (the Crawler Downloader, Form Parser, Form Inquirer, Form Result Indexer and Document Seeker interacting with the Warehouse, URL Queue, URL List, Query List, Form List, Categories, Wordmatch, Document List and Lexicon storage components)
The second component that extracts compressed pages from the Warehouse is the Form Result Indexer. Its job is much easier than that of the others, as its aim is only to index the pages that contain results of submitted queries and to associate the proper queryId with them.
From the indexes created and stored in the Lexicon and Wordmatch, the Document Seeker answers user searches. Every document stored in the Document List has a formId associated with it and, optionally, a queryId when the document is the response to a query. With these two numbers, the Document Seeker consults the Form List and the Query List to obtain the information necessary to locate an indexed page on the World Wide Web. For example, a query to a form that points to the URL http://search.cnn.com/cnn/search using the HTTP GET method can be represented by http://search.cnn.com/cnn/search?q=brazil, if it has only one parameter with name q and value brazil.
The Document Seeker should return the result set in an ordered form, so that the most relevant documents come first. A simple solution for this is to take into consideration only the position of the words in the text and
the number of occurrences. Considering $o_i$ as the offset of the i-th encountered word in the document which is amongst the user's search terms, the relevance is given by:

$r = \sum_{i=0}^{n} \frac{1000}{o_i + 1}$    (2)
In equation 2, the rank of a page is computed by summing, over all occurrences of the user's search terms in the document, a contribution that is inversely proportional to the occurrence's offset; the arbitrary constant (in this case 1000) only scales the values, so that pages whose matched terms appear earlier in the text are considered more relevant.
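A direct implementation of equation 2 is shown below; the offsets would come from the Wordmatch entries of the matched terms, and the function name is chosen here only for illustration.

```python
def relevance(offsets):
    """Rank a page from the offsets of the matched search terms (equation 2).

    Occurrences near the beginning of the document (small offsets) contribute
    more; 1000 is the arbitrary scaling constant used in the text.
    """
    return sum(1000.0 / (offset + 1) for offset in offsets)

# A document whose matched terms occur at offsets 0, 4 and 9 receives
# r = 1000/1 + 1000/5 + 1000/10 = 1300.0
```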
Another important role of the Document Seeker is to redirect the search terms supplied by the user to some of the several forms catalogued in the Form List. To accomplish this, it looks in the Document List for pages that contain search engines (forms with one, and only one, text field) and that contain in their text words related to the terms sought by the user. New queries are then added to the Query List and new URLs are added to the URL Queue to be visited by the Crawler Downloader.
5.1
Label extraction algorithm
A very important task is performed by a secondary component called the Form Extractor. It is in charge of extracting the diverse pieces of form information present in an HTML page.
To facilitate the content analysis of a page, the HTML code is converted into a DOM (Document Object Model) tree provided by the Cyberneko Html Parser [3]. From the DOM tree, the Form Extractor looks for nodes which represent forms and separates them from the rest of the tree. Each of these sub-trees, which encompasses all the tags positioned between the <form> and the </form> tags, is submitted for processing.
Amongst the data which should be obtained, the fields and their labels are undoubtedly the most challenging. Although the HTML specification includes a label tag for declaring a label, it is rarely used and, therefore, a formal declaration of labels in the HTML code cannot be relied upon.
The solution adopted was to establish a standard that the labels must follow. For the Form Extractor, labels are continuous segments of text which use the same format and have a maximum of n words and k characters. These values can be defined in the configuration file.
Figure 3: Representing the positions of the components of an HTML form: (a) HTML form example; (b) table representing the position of the elements of the form (cells holding the labels "Artist:", "Title:" and "Media:", the corresponding text fields and checkboxes, and the Submit/Reset buttons)
Figure 4: An example of an HTML form representation. The Form (Action: "Search.jsp", Request Method: POST) contains the fields: TextField (Label: "Artist:", Name: "artist", Value: ""), TextField (Label: "Title:", Name: "title", Value: ""), CheckboxField (Label: "Media:", Name: "media", Options: Option (Label: "CDs", Value: "cd"), Option (Label: "DVDs", Value: "dvd")) and SubmitButtonField (Name: "Submit", Value: "submit")
From the sub-tree which contains a given form's information, the Form Extractor generates a table which represents the positioning of the elements contained in it. Figure 3 shows an example of the table generated for a simple form. The table is generated by considering the nodes in the DOM tree which represent a common HTML table. If more than one table is defined in the HTML code, similar representations are created. Each cell holds a collection of the form's elements. The third step of the process is to extract the labels of each of the fields in the form (except for hidden fields and buttons) and generate an object-oriented representation of the form.
To extract the labels, the Form Extractor makes two passes over the table generated for the form. In the first pass, for each field in the form which requires a label, it checks exactly what exists on the left side of the field (including in an adjacent cell). If a label is found in this position, it is immediately associated with the field. In the second pass, the fields which still have no associated label are examined again, but this time the search for the label is done in the cell above.
For fields of the checkbox and combobox kind the treatment is special, because apart from the conventional label, the items which represent their domain also have labels, as in the case of the "DVDs" and "CDs" labels in Figure 3(a). The domain labels are extracted from the right, and the label of the set of items is obtained from the left. In the example, the checkboxes are grouped by their name, which in this case is "media." The labels of each item are extracted from the right ("CDs" and "DVDs") and the label of the set of checkboxes is extracted from the left of the first field ("Media:").
Figure 4 shows the object-oriented structure for the example above.
5.2
The list of categories and values
The Categories component is in charge of maintaining a list of categories and values which helps the Form Inquirer to generate values for the text fields. In order to guarantee better results, the name of the category is normalized before being compared to the field label. The normalization aims to: (1) remove punctuation (leaving just the words), (2) convert accented and other special characters into plain ones, and (3) remove stop words.
Stop words are words which can be removed from sentences without changing their meaning; they are widely used in normalization, and some search engines even exclude these words from their indexes. Most stop words are prepositions, articles and auxiliary verbs, such as "the", "of" or "is."
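The three normalization steps could be implemented roughly as follows. The stop-word list shown is only an illustrative subset, and the function name is an assumption; the prototype's actual word list and implementation are not given in the paper.

```python
import string
import unicodedata

STOP_WORDS = {"the", "of", "is", "a", "an", "to", "and"}  # illustrative subset

def normalize(label):
    """Normalize a label or category name before comparison.

    (1) remove punctuation, (2) map accented and other special characters to
    plain ones, and (3) remove stop words.
    """
    # (2) strip diacritics by decomposing characters and dropping the marks
    text = unicodedata.normalize("NFKD", label)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    # (1) remove punctuation, leaving just the words
    text = text.translate(str.maketrans("", "", string.punctuation))
    # (3) remove stop words
    words = [w for w in text.lower().split() if w not in STOP_WORDS]
    return " ".join(words)

# normalize("Title of the Book:") returns "title book"
```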
The list of categories is built automatically by the Form Parser. Once it finds a field with a finite domain (e.g. a combobox), the values are extracted and added to the category list associated with the field's label.
When the same value is added more than once to a category, it gets a higher priority in relation to the others. This is achieved by means of a number which represents the relevance of the value within the category. It is based on this number that the set of values is ordered before being passed on to the Form Inquirer. Therefore, the most relevant values are tested first in text fields.
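A minimal sketch of this priority mechanism is given below, reusing the normalize helper sketched earlier; the class and method names are assumptions and the real component persists this information rather than keeping it in memory.

```python
from collections import Counter, defaultdict

class CategoryList:
    """Stores candidate values per category with a simple relevance count."""

    def __init__(self):
        self.categories = defaultdict(Counter)

    def add(self, label, values):
        # A value added more than once accumulates a higher relevance number.
        self.categories[normalize(label)].update(values)

    def values_for(self, label):
        # Most relevant values come first, and are therefore tested first.
        return [value for value, _ in self.categories[normalize(label)].most_common()]
```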
5.3
Redirecting queries
As mentioned before, associated with each form there is a set of indexed pages on which it has been found. These pages allow the Document Seeker to choose the forms to which the user's search terms will be redirected.
In order to choose amongst several forms, two steps are taken: (1) finding which words or sequences of words are related to the terms of the user's search and (2) looking for these words on the pages which contain small search engines.
To solve the first problem, we could use the stored index itself. However, the volume of indexed information is not large enough to provide a good set of words. We therefore used the catalogue of a general-purpose search engine called Gigablast (http://www.gigablast.com/), since it implements data mining techniques and provides, for the search terms, a list of words or sentences which frequently appear in the returned documents.
From this set of words, the Document Seeker searches the pages which contain search engines for these terms, also ordering them according to equation 2. For the first n selected search engines, the Document Seeker creates new queries and adds them to the queue so that they can be submitted afterwards in the same way as done by the Form Inquirer. The queries are created by filling in the form's only text field with the user's search terms and the other fields with default values.
Figure 5: Interface for the search of indexed documents (showing the search form, the results area and the new queries area)
5.4
The searching interface
A search interface was built for the purpose of carrying out the tests. As shown in Figure 5, it is divided into three parts.
The search form allows the user to provide the terms of the search. Furthermore, it is possible to choose the target of the search: all the pages, only pages which contain forms, or pages containing the results of forms. The results are shown as a list of documents that contain all the search terms provided by the user. For pages that have a query associated with them, it is also possible to visualise the parameters. In the area called new queries, the generated queries which have been scheduled for the crawler's visit are displayed.
EXPERIMENTAL RESULTS
The tests which have been carried out aim to evaluate some strategies used in the implementation of the prototype, hence some important aspects of the system were tested separately. Besides that, an analysis was carried out of whether the indexed content is absent from current search engines.
In order to support the tests, we started up the crawlers and kept them running until we had 15 thousand indexed pages (including only pages with HTML forms and form results). It is worth repeating that our strategy does not index pages that do not offer any challenge to regular crawlers.
6.1
Label extraction algorithm evaluation
This phase aims to evaluate the algorithm used for the extraction of the labels from the form fields that was described in section 5.1. To do so, 100 forms were manually inspected and compared to the information extracted by the Form Extractor. For each of the fields it was verified whether the choice made by the algorithm was correct or not. Of the 100 forms evaluated, 5 of them (5%) were not extracted.
The reason for this is that the HTML was malformed and the API used for the extraction of the DOM tree (NekoHtml [3]) did not manage to recover from the error. Thus, 189 fields from the 95 remaining forms were verified. For 167 of them (88%), the algorithm extracted the label correctly, making mistakes in only 22 labels (see Figure 6).
Figure 6: Fraction of labels extracted correctly (88% correct, 12% incorrect)
Some labels were not extracted correctly because they did not fit within the restrictions defined in section 5.1. Another problem occurred when labels were not defined inside the FORM tag. In this case they were not present in the sub-tree analysed by the Form Extractor, making their extraction impossible.
Although our solution did not reach the label-extraction accuracy of HiWE [7], we showed that it is possible to get very close results without rendering the page (which consumes considerable computing resources). Moreover, many of the problems faced in this experiment can be fixed without much effort.
6.2
Relevance of the queries generated by the
Document Seeker
In order to analyse the results obtained by the queries newly generated from users' searches, 80 search queries were submitted to the prototype using arbitrary terms that are commonly used in general-purpose search engines, such as "World Cup" or "Music Lyrics".
For each list of queries generated by the Document Seeker, the first five were submitted and analysed manually, totalling 155 pages with results. For each page it was verified whether the query was successful or not. A successful query is one which has one or more results, in contrast to pages with no results or pages that were not considered a search engine results page (e.g. a mailing list registration form).
Figure 7: Utilization of the new queries generated by the Document Seeker (successful queries 66%, queries with no results 24%, errors 10%)
The result obtained was that 66% of the submitted queries brought some results back, 24% were not successful and 10% of the pages, for some reason, could not be retrieved. Figure 7 illustrates the obtained utilization.
Since the most relevant queries are returned first, it is probable that, with a larger number of indexed pages, we would get even better results.
6.3
Visibility of the indexed content
The implementation of SmartCrawl targets the pages generated from the filling in of HTML forms, which are potentially part of the hidden web. Although current commercial search engines do not have an automatic mechanism which fills in form fields and obtains these data, as SmartCrawl does, part of this information can still be reached through common links.
For instance, a page with the results of a form that uses the HTTP GET method can be accessed through an ordinary link because all its parameters can be passed in the URL string itself. In addition, since the content is stored in databases, it is not difficult to find pages that offer the same information through different interfaces.
This phase aimed to verify how much of the indexed content can also be accessed through Google. In order to do this, 300 pages with results of GET and POST forms were checked to see whether, for one of the reasons stated before, they had been indexed by this general-purpose search engine.
We found that 62% of these pages are not indexed by Google. When only queries which use the HTTP POST method (59% of the total) are considered, this number becomes even greater, leaving just 14% reachable by Google.
CONCLUSIONS AND FUTURE WORK
This work proposed a search engine prototype which is capable of handling HTML forms and filling them in automatically in order to obtain information which is unreachable by current search engines. When compared to other solutions, such as HiWE [7] and the query-redirecting solution of Lin and Chen [6], SmartCrawl brings a big differential, which is the ability to get past a greater number of forms. The mentioned solutions have severe restrictions which directly affect the number of forms that receive queries.
There is still a great deal of work to be added to the solution for a better exploration of the recovered content. One example is that SmartCrawl does not make any analysis of the pages obtained as results of the queries, and therefore indexes pages which contain errors and no results. The implementation of an algorithm which recognizes these pages would increase the quality of the indexed data.
Besides this, a high-performance structure was not used for the storage of the indexes. This resulted in slow searching and indexing. Future work will include the implementation of a new indexing module.
REFERENCES
[1] M. K. Bergman. The Deep Web: Surfacing Hidden
Value. 2001.
[2] S. Brin and L. Page. The anatomy of a large-scale
hypertextual Web search engine. Computer Networks
and ISDN Systems, 30(1-7):107-117, 1998.
[3] A. Clark. Cyberneko html parser, 2004.
http://www.apache.org/ andyc/.
[4] J. Lage, A. Silva, P. Golgher, and A. Laender.
Collecting hidden web pages for data extraction. In
Proceedings of the 4th ACM International Workshop
on Web Information and Data Management, 2002.
[5] S. Liddle, D. Embley, D. Scott, and S. H. Yau.
Extracting data behind web forms. In Proceedings of
the Workshop on Conceptual Modeling Approaches for
e-Business, pages 38-49, 2002.
[6] K.-I. Lin and H. Chen. Automatic information
discovery from the invisible web. In Proceedings of the
The International Conference on Information
Technology: Coding and Computing (ITCC'02), pages
332-337, 2002.
[7] S. Raghavan and H. Garcia-Molina. Crawling the
hidden web. In Proceedings of the 27th International
Conference on Very Large Databases, pages 129-138,
2001.
[8] A. Rappoport. Checklist for search robot crawling and
indexing, 2004.
http://www.searchtools.com/robots/robot-checklist.html.
[9] C. Sherman and G. Price. The Invisible Web:
Uncovering Information Sources Search Engines Can't
See. CyberAge Books, 2001.
[10] V. Shkapenyuk and T. Suel. Design and
implementation of a high-performance distributed web
crawler. In Proceedings of the 18th International
Conference on Data Engineering, pages 357-368.
[11] D. Sullivan. Internet Top Information Resource, Study
Finds, 2001.
15 | implementation;architecture;Label Extraction;experimentation;html form;SmartCrawl;web crawler;hidden web content;information retrieval;search engine;Search Engine;extraction algorithm;Hidden Web |
182 | Sparsha: A Comprehensive Indian Language Toolset for the Blind | Braille and audio feedback based systems have vastly improved the lives of the visually impaired across a wide majority of the globe. However, more than 13 million visually impaired people in the Indian sub-continent could not benefit much from such systems. This was primarily due to the difference in the technology required for Indian languages compared to those corresponding to other popular languages of the world. In this paper, we describe the Sparsha toolset. The contribution made by this research has enabled the visually impaired to read and write in Indian vernaculars with the help of a computer. | INTRODUCTION
The advent of computer systems has opened up many avenues for
the visually impaired. They have benefited immensely from
computer based systems like automatic text-to-Braille translation
systems and audio feedback based virtual environments. Automatic
text-to-Braille translation systems are widely available for languages
like English, French, Spanish, Portuguese, and Swedish [7, 26, 18,
16]. Similarly audio feedback based interfaces like screen readers
are available for English and other languages [ref c, 8, 20]. These
technologies have enabled the visually impaired to communicate
effectively with other sighted people and also harness the power of
the Internet.
However, most of these technologies remained unusable to the large
visually impaired population in the Indian sub-continent [17]. This
crisis can be attributed primarily to two reasons. First, the languages in this region differ widely from other popular languages
in the world, like English.
These languages or vernaculars also use relatively complex scripts
for writing. Hence, the technologies used for English and other such
languages cannot be easily extended to these languages. Secondly,
the development of these technologies for Indian languages, right
from scratch, is not trivial as the various Indian languages also differ
significantly amongst themselves.
The Sparsha toolset uses a number of innovative techniques to overcome the above-mentioned challenges and provides a unified framework for a large number of popular Indian languages. Each of the tools of Sparsha will be discussed in detail in the following sections. Apart from English, the languages supported by Sparsha include Hindi, Bengali, Assamese, Marathi, Gujarati, Oriya, Telugu and Kannada. The motivation for this work is to enable the visually impaired to read and write in all Indian languages. The toolset has been named Sparsha since the word "Sparsha" means "touch" in Hindi, something which is closely associated with how Braille is read.
BHARATI BRAILLE TRANSLITERATION
Bharati Braille is a standard for writing text in Indian languages
using the six dot format of Braille. It uses a single script to represent
all Indian languages. This is done by assigning the same Braille cell
to characters in different languages that are phonetically equivalent.
In other words, the same combination of dots in a cell may represent
different characters in each of the different Indian languages.
However, a single character in an Indian language may be
represented by more than one Braille cell.
The above-mentioned characteristics of the Bharati Braille code are illustrated in Figure 1. There are many other issues and rules related
to Bharati Braille. These will be discussed in the following sections
along with the methods used for implementing them.
Figure 1. Examples of characters in Indian languages and their
corresponding Bharati Braille representation
2.1 Transliteration to Bharati Braille
As shown in Figure 1, characters from different Indian languages can be mapped to the same Braille representation. Thus, in order to implement this, the system uses separate code tables for each of the languages and, depending on the user's choice of input language, the corresponding code table is used. This method of implementation also makes the system highly scalable and allows the inclusion of more languages in the future if required. For instance, this technique is being used successfully to extend the system to include Urdu and Sinhala. This work is expected to be completed in the near future.
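The per-language code-table idea can be illustrated by the following sketch. The dot patterns shown are placeholders, not the actual Bharati Braille assignments, and the table and function names are assumptions; the point is only that phonetically equivalent characters in different languages (here Devanagari and Bengali KA) look up the same cell, and that one character may map to more than one cell.

```python
# Braille cells are written as sets of raised-dot numbers (1-6). The dot
# patterns below are placeholders, not the actual Bharati Braille tables.
CODE_TABLES = {
    "hindi":   {"\u0915": [{1, 3}], "\u0916": [{4, 6}]},   # Devanagari KA, KHA
    "bengali": {"\u0995": [{1, 3}], "\u0996": [{4, 6}]},   # Bengali KA, KHA
}

def to_braille(text, language):
    """Translate text using the code table of the selected input language."""
    table = CODE_TABLES[language]
    cells = []
    for ch in text:
        cells.extend(table.get(ch, []))  # one character may map to several cells
    return cells
```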
Figure 2. Formation of Conjugates
Another important aspect of Indian languages is the formation of
consonant clusters or conjugates. In traditional hand written text this
may be expressed conceptually as the first consonant followed by a
special character called halanth which in turn is followed by the
second character. The consonant cluster may again be followed by a
vowel. However, the visual representation of such a consonant
cluster or conjugate may be quite different from the visual
representation of each of the individual consonants included in it, as
shown in Figure 2. In contrast, while translating the same text into Bharati Braille, the special character halanth must precede both of the consonants to be combined into a single conjugate.
The above constraints necessarily mean that the Braille translation
for a particular character also depends on the sequence of characters
preceding and following it.
Hence, in order to perform these tasks efficiently, the system uses a finite state machine based approach similar to that of lexical analyzers [3, 6]. This approach also proves to be suitable for handling other issues associated with standard Braille translation, such as the detection of opening and closing quotation marks and strings of uppercase characters.
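The reordering required for conjugates can be sketched as a simple lookahead over the input, shown below for Devanagari. This is only a minimal illustration of the rule stated above, not Sparsha's finite state machine: it handles a single consonant pair, ignores vowel signs attached to the cluster, and the function name is an assumption.

```python
HALANTH = "\u094d"  # Devanagari sign virama (halanth)

def reorder_conjugates(chars):
    """Move the halanth in front of the consonant pair it joins.

    In the text a conjugate is written C1 + halanth + C2, but in Bharati
    Braille the halanth sign must precede both consonants: halanth, C1, C2.
    """
    output, i = [], 0
    while i < len(chars):
        if i + 2 < len(chars) and chars[i + 1] == HALANTH:
            output.extend([HALANTH, chars[i], chars[i + 2]])
            i += 3
        else:
            output.append(chars[i])
            i += 1
    return output
```

The reordered character sequence would then be looked up cell by cell in the language's code table.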
Apart from Indian languages the Sparsha system supports the
translation of English language texts into grade 1 and grade 2
Braille. The system maintains a database of all standard Braille
contractions which is used for generating grade 2 Braille.
Furthermore, the system allows the user to add new contractions to
the existing database.
The Sparsha system also supports the proper translation of a document containing text in both English and an Indian language. According to standard Braille notation [11], the change in language is indicated through the proper use of the letter sign. However, a single document containing text in more than one Indian language cannot be translated into Braille in such a way that the reader is able to distinguish each of the languages correctly. The reason is that, as mentioned previously, the same Braille representation can refer to different characters in different Indian languages, which leads to an inherent ambiguity.
2.2 Reverse transliteration
The Sparsha system allows reverse transliteration of Braille to text both for Indian languages and for English. This allows the visually impaired to communicate seamlessly with sighted people. The Braille code to be translated may be entered into the computer using a standard six-key Braille keyboard. After the Braille code has been translated into text, the visually readable text may then be checked for correctness using a file reading system which will be described in a later section.
In order to achieve reverse translation from Braille to text, the
system uses a finite state machine based approach similar to that
used for translating text to Braille as described previously. The task
of reverse translation also uses the code tables corresponding to the
language to which the text is being translated. Thus the system can
easily be extended to other languages just by adding the
corresponding code tables to achieve both forward and
reverse translation.
2.3 Methods of Input - Output
Sparsha can accept English text, for translation, in the form of plain
text files, HTML (hyper text markup language) files and Microsoft
Word documents. Apart from English the Sparsha Braille translation
system, as described, can take input text in Indian languages. This
input can be given to the system in a number of forms as follows:
ISCII (Indian Script Code for Information Interchange)
[24] documents generated by applications like iLeap [10]
LP2 documents generated by iLeap [10]
Unicode text generated by any standard editor
supporting Unicode [25]. This technique will be
discussed in detail in a later section
The output of the Braille translation can be obtained on a large
variety of commercial Braille embossers [23]. The Sparsha system
has been tested on the following Braille embossers:
Index Basic-S
Index Basic-D
Index 4X4 PRO
Braillo 400
Modified Perkins Brailler [15]
Alternatively the output may be obtained on tactile Braille displays
[1].
BRAILLE MATHEMATICS
At the time of this development there existed a few translators for converting mathematics to Braille [b]. However, these were found to be unsuitable for the visually impaired in the Indian sub-continent for a number of reasons. Firstly, the Braille code used for mathematics in India is slightly different from those used in other parts of the world [4], although it bears a close resemblance to the Nemeth code [5]. Secondly, the interleaving of Braille mathematics with text in Indian languages was not possible with the available systems. Thirdly, many of these systems require a working knowledge of LaTeX [13], which cannot be expected from every user. Finally, most of these systems are unaffordable for the visually impaired in the Indian sub-continent.
The above-mentioned reasons warranted the development of a mathematics-to-Braille translation system for the Indian subcontinent. The system thus developed can translate almost all mathematical and scientific notations. It also allows the user to interleave mathematical and scientific expressions with text in both Indian languages and English.
In order to allow the user to write complex mathematical and scientific expressions, the system provides a special editor for the purpose. This editor is named the "Nemeth editor" after Abraham Nemeth [t]. Thus the user is exempted from the task of learning LaTeX. The editor provides a GUI (Graphical User Interface), as shown in Figure 4, for writing a mathematical or scientific expression in a form similar to that used by LaTeX. This string can then be readily converted into Braille by the translation engine. However, the mathematical expression formed by the editor must be enclosed within a pair of special character sequences. This needs to be done so that when the mathematical or scientific expression is embedded within other English or Indian language text, it is properly translated to Braille using the standard for mathematical and scientific notation.
Figure 4. Screenshot of the Nemeth Editor
The selection of mathematical symbols and notations is done by the user in a menu-driven fashion using the GUI. The set of all mathematical and scientific notations is partitioned into separate collections, each consisting of similar notations. Alternatively, the text may be entered by the user in a LaTeX-like format using any standard text editor.
SPARSHA CHITRA
Elementary tactile graphics is one of the best methods for
introducing certain subjects, like geometry, to visually impaired
students. However, such tactile graphics have remained outside the
reach of the common man. This is due to the fact that sophisticated
Braille embossers and expensive image conversion software are
necessary for the purpose. Sparsha Chitra aims to provide relatively
simple tactile graphics which can be obtained even by using low
cost Braille embossers like the modified Perkins Brailler [15]. In
other words, no assumptions have been made about any special features of
the Braille embosser being used. This allows tactile graphics to be
embossed using just the Braille embossing capability of the
embosser. The tactile graphics obtained from any image may be
viewed and edited before finally being embossed. The image may
also be scaled up or down to a size suitable for embossing. The
system also allows the image color to be inverted in order to
improve the contrast.
Sparsha Chitra takes its input in HTML format such that additional
text can be included along with the tactile representation of the
image. Sparsha is the feeling of touch and "Chitra" in Hindi means
"picture" and thus this tool is named Sparsha Chitra.
The primary limitation of this tool is that complex images cannot be
represented very clearly. However, the effect of this drawback is
mitigated by the fact that the amount of detail that can be observed
through touch is also limited. Furthermore the size of the tactile
image is restricted by the bounds imposed by the sheet on which it is
embossed. The functions for scaling the tactile image may prove to
be useful in such a case.
Figure 5. Screenshot of Sparsha Chitra
FILE READER
In order to enter text into the computer in English, a visually impaired user can take the help of any standard screen reader [12, 8, 20]. Screen readers have proved to be vital to visually impaired computer users [19]. Such screen readers are commercially available. However, such screen readers are not available for Indian languages. This was primarily due to the reasons mentioned at the beginning of this paper.
The file reader which will be described in this section will redeem
the situation and allow the user to type in text in Indian languages
using Microsoft Word. For performing other tasks related to the
operating system the user may use any of the standard screen
readers. The construction of such a file reader requires a number of
vital components [2]. These include text-to-speech engines for
Indian languages, fonts for Indian languages, keyboard layouts for
them, proper rendering engines and a text editor which can support
Indian languages. Each of these components will be described
briefly in the following sections. This will be followed by a
description of the overall architecture of the system and its
functioning.
5.2 Speech synthesis system
A speech synthesis system is vital for the functioning of any screen
reader. It is responsible for producing human voice rendition of the
text provided to it by the screen reader. In the case of screen readers, the speech synthesis system should be able to deliver the voice in real time. This is necessary for visually impaired users to get instantaneous audio feedback.
A multilingual screen reader necessarily needs a speech synthesis
system for each of the languages that it supports.
The file reader uses a speech synthesis engine for Indian languages called Shruti [22]. Shruti supports two popular Indian languages, namely Hindi and Bengali. It uses a method of diphone concatenation for speech synthesis. This allows the speech synthesis system to produce reasonable real-time performance while at the same time maintaining a low memory requirement.
5.3 Fonts and Rendering
There are a number of issues involved with Indian language fonts and their rendering. This is due to the fact that Indian language scripts are generally complex in nature. The Microsoft Windows system can be configured to correctly render these complex Indian language scripts. Correct rendering of fonts is achieved through the use of the Uniscribe (Unicode Script Processor) and OTLS (OpenType Layout Services) libraries [9, 14]. Furthermore, glyph substitution and glyph repositioning, as shown in Figure 2, are closely associated with the rendering of text in Indian languages. For this reason OpenType fonts have been found to be suitable for Indian languages, as they carry, within the font file, explicit information about glyph substitution and glyph positioning. This is maintained in the form of two tables, namely GSUB (Glyph Substitution) and GPOS (Glyph Positioning).
5.4
Editor for Indian Languages
A number of text editors are available for Indian languages. Many of these editors are difficult to use and non-intuitive. On the other hand, it has been observed that Microsoft Word XP (Word 2002) performs reasonably well for Indian languages when proper fonts and rendering engines are used. Thus, Microsoft Word has been used for the file reading application, as shown in Figure 7, instead of creating a new editor. Microsoft Word also provides certain additional features which have been used extensively for the development of the file reader. These features are discussed in detail in the following paragraphs. The use of Microsoft Word also motivates visually impaired users to switch to mainstream applications and eliminates the effort of learning another system.
Microsoft Word supports Unicode [25], hence it can accept text in
any Indian language. However, in order to enter text in an Indian
language in the Windows system a keyboard layout or IME (Input
Method Editor) [21] for that language is required. Keyboard layouts
are available for some popular Indian languages like Hindi. For other Indian languages one may have to be created. In our case a
keyboard layout had to be created for Bengali.
Figure 6. File reader System Architecture
117
The capabilities of Microsoft Word can be extended using COM
(Component Object Model) Add-Ins. Such Add-Ins are basically
programs that run within the framework provided by Microsoft
Word. The file reader has been developed in the form of such an
Add-In. It interacts closely with the editor to provide necessary
audio feedback for text in Indian languages. Such interaction takes
place through the object model exposed by Microsoft Word. The file
reader may be configured to start up every time Microsoft Word is
used.
5.5 Overall System Structure and Operation
The overall architectural structure of the file reader system is shown
in Figure 6. Most of the components of the system shown in the
figure have been discussed in the last few sections. The interaction
between the different components and how they operate as a system
will be discussed in this section.
Keyboard hooks are placed within the operating system by the file
reader Add-In. The keyboard hooks are responsible for trapping the
keystrokes entered by the user through the keyboard. A copy of the
entered keystrokes is passed to the file reader Add-In. The
keystrokes are then passed to the keyboard layout or IME which is
integrated with the operating system. The keyboard layout translates
the keystrokes into Unicode characters and passes them to the
editor.
Figure 7. Screenshot of the file reader in operation using
Microsoft Word
Meanwhile the file reader Add-In, on receiving the keystrokes, provides appropriate audio feedback by invoking the speech synthesis engine. In addition, certain combinations of keystrokes are recognized by the file reader Add-In as special commands. These request the file reader to read out certain portions of the text. The selected text is then passed to the speech synthesis system for producing a human voice rendition, thus providing an audio feedback based virtual environment for Indian languages. The file reader can be further extended to provide full screen reading functionality by using Microsoft Active Accessibility.
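The event flow just described can be summarized by the schematic sketch below. It is not the actual implementation, which relies on Windows keyboard hooks, the Word object model and COM; here the hook, IME, editor and Shruti engine are all represented by placeholder objects, and the command convention is purely illustrative.

```python
class FileReaderAddIn:
    """Schematic event flow of the file reader; all components are stand-ins."""

    def __init__(self, ime, speech_engine, editor):
        self.ime = ime              # stand-in for the keyboard layout / IME
        self.speech = speech_engine # stand-in for the Shruti engine
        self.editor = editor        # stand-in for the Word document object

    def on_keystroke(self, key):
        # The keyboard hook passes a copy of each keystroke to the add-in.
        if self.is_read_command(key):
            # Special key combinations request reading a portion of the text.
            self.speech.speak(self.editor.selected_text())
        else:
            # Ordinary keystrokes are echoed back as instantaneous feedback.
            self.speech.speak(self.ime.translate(key))
        # The keystroke itself continues on to the IME and then to the editor.

    @staticmethod
    def is_read_command(key):
        return key.startswith("ctrl+")  # illustrative convention only
```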
SYSTEM EVALUATION
A subset of the Sparsha system, known as the Bharati Braille Transliteration System, has been deployed by Webel Mediatronics Limited in a number of organizations for the visually impaired all over India as a part of a project sponsored by the Ministry of Communication and Information Technology, Government of India. As a result of these field tests the system underwent an iterative process of refinement to reach its current form. A plethora of requests and suggestions from visually impaired users led to the development and inclusion of a number of additional features and tools in the toolset. These include Sparsha Chitra and the file readers for Indian languages. The process of continuous feedback helped the Sparsha toolset mature over the years. It also helped in weeding out many bugs and shortcomings of the initial versions of the system.
6.2 Obtained Results
The Sparsha system is under a continuing process of use and
evaluation. This feedback is being used to make the system more
usable to the visually impaired and to enhance the features provided
by the system. Training and deployment of the system has also been
carried out at a number of premier organizations for the visually
impaired. These include
The National Association for the Blind, Delhi
Blind Peoples' Association, Ahmedabad
National Institute for the Visually Handicapped,
Dehradun
The Braille translation system has been tested on a large number of computers in these organizations. The typical performance characteristics of the Sparsha Braille translation system are shown in Figure 8. The performance characteristics have been measured for two different personal computer systems.
Figure 8. Graphs showing the computation time (in milliseconds) versus word count taken during Braille translation for (a) Grade II English (series: P4, P3) and (b) Grade I English and Indian languages (series: P4 - English, P4 - Indian Languages, P3 - English, P3 - Indian Languages).
The computers have the following specifications (processor type, primary memory, hard disk): Intel Pentium 4 3 GHz, 512 MB, 80 GB, referred to as P4; and Intel Pentium III 550 MHz, 256 MB, 40 GB, referred to as P3.
The Sparsha Chitra tool was tested by visually impaired users and the obtained results are given in Figure 9. This was done by handing them sheets of Braille paper with tactile diagrams created by Sparsha Chitra and asking them to guess the image on the sheets.
Figure 9. User response to tactile images generated by Sparsha Chitra (correct guess 40%, close guess 35%, cannot guess 20%, wrong guess 5%).
Most of these images were geometric figures. The majority of the guesses were correct, while a large percentage of the guesses were very close, such as identifying a rectangle as a square or a triangle as a mountain. Such misinterpretations often occur due to the lack of color information or to misjudging dimensions, which are indeed quite difficult to estimate from tactile representations.
The file reader tool needs extensive training before a naïve user can use it efficiently. Visually impaired users who are already familiar with Jaws or other screen readers can adapt to this system very quickly. This tool was primarily tested by visually impaired users having reasonable experience with Jaws.
Figure 10. Comparison of the typing speed (words per minute) of a visually impaired user using Jaws and the Indian language file reader.
The Indian language file reader could not be tested with a large number of users, since a good level of expertise with screen readers is required to use the file reader efficiently. The experimental results shown in Figures 10 and 11 pertain to a particular visually impaired user having some experience with Jaws. The experiments were carried out by dictating a paragraph of about a hundred words to the user while he typed it into the computer using the Indian language file reader. However, this is only a preliminary experiment. It was also found that both the typing speed and the error rate improved significantly with practice.
Figure 11. Number of words with errors for every ten words (no errors 10%, one error 30%, two errors 40%, more than two errors 20%).
LIMITATIONS AND FUTURE WORK
The Sparsha toolset is in the process of being extended to a number
of other languages in the Indian subcontinent. This includes Urdu
and Sinhala. The file reader in the Sparsha toolset is also limited by
the availability of text to speech synthesis engines for all Indian
languages. Hopefully these will be available in the near future and
allow the system to be extended to more languages. It is also
envisioned that the Sparsha system will be ported to mobile
handheld systems. This will enable the visually impaired to
communicate on the move. As of now, the required text to speech
synthesis engines have been ported onto the Microsoft Pocket PC
platform as well as on an ARM-Linux platform. It is only a matter of
time before the file reader becomes functional on such mobile
platforms.
CONCLUSION
The Sparsha system, named after the feeling of touch, has been the first attempt to help visually impaired users in the Indian subcontinent read and write in their native tongues. In this paper the various aspects of Indian languages and how they differ from other languages in the world have been explained. It has also been discussed how these issues have been tackled in the Sparsha system.
The paper describes in depth the various tools included in the Sparsha toolset. These tools form a comprehensive toolset for Indian languages. It is hoped that the Sparsha system will help increase the literacy rate among the 13 million visually impaired in the Indian subcontinent.
ACKNOWLEDGMENTS
The authors would like to thank Media Lab Asia for sponsoring a
part of the work related to the file reader. The authors would also
like to thank the National Association for the Blind, Delhi and many
other organizations for the blind for their sustained help and
cooperation during the entire development process. The authors owe
special thanks to Mr. Samit Patra, Director, Electrosoft Consultants
for his enormous help with many technical aspects of the work.
REFERENCES
[1] Basu Anupam, Roy S., Dutta P. and Banerjee S., "A PC
Based Multi-user Braille.Reading System for the Blind
Libraries", IEEE Transactions on Rehabilitation Engineering,
Vol. 6, No. 1, March 1998, pp.60--68
[2] Blenkhorn, P. "Requirements for Screen Access Software
using Synthetic Speech". Journal of Microcomputer
Applications, 16, 243-248, 1993.
[3] Blenkhorn Paul, "A System for Converting Braille to Print",
IEEE Transactions on Rehabilitation Engineering, Vol. 3, No.
2, June 1995, pp. 215-221
[4] Braille Mathematics Code for India Manual, Prepared under
the project "Adoption and Introduction of an Appropriate
Braille Mathematics Code for India", sponsored by UNICEF,
Published by National Institute for Visually Handicapped,
Dehra Dun and National Association for the Blind, Bombay,
India
[5] Cranmer T. V. and Abraham Nemeth, A Uniform Braille
Code, memo to the members of the BANA Board (January 15,
1991); Available at: http://www.nfb.org or
http://world.std.com/~iceb/
[6] Das, P.K.; Das, R.; Chaudhuri, A., "A computerised Braille
transcriptor for the visually handicapped". Engineering in
Medicine and Biology Society, 1995 and 14th Conference of
the Biomedical Engineering Society of India. An International
Meeting, Proceedings of the First Regional Conference, IEEE.
15-18 Feb. 1995 Page(s):3/7 - 3/8
[7] Duxbury Braille Translator, 2000,
http://www.duxburysystems.com/products.asp
[8] HAL. Dolphin Computer Access,
http://www.dolphinuk.co.uk/products/hal.htm
[9] Hudson, John for Microsoft Typography, "Windows Glyph
Processing : an Open Type Primer", November 2000,
http://www.microsoft.com/typography/glyph%20processing/i
ntro.mspx
[10] iLeap. Centre for Development of Advanced Computing.
http://www.cdacindia.com/html/gist/products/ileap.asp
[11] International Council on English Braille (ICEB), Unified
English Braille Code (UEBC) Research Project,
http://www.iceb.org/ubc.html
[12] JAWS for Window. Freedom Scientific.
http://www.freedomscientific.com/fs_products/software_jaws.
asp
[13] Lamport L. LaTeX - A Document Preparation System,
Addison-Wesley, 1985, ISBN 0-201-15790-X.
[14] Microsoft Typography, "Specifications : overview",
http://www.microsoft.com/typography/SpecificationsOvervie
w.mspx
[15] Modified Perkins Brailler, Webel Mediatronics Limited.
http://www.braille-aids.com/emboss.htm
[16] MONTY, VisuAide. http://www.visuaide.com/monty.html
[17] National Association for the Blind, India, 2002. Available at
http://www.nabindia.org/sited/infor06.htm
[18] NFBTRANS. National Federation of the Blind, 2004,
http://www.nfb.org/nfbtrans.htm
[19] Pennington C.A. and McCoy K.F., Providing Intelligent
Language Feedback or Augmentative Communication Users,
Springer-Verlag, 1998.
[20] Raman T.V. (1996). "Emacspeak a speech interface".
Proceedings of CHI96, April 1996
[21] Rolfe, Russ "What is an IME (Input Method Editor) and how
do I use it?" http://www.microsoft.com/globaldev/handson
[22] Shruti, Media Lab Asia Research Laboratory, Indian Institute
of Technology, Kharagpur.
http://www.mla.iitkgp.ernet.in/projects/shruti.html
[23] Taylor Anne, "Choosing your Braille Embosser", Braille
Monitor, October 200. Available at
http://www.nfb.org/bm/bm01/bm0110/bm011007.htm
[24] Technology Development for Indian Languages, Department
of Information Technology, Ministry of Communication &
Information Technology, Government of India. Available at
http://tdil.mit.gov.in/standards.htm
[25] Unicode. http://www.unicode.org
[26] WinBraille. Index Braille.
http://www.braille.se/downloads/winbraille.htm
| audio feedback;Indian languages;Braille;Visual impairment |
183 | StyleCam: Interactive Stylized 3D Navigation using Integrated Spatial & Temporal Controls | This paper describes StyleCam, an approach for authoring 3D viewing experiences that incorporate stylistic elements that are not available in typical 3D viewers. A key aspect of StyleCam is that it allows the author to significantly tailor what the user sees and when they see it. The resulting viewing experience can approach the visual richness and pacing of highly authored visual content such as television commercials or feature films. At the same time, StyleCam allows for a satisfying level of interactivity while avoiding the problems inherent in using unconstrained camera models. The main components of StyleCam are camera surfaces which spatially constrain the viewing camera; animation clips that allow for visually appealing transitions between different camera surfaces; and a simple, unified, interaction technique that permits the user to seamlessly and continuously move between spatial-control of the camera and temporal-control of the animated transitions. Further, the user's focus of attention is always kept on the content, and not on extraneous interface widgets. In addition to describing the conceptual model of StyleCam, its current implementation, and an example authored experience, we also present the results of an evaluation involving real users. | INTRODUCTION
Computer graphics has reached the stage where 3D models
can be created and rendered, often in real time on
commodity hardware, at a fidelity that is almost
indistinguishable from the real thing. As such, it should be
feasible at the consumer level to use 3D models rather than
2D images to represent or showcase various physical
artifacts. Indeed, as an example, many product
manufacturers' websites are beginning to supply not only
professionally produced 2D images of their products, but
also ways to view their products in 3D. Unfortunately, the
visual and interactive experience provided by these 3D viewers currently falls short of the slick, professionally
produced 2D images of the same items. For example, the
quality of 2D imagery in an automobile's sales brochure
typically provides a richer and more compelling
presentation of that automobile to the user than the
interactive 3D experiences provided on the manufacturer's
website. If these 3D viewers are to replace, or at the very least be at par with, the 2D imagery, eliminating this difference in quality is critical.
The reasons for the poor quality of these 3D viewers fall
roughly into two categories. First, 2D imagery is usually
produced by professional artists and photographers who are
skilled at using this well-established artform to convey
information, feelings, or experiences, whereas creators of
3D models do not necessarily have the same established
skills and are working in an evolving medium. However, this problem will work itself out as the medium matures.
The second issue is more troublesome. In creating 2D
images a photographer can carefully control most of the
elements that make up the shot including lighting and
viewpoint, in an attempt to ensure that a viewer receives the
intended message. In contrast, 3D viewers typically allow
the user to interactively move their viewpoint in the scene to view any part of the 3D model.
Figure 1. StyleCam authored elements
This results in a host of problems: a user may "get lost" in
the scene, view the model from awkward angles that
present it in poor light, miss seeing important features,
experience frustration at controlling their navigation, etc.
As such, given that the author of the 3D model does not
have control over all aspects of what the user eventually
sees, they cannot ensure that 3D viewing conveys the
intended messages. In the worst case, the problems in 3D viewing produce an experience completely opposite to the author's intentions!
The goal of our present research is to develop a system,
which we call StyleCam (Figure 1), where users viewing
3D models can be guaranteed a certain level of quality in
terms of their visual and interactive experience. Further, we
intend that the system should not only avoid the problems
suggested earlier, but also have the capability to make the
interactive experience adhere to particular visual styles. For
example, with StyleCam one should be able to produce an
interactive viewing experience for a 3D model of an
automobile "in the style of" the television commercial for
that same automobile. Ultimately, a high-level goal of our
research is to produce interactive 3D viewing experiences
where, to use an old saying from the film industry, "every
frame is a Rembrandt".
1.1. Author vs. User Control
Central to our research is differentiating between the
concept of authoring an interactive 3D experience versus
authoring a 3D model which the user subsequently views
using general controls. If we look at the case of a typical
3D viewer on the web, in terms of interaction, the original
author of the 3D scene is limited to providing somewhat
standard camera controls such as pan, tumble and zoom.
Essentially, control of the viewpoint is left up to the user
and the author has limited influence on the overall
experience.
From an author's perspective this is a significant
imbalance. If we view an interactive experience by
cinematic standards, an author (or director) of a movie has
control over several major elements: content/art direction,
shading/lighting, viewpoint, and pacing. It is these elements
that determine the overall visual style of a movie. However,
in the interactive experience provided by current 3D
viewers, by placing control of the viewpoint completely in
the hands of the user, the author has surrendered control of
two major elements of visual style: viewpoint and pacing.
Thus we desire a method for creating 3D interactive
experiences where an author can not only determine the
content and shading but also the viewpoints and pacing.
However, intrinsic in any interactive system is some degree
of user control and therefore, more accurately, our desire is
to allow the author to have methods to significantly
influence the viewpoints and pacing in order to create
particular visual styles. Thus, we hope to strike a better
balance between author and user control. In order to
achieve this end, StyleCam incorporates an innovative
interaction technique that seamlessly integrates spatial
camera control with the temporal control of animation
playback.
CONCEPTUAL MODEL
In order to provide author control or influence over
viewpoints and pacing, we need a way for an author to
express the viewpoints and the types of pacing they are
interested in. Thus we have developed three main elements
upon which our StyleCam approach is based.
1. Camera surfaces: an author-created surface used to
constrain the users' movement of the viewpoint.
2. Animation clips: an author-created set of visual
sequences and effects whose playback may be
controlled by the user. These can include:
- sophisticated camera movements.
- Slates: 2D media such as images, movies,
documents, or web pages.
- visual effects such as fades, wipes, and edits.
- animation of elements in the scene.
3. Unified UI technique: the user utilizes a single
method of interaction (dragging) to control the
viewpoint, animation clips, and the transitions between
camera surfaces.
2.1. Camera Surfaces
In the motion picture industry a money-shot is a shot with a
particular viewpoint that a director has deemed "important"
in portraying a story or in setting the visual style of a
movie. Similarly, in advertising, money-shots are those
which are the most effective in conveying the intended
message. We borrow this concept of the money-shot for
our StyleCam system. Our money-shots are viewpoints that
an author can use to broadly determine what a user will see.
Further, we use the concept of a camera surface as
introduced by Hanson and Wernert [19, 36]. When on a
camera surface, the virtual camera's spatial movement is
constrained to that surface. Further, each camera surface is
defined such that it incorporates a single money-shot.
Figure 2 illustrates this notion.
Camera surfaces can be used for various purposes. A small
camera surface can be thought of as an enhanced money-shot
where the user is allowed to move their viewpoint a bit
in order to get a sense of the 3-dimensionality of what they
are looking at. Alternatively, the shape of the surface could
be used to provide some dramatic camera movements, for
example, sweeping across the front grill of a car. The key
idea is that camera surfaces allow authors to conceptualize,
visualize, and express particular ranges of viewpoints they
deem important.
Intrinsic in our authored interactions is the notion that
multiple camera surfaces can be used to capture multiple
money-shots. Thus authors have the ability to influence a
user's viewpoint broadly, by adding different camera
surfaces, or locally by adjusting the shape of a camera
surface to allow a user to navigate through a range of
viewpoints which are similar to a single particular money-shot
. For example, as shown in Figure 2, camera surfaces at
the front and rear of the car provide two authored
viewpoints of these parts of the car in which a user can
"move around a bit" to get a better sense of the shape of the
front grille and rear tail design.
Figure 2. Camera surfaces. The active camera is at the
money-shot viewpoint on the first camera surface.
The rate at which a user moves around on a camera surface
(Control-Display gain) can dramatically affect the style of
the experience. In order to allow an author some control
over visual pacing, we provide the author with the ability to
control the rate at which dragging the mouse changes the
camera position as it moves across a camera surface. The
intention is that increasing/decreasing this gain ratio results
in slower/faster camera movement and this will influence
how fast a user moves in the scene, which contributes to a
sense of pacing and visual style. For example, if small
mouse movements cause large changes in viewpoint this
may produce a feeling of fast action while large mouse
movement and slow changes in movement produce a slow,
flowing quality. Figure 3 illustrates an example of variable
control-display gain, where the gain increases as the camera
gets closer to the right edge of the camera surface.
Figure 3. Variable control-display gain on a camera surface
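To make these ideas concrete, the following is a minimal sketch (hypothetical names, not the actual StyleCam implementation) of how a camera surface, its money-shot, and a position-dependent control-display gain might be represented; positions on the surface are expressed in normalized (u, v) coordinates:

# Hypothetical sketch of a camera surface with a money-shot and a
# position-dependent control-display gain. Positions on the surface
# are expressed in normalized (u, v) coordinates in [0, 1].
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MoneyShot:
    position: Vec3        # camera position of the authored viewpoint
    orientation: Vec3     # camera orientation (e.g. Euler angles)
    focal_length: float

@dataclass
class CameraSurface:
    money_shot: MoneyShot           # the single money-shot on this surface
    look_at: Optional[Vec3]         # optional look-at point
    # gain(u, v) returns the control-display gain at a surface location,
    # letting the author locally slow down or speed up camera movement
    gain: Callable[[float, float], float] = lambda u, v: 1.0

def apply_drag(u: float, v: float, dx: float, dy: float,
               surface: CameraSurface) -> Tuple[float, float]:
    # Map a mouse displacement to a new (u, v) position using the
    # author-specified gain at the current location.
    c = surface.gain(u, v)
    return u + c * dx, v + c * dy

# Example: gain that increases toward the right edge of the surface,
# as in Figure 3, so equal drags move the camera further near that edge.
edge_weighted_gain = lambda u, v: 0.5 + 1.5 * u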
2.2. Animation Clips
To support transitions between two camera surfaces, we use
animation clips as illustrated in Figure 4. An animation clip
can be thought of as a "path" between the edges of camera
surfaces. When a user navigates to the edge of a camera
surface, this triggers an animation. When the animation
ends, they resume navigating at the destination camera
surface. One obvious type of animation between the camera
surfaces would simply be an automatic interpolation of the
camera moving from its start location on the first camera
surface to its end location on the second camera surface
(Figure 4a). This is similar to what systems such as VRML
do. While our system supports these automatic interpolated
animations, we also allow for authored, stylized,
animations. These authored animations can be any visual
sequence and pacing, and are therefore opportunities for
introducing visual style. For example, in transitioning from
one side of the car to the other, the author may create a
stylized camera animation which pans across the front of
the car, while closing in on a styling detail like a front grille
emblem (Figure 4b).
The generality of using animation clips allows the author
the stylistic freedom of completely abandoning the camera-movement
metaphor for transitions between surfaces and
expressing other types of visual sequences. Thus animation
clips are effective mechanisms for introducing slates -- 2D
visuals which are not part of the 3D scene but are
momentarily placed in front of the viewing camera as it
moves from one camera surface to another (Figure 4c). For
example, moving from a view of the front of the car to the
back of the car may be accomplished using a 2D image
showing the name of the car. This mechanism allows the
use of visual elements commonly found in advertising such
as real action video clips and rich 2D imagery. In the
computer realm, slates may also contain elements such as
documents or webpages.
Figure 4. Three example animated transitions between
camera surfaces. (a) automatic transition, (b) authored
stylized transition, (c) slate transition.
The use of animation clips also allows for typical visual
transition effects such as cross fades, wipes, etc.
In addition to using animation clips for transitions between
camera surfaces, StyleCam also supports the animation of
elements in the 3D scene. These scene element animations
can occur separately or concurrently with transition
animations. For example, while the animation clip for the
visual transition may have the camera sweeping down the
side of the car, an auxiliary animation may open the trunk
to reveal cargo space.
The animation of scene elements can also be used to effect
extremely broad changes. For example, entire scene
transitions (similar to level changes in video games) may
occur when a user hits the edge of a particular camera
surface.
At the author's discretion, temporal control of animation
clips can either be under user control or uninterruptable.
Overall, in terms of visual expression, these varying types
of animation clips allow an author to provide rich visual
experiences and therefore significantly influence the pacing
and style of a user's interaction.
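As a rough illustration of the authored elements described above (a sketch with hypothetical names, not the shipped plugin's data model), a transition clip connecting two camera surfaces might record its type, endpoints, pacing, and whether the user may scrub it:

# Hypothetical description of an authored transition between two
# camera surfaces. The clip may be an automatic interpolation, an
# authored camera animation, or a slate (2D media) sequence, and the
# author decides whether its timing is user-controllable.
from dataclasses import dataclass
from enum import Enum, auto

class ClipType(Enum):
    AUTOMATIC = auto()   # system-interpolated camera move (Figure 4a)
    AUTHORED = auto()    # stylized, pre-authored camera animation (Figure 4b)
    SLATE = auto()       # 2D slate placed in front of the camera (Figure 4c)

@dataclass
class AnimationClip:
    clip_type: ClipType
    source_surface: str         # surface whose edge triggers the clip
    destination_surface: str    # surface where spatial navigation resumes
    duration: float             # authored pacing, in seconds
    user_scrubbable: bool       # temporal control vs. uninterruptable playback
    auxiliary_clips: tuple = () # concurrent scene-element animations (e.g. opening the trunk)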
2.3. Unified User Interaction Technique
While animation clips are effective for providing a means
to move between camera surfaces and introduce visual
styling elements, they also highlight the fundamental issue
of arbitrating between user control and system control. At
the heart of our system are two distinct types of behavior:
1) user control of the viewpoint, and 2) playback of
animation clips. In other systems these two types of
behavior are treated as distinct interactions. Specifically,
the user must stop dragging the camera viewpoint, then
click on something in the interface to trigger the animation,
dividing their attention and interrupting the visual flow. In
our system we wanted to use animations as a seamless way
of facilitating movement between camera surfaces. Thus we
needed a mechanism for engaging these animations that did
not require an explicit mouse click to trigger animation.
Ideally we wanted to leave the user with the impression that
they "dragged" from one camera surface to another even
though the transition between the surfaces was
implemented as an authored animation.
These two behaviors are fundamentally different in that
viewpoint control is spatial navigation and animation
control is temporal navigation. From a user interaction
standpoint, spatial behavior can be thought of as "dragging
the camera" while temporal control is "dragging a time
slider" or "scrubbing". Given this we required an
interaction model which allowed these two types of drags
to be combined together in a way that was well defined,
controllable, and corresponded to user's expectations.
Figure 5, which uses the finite-state-machine model to
describe interaction as introduced by [5, 26], shows the
interaction model we developed. The key feature of this
model is the ability to transition back and forth from spatial
to temporal control during a contiguous drag. As a user
drags the camera across a camera surface (State 1, Spatial
Navigation) and hits the edge of the surface, a transition is
made to dragging an invisible time slider (State 2,
Temporal Navigation). As the user continues to drag, the
drag controls the location in the animation clip, assuming
that the author has specified the clip to be under user
control. Upon reaching the end of the animation, a
transition is made back to dragging the camera, however,
on a different, destination camera surface (State 1).
Figure 5. StyleCam interaction model. (States: 0 Tracking; 1 Dragging
in Space, i.e. spatial navigation; 2 Dragging in Time, i.e. temporal
navigation; 3 Tracking during Automatic Playback. Transitions: button
down/up, enter/exit surface, clip finished, stop playback.)
The interaction model also handles a variety of reasonable
variations on this type of dragging behavior. A user may
stop moving when dragging an animation clip, thus pausing
the animation. If, however, when in State 2 the user
releases the mouse button during a drag, automatic
playback is invoked to carry the user to the next camera
surface (State 3). Should the user press the mouse button
during this automatic playback, playback is stopped and
temporal control by the user is resumed (return to State 2).
We found in practice that this interaction design enhanced
the user's feeling of being in control throughout the entire
experience.
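A minimal sketch of this interaction model follows, with state names taken from Figure 5 (Tracking, Dragging in Space, Dragging in Time, Tracking during Automatic Playback); the event handling is simplified and the names are hypothetical:

# Sketch of the StyleCam interaction model (Figure 5). States:
# 0 Tracking, 1 Dragging in Space, 2 Dragging in Time,
# 3 Tracking during Automatic Playback.
TRACKING, DRAG_SPACE, DRAG_TIME, AUTO_PLAYBACK = 0, 1, 2, 3

class InteractionModel:
    def __init__(self):
        self.state = TRACKING

    def on_button_down(self):
        if self.state == TRACKING:
            self.state = DRAG_SPACE        # start spatial navigation
        elif self.state == AUTO_PLAYBACK:
            self.state = DRAG_TIME         # stop playback, resume scrubbing

    def on_button_up(self):
        if self.state == DRAG_SPACE:
            self.state = TRACKING
        elif self.state == DRAG_TIME:
            self.state = AUTO_PLAYBACK     # carry the user to the next surface

    def on_exit_surface(self):
        if self.state == DRAG_SPACE:
            self.state = DRAG_TIME         # drag now scrubs the transition clip

    def on_enter_surface(self):
        if self.state == DRAG_TIME:
            self.state = DRAG_SPACE        # resume on the destination surface

    def on_clip_finished(self):
        if self.state == AUTO_PLAYBACK:
            self.state = TRACKING          # arrived at the destination surface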
DESIGN RATIONALE
At first glance, it may appear that the incorporation of
animation clips into StyleCam unnecessarily complicates its
authoring and use. After all, without animated transitions,
we would not have had to develop an interaction technique
that blended between spatial and temporal control. Indeed,
when we first began our research, our hope was to create a
system that simply involved spatial control of a constrained
camera.
Our first variation used a single camera surface that
surrounded the 3D object of interest. The camera was
constrained to remain normal to this single camera surface
at all times. While this gave the author more control than
using a simple unconstrained camera, we found that it was
difficult to author a single camera surface that encompassed
all the desirable viewpoints and interesting transitions
between those viewpoints. In order to guarantee desirable
viewpoints, we introduced the concept of money-shots that
were placed on the single camera surface. The parameters
of the camera were then determined based on its location on
the camera surface and a weighted average of the
surrounding money-shots. At this point, it was still difficult
to author what the user would see when not directly on a
money-shot. In other words, while money-shots worked
well, the transitions between them worked poorly.
To address this problem of unsatisfactory transitions, we
first replaced the concept of a single global camera surface
with separate local camera surfaces for each money-shot.
Then, to define transitions between these local camera
surfaces, we introduced the idea of animating the camera.
This led to the use of the three types of animation clips as
described earlier. Simply playing back the animation clips
between camera surfaces gave users the sense that they lost
control during this period. To maintain the feeling of
continuous control throughout, we developed our integrated
spatial-temporal interaction technique.
AN EXAMPLE EXPERIENCE
We illustrate how StyleCam operates with an example.
Figure 6 illustrates the system components and how they
react to user input, as well as screen shots of what the user
actually sees. The user starts by dragging on a camera
surface (position A). The path A-B shows the camera being
dragged on the surface (spatial navigation). At B, the user
reaches the edge of the camera surface and this launches an
animation that will transition the user from B to E. The zigzag
path from B to D indicates that the user is scrubbing
time on the animation (temporal navigation). Position C
simply illustrates an intermediate point in the animation
that gets seen three times during the interaction. At position
D, the user releases the mouse button, whereupon the
system automatically completes playing back the remainder
of the animation at the authored pacing. At position E, the
user enters another camera surface and resumes spatial
navigation of the camera as shown by path E-F. When the
user exits this camera surface at position F, another
animation is launched that will transition the user to
position J. Since the user releases the mouse button at
position F, the animation from F to J is played back at the
authored pacing. Since this animation is a slate animation,
the intermediate shots at positions G, H, and I along the
path F to J are of slates containing information on the car
fading in and out as the camera pans over the top of the car.
The net result of this StyleCam experience is a view of the
car that is far more visually rich and influenced by an
author who intends to convey a certain message, rather than
using simple camera controls as is typical in current 3D
viewers.
RELATED WORK
Much prior research has explored camera techniques for 3D
virtual environments. Many of the techniques use a 2D
mouse or stylus as an input device and introduce metaphors
to assist the user. Perhaps the most ubiquitous metaphor,
the cinematic camera, enables users to tumble, track and
dolly a viewpoint. Various other metaphors have been
explored by researchers, including orbiting and flying [32],
through-the-lens control [18], points and areas of interest
[22], using constraints [24, 29], drawing a path [21], two-handed
techniques [1, 38], and combinations of techniques
[30, 37]. Bowman et al. present taxonomies and
evaluations of various schemes [3, 4].
Figure 6. Example StyleCam experience. Top: system components and their reaction to user input. Bottom: what the user sees.
Other techniques involve automatic framing of the areas of
interest as typically found in game console based adventure
games which use a "chase airplane" metaphor for a third
person perspective. Systems that utilize higher degree-of-freedom
input devices offer additional control and
alternative metaphors have been investigated, including
flying
[7, 34]
, eyeball-in-hand
[35]
, and worlds in miniature
[31]
. The major difference between this body of prior
research and our work is that we attempt to give the author
substantially more influence over the types of views and
transitions between them as the user navigates in the virtual
space.
Beyond techniques for navigating the scene, extra
information can also be provided to aid navigation. These
include global maps in addition to local views [12, 14], and
various landmarks [9, 33]. Others have investigated
integrating global and local views, using various distorted
spaces including "fisheye" views [6, 15]. At present, in an
attempt to keep the visual space uncluttered, our work does
not have mechanisms for providing global information to
the user, however, this is something we may incorporate as
our system progresses.
Approaches which give the author more influence include
guided tours where camera paths are prespecified for the
end user to travel along. Galyean [17] proposes a "river
analogy" where a user, on a metaphorical boat, can deviate
from the guided path, the river, by steering a conceptual
"rudder". Fundamental work by Hanson and Wernert
[19,
36]
proposes "virtual sidewalks" which are authored by
constructing virtual surfaces and specifying gaze direction,
vistas, and procedural events (e.g., fog and spotlights) along
the sidewalk. Our system builds upon the guided tour and
virtual sidewalk ideas but differs by providing authoring
elements that enable a much more stylized experience.
Specifically, we offer a means of presenting 3D, 2D, and
temporal media experiences through a simple, unified,
singular user interaction technique that supports both
spatial and temporal navigation.
Robotic planning algorithms have been used to assist or
automatically create a guided tour of a 3D scene, in some
cases resulting in specific behaviors trying to satisfy goals
and constraints [10, 11]. Individual camera framing of a
scene has been used to assist in viewing or manipulation
tasks [27]. Rules can be defined for cameras to
automatically frame a scene that follow cinematic
principles, such as keeping the virtual actors visible in the
scene or following the lead actor [20]. Yet another system
[2] allows authors to define storyboard frames and the
system defines a set of virtual cameras in the 3D scene to
support the visual composition. This previous work assists
in the authoring aspects by ceding some control to the
system. Our work too involves some automatic system
control, but we emphasize author control.
Image based virtual reality environments such as
QuicktimeVR [8] utilize camera panning and zooming and
allow users to move to defined vista points. The driving
metaphor has also been used for navigating interactive
video, as seen in the Movie-Maps system [23]. More
recently, the Steerable Media project [25] for interactive
television aims to retain the visual aesthetic of existing
television but increase the level of user interactivity. The
user is given the ability to control the content progression
by seamlessly integrating video with augmented 2D and 3D
graphics. While our goals are similar in that we hope to
enhance the aesthetics of the visual experience, we differ in
that our dominant media type is 3D graphics with
augmented temporal media (animations and visual effects)
and traditional 2D media (video, still images).
Lastly, we note that widely available 3D viewers or
viewing technologies such as VRML, Cult3D, Shockwave,
Viewpoint, Virtools, and Pulse3D, are becoming very
popular but offer the standard camera controls of vista
points, track, tumble, and zoom. We hope our explorations
will ultimately assist in offering new experience and
interaction approaches for future incarnations of these 3D
viewers.
IMPLEMENTATION
StyleCam is implemented using Alias|wavefront's MAYA
3D modeling and animation package. We use MAYA to
author the 3D content to be visualized, the required camera
surfaces, animation clips, and required associations
between them. A custom written MAYA plugin allows the
user to control their view of the 3D content based on their
mouse input and the authored camera surfaces, animation
clips, and associations.
The following description of our implementation assumes
some knowledge of MAYA, although we have endeavoured
to be as general as possible without sacrificing accuracy.
6.1. Authoring
First, money-shots are created by defining a MAYA camera
with specific position, orientation, and other camera
parameters. Then, a camera surface which intersects the
position of the money-shot camera is defined by creating an
appropriate non-trimmed NURBS surface within MAYA.
To include an optional camera look-at point, the author
simply defines a point in 3D space (using a MAYA
locator). Finally, to make these components easily locatable
by the plugin, they are grouped under a named MAYA
node within its dependency graph.
Then, StyleCam animation clips are created as one would
normally create animations in MAYA, using its TRAX
non-linear animation editor. Animation clips at this stage
are given meaningful, consistent, names in order to
facilitate their identification later when associating them
with events.
StyleCam allows the author to create scripts and associate
them with events. Supported events are session startup,
camera surface entry, camera surface exit, and camera
surface timeout (Figure 7).
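A sketch of how such event-to-script associations might be represented is shown below (hypothetical names, not MAYA's actual scripting interface); the event types and the kind of logic the scripts can express are described in the following paragraphs:

# Hypothetical sketch of StyleCam's event-to-script association.
# Supported events: session startup, surface entry, surface exit
# (with the exit direction), and surface timeout.
from typing import Callable, Dict

Script = Callable[[dict], None]   # receives session state, may pick transitions

class EventTable:
    def __init__(self):
        self.scripts: Dict[str, Script] = {}

    def on(self, event: str, script: Script):
        self.scripts[event] = script

    def fire(self, event: str, session_state: dict):
        if event in self.scripts:
            self.scripts[event](session_state)

# Example (hypothetical surfaces): only route to the "engine" surface
# after the "grille" surface has already been visited.
def exit_front_surface(state: dict):
    if "grille" in state.get("visited", set()):
        state["next_surface"] = "engine"
    else:
        state["next_surface"] = "rear"

events = EventTable()
events.on("surface_exit_right", exit_front_surface)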
Figure 7. StyleCam events
The session startup event is triggered only once when the
user initially begins using StyleCam to view a scene. Exit
events are triggered when the user leaves a camera surface
from one of four directions. Associated scripts can specify
destination camera surfaces and types of transitions to be
performed. Time-out events are triggered when the mouse
is idle for a given duration while on a particular camera
surface, and can be used to launch an automatic
presentation. StyleCam's event and script mechanism
provides for the use of logic to dynamically alter the
presentation. For example, scripts can ensure that some
surfaces are only visited once, while others are shown only
after certain surfaces have already been visited.
We implement variable control-display gain on a camera
surface (Figure 3) by varying the separation between the
isoparms on the NURBS surface.
As shown in Figure 4, StyleCam supports three types of
transitions: automatic, authored, and slate.
Automatic transitions are those that smoothly move the
camera from one camera surface to another without
requiring any authored animation clips. This is done by
having the system perform quaternion [28] interpolation of
camera orientation, combined quaternion and linear
interpolation of camera position, and linear interpolation of
other camera properties such as focal length. Using
quaternion interpolation ensures smooth changes in
orientation while defining a smooth arcing path for the
position. At each time step in the transition, two
quaternions representing the required fractional rotations of
the position and orientation vectors of the camera are
calculated and applied to the source vectors. In addition, the
magnitude of the position vector is adjusted by linear
interpolation between the source and destination position
vector magnitudes. The result is a series of intermediate
camera positions and orientations as Figure 8 illustrates.
Figure 8. Combined quaternion and linear interpolation
Authored transitions involve the playback of preauthored
animation clips. This gives the author complete control
over the user experience during the transition including the
pacing, framing and visual effects.
Slate transitions are a special case of authored transitions.
Used to present 2D media, slate transitions are authored by
placing an image plane in front of the camera as it
transitions between camera surfaces. Various visual effects
can be achieved by using multiple image planes
simultaneously and by animating transparency and other
parameters of these image planes. While the slate transition
is in progress, the camera is simultaneously being smoothly
interpolated towards the destination camera surface. This
essentially allows for a "soft" fade from a camera view, to a
slate, and back, as Figure 9 illustrates.
Figure 9. Slate transitions
6.2. Interaction
When the StyleCam plugin is activated, the first money-shot
of the first camera surface is used as the initial view. If
a look-at point is defined for this camera surface, the
orientation of the user camera is set such that the camera
points directly at the look-at point. Otherwise, the
orientation is set to the normal of the camera surface at the
money-shot viewpoint's position.
The user's mouse movements and button presses are monitored
by the StyleCam plugin. Mouse drags result in the camera
moving along the current camera surface. Specifically, for a
given mouse displacement (dx, dy), the new position of the
camera on the camera surface (in uv-coordinates local to
the camera surface) is given by
(u1,v1) = (u0,v0) + c*(dx, dy)
where (u0, v0) is the last position of the camera, and c is the
gain constant. If either the u or v coordinate of the resulting
position is not within the range [0,1], the camera has left
the current camera surface. At this point, the author-scripted
logic is executed to determine the next step. First,
the destination money-shot is resolved. Next, an
appropriate transition is performed to move to the next
camera surface.
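Putting these pieces together, a simplified sketch of the per-drag update and of one step of an automatic transition is given below (hypothetical names; the real plugin is written against MAYA, and quaternion interpolation of camera orientation is handled analogously but omitted here). The drag update implements the (u1, v1) equation and edge test above; the transition rotates the camera's position vector toward the destination while linearly interpolating its magnitude and focal length:

import math

def drag_update(u0, v0, dx, dy, c):
    # (u1, v1) = (u0, v0) + c*(dx, dy); leaving [0,1] in u or v means the
    # camera has left the surface, and author-scripted logic picks the
    # destination money-shot and transition.
    u1, v1 = u0 + c * dx, v0 + c * dy
    on_surface = 0.0 <= u1 <= 1.0 and 0.0 <= v1 <= 1.0
    return u1, v1, on_surface

def _normalize(v):
    m = math.sqrt(sum(x * x for x in v))
    return tuple(x / m for x in v), m

def _slerp_dir(a, b, t):
    # Spherically interpolate two unit vectors: the effect of applying a
    # fractional rotation between them, giving a smooth arcing path.
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    theta = math.acos(dot)
    if theta < 1e-6:
        return a
    s = math.sin(theta)
    return tuple((math.sin((1 - t) * theta) * x + math.sin(t * theta) * y) / s
                 for x, y in zip(a, b))

def transition_step(src_pos, dst_pos, src_focal, dst_focal, t):
    # One intermediate camera of an automatic transition (t in [0, 1]):
    # rotate the position vector toward the destination, and linearly
    # interpolate its magnitude and other properties such as focal length.
    d0, m0 = _normalize(src_pos)
    d1, m1 = _normalize(dst_pos)
    direction = _slerp_dir(d0, d1, t)
    magnitude = (1 - t) * m0 + t * m1
    position = tuple(magnitude * x for x in direction)
    focal = (1 - t) * src_focal + t * dst_focal
    return position, focal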
StyleCam supports temporal control or "scrubbing" of
animations. During navigation mode, the user's mouse
drags control the camera's position on the camera surface.
However, when the user moves off a camera surface into an
animated transition, mouse drags control the (invisible)
timeslider of the animation. Time is advanced when the
mouse is dragged in the same direction that the camera
exited the camera surface and reversed if the directions are
also reversed. When the mouse button is released, the
system takes over time management and smoothly ramps
the time steps towards the animation's original playback
rate.
Our present implementation supports scrubbing only for
automatic transitions. Authored and slate transitions are
currently uninterruptible. There is however no technical
reason why all transitions cannot support scrubbing. In
future versions we intend to give the author the choice of
determining whether or not any given transition is
scrubable. This is important since in some cases it may be
desirable to force the animation to playback uninterrupted
at a certain rate.
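A rough sketch of this scrubbing behaviour (simplified, with hypothetical names): drags along the direction in which the camera exited the surface advance the clip time, reversed drags rewind it, and after the button is released the step size ramps toward the clip's authored playback rate:

# Simplified sketch of temporal control ("scrubbing") of a transition
# clip. exit_dir is the screen-space direction in which the camera left
# the surface; drags along it advance time, against it reverse time.
class Scrubber:
    def __init__(self, duration, authored_rate, scrub_gain=0.01, ramp=0.1):
        self.t = 0.0                        # current clip time, in seconds
        self.duration = duration
        self.authored_rate = authored_rate  # seconds of clip per frame
        self.scrub_gain = scrub_gain
        self.ramp = ramp
        self.rate = 0.0                     # current automatic step size

    def drag(self, dx, dy, exit_dir):
        # Project the mouse displacement onto the exit direction.
        along = dx * exit_dir[0] + dy * exit_dir[1]
        self.t = min(max(self.t + self.scrub_gain * along, 0.0), self.duration)
        self.rate = 0.0                     # user is in control; no auto playback

    def release_step(self):
        # Called once per frame after the button is released: smoothly
        # ramp the step size toward the authored playback rate.
        self.rate += self.ramp * (self.authored_rate - self.rate)
        self.t = min(self.t + self.rate, self.duration)
        return self.t >= self.duration      # True when the clip has finished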
EVALUATION
We conducted an informal user study to get a sense of
users' initial reactions to using StyleCam. Seven
participants, three of whom had experience with 3D
graphics applications and camera control techniques, and
four who had never used a 3D application or camera
controls, were asked to explore a 3D car model using
StyleCam. In order to ensure the study resembled our
intended casual usage scenario, we gave participants only
minimal instructions. We explained the click-and-drag
action required to manipulate the camera, gave a brief rationale
for the study, and asked them to imagine they were experiencing an
interactive advertisement for that car. We did not identify
the various components (camera surfaces, animated
transitions, etc) nor give any details on them. This was
deliberately done so that the participants could experience
these components in action for themselves and give us
feedback without knowing in advance of their existence.
One very promising result was that none of the participants
realized that they were switching between controlling the
camera and controlling the time slider on the animations.
They felt that they had the same type of control throughout,
indicating that our blending between spatial and temporal
control worked remarkably well. Also, the simplicity of the
interaction technique (essentially a single click-and-drag
action) was immediately understood and usable by all our
users.
Another reaction from all the participants was that, to
varying degrees, they sometimes felt that they were not in
control of the interaction when the uninterruptable
animations occurred. This was particularly acute when the
information in the animations seemed unrelated to their
current view. In these cases, participants indicated that they
had no idea what triggered these animations and were often
annoyed at the sudden interruptions. However when the
information was relevant the interruptions were not as
annoying and often actually appreciated. In some cases
participants indicated that they would have liked to be able
to replay the animation or to have it last longer. This
highlights the importance of carefully authoring the
intermingling of uninterruptable animations with the rest of
the interaction experience.
Participants also indicated that they would have liked the
ability to click on individual parts of the car model in order
to inspect them more closely. This request is not surprising
since we made no effort in our current implementation to
support pointing. However, we believe that in future
research StyleCam could be extended to include pointing.
As we expected, all the participants with prior 3D graphics
camera experience stated that they at times would have
liked full control of the camera, in addition to the
constrained control we provided. Participants without this
prior experience, however, did not ask for this directly
although they indicated that there were some areas of the
car model that they would have liked to see but could not
get to. However, this does not necessarily imply full control
of the camera is required. We believe that this issue can be
largely alleviated at the authoring phase by ascertaining
what users want to see for a particular model and ensuring
that those features are accessible via the authored camera
surfaces. Interestingly, the participant with the most 3D
graphics experience commented that the automatic
transitions and smooth camera paths during those
transitions were very good and that "for those who don't
know 3D and stuff, this would be very good"!
DISCUSSION & CONCLUSIONS
Central to our StyleCam system is the integration of spatial
and temporal controls into a single user interaction model.
The implications of this interaction model go far beyond a
simple interaction technique. The blending of spatial and
temporal control presents a completely new issue that an
author needs to understand and consider when creating
these interactive visual experiences. As evident from the
comments of our users, temporal control can feel very
much like spatial control even when scrubbing backwards
in an animation when the animation consists of moving the
viewing camera around the central object of interest.
However, if the animation is not around the central object
of interest, for example in some of our slate animations,
temporal control can produce very different sensations.
These include the feeling of moving backwards in time,
interruption of a well paced animation, jarring or ugly
visuals, and sometimes even nonsensical content.
As a result, the author needs to be extremely cognizant of
these artefacts and make design decisions as to when and
where to relinquish control - and how much control - to the
user. At one extreme, the author can specify that certain
animations are completely uninterruptible by the user. In
the experience we authored for our user study, we included
several of these types of transitions. As discussed earlier,
whether users favored this depended heavily on the content.
In other words, in some cases, as authors, we did not make
the right decision. Further improvements could include
partially interruptable animations. For example, we may not
allow movement backwards in time but allow the user to
control the forward pacing. This will largely solve the
nonsensical content problem but may still result in
occasionally jarring visuals.
If we intend to support these various types of control, we
must also be able to set the users' expectations of what type
of control they have at any given time. It is clear that the
current StyleCam approach of switching between spatial and temporal
control without any explicit indication to the user that a
switch is happening works in most cases. In the cases
where it fails, either the visual content itself should indicate
what control is possible, or some explicit mechanism is
required to inform the user of the current or upcoming
control possibilities. In addition to the obvious solution of
using on-screen visual indicators (e.g., changing cursors) to
indicate state, future research could include exploring "hint-ahead"
mechanisms that indicate upcoming content if the
user chooses to stay on their current course of travel. For
example, as the user reaches the edge of a camera surface, a
"voice-over" could say something like "now we're heading
towards the engine of the car". Alternatively, a visual
"signpost" could fade-in near the cursor location to convey
this information. These ideas coincide with research that
states that navigation routes must be discoverable by the
user [16].
It is very clear from our experiences with StyleCam that the
user's viewing experience is highly dependent on the talent
and skill of the author. It is likely that skills from movie
making, game authoring, advertising, and theme park
design would all assist in authoring compelling
experiences. However, we also realize that authoring skills
from these other genres do not necessarily directly translate
due to the unique interaction aspects of StyleCam.
While StyleCam has the appropriate components for
creating compelling visual experiences, it is still currently a
research prototype that requires substantial skills with
MAYA. We envision a more author-friendly tool that is
based on the conceptual model of StyleCam.
Some future avenues that we intend to explore include
supporting soundtracks, extensions to enable pointing to
elements in the 3D scene, and mechanisms for authoring
animation paths using alternate techniques such as
Chameleon [13].
Finally, it is important to note that StyleCam is not limited
to product or automobile visualization. Other domains such
as visualization of building interiors and medical
applications could also utilize the ideas presented in this
paper. Figures 10, 11, and 12 illustrate some examples.
ACKNOWLEDGEMENTS
We thank Scott Guy and Miles Menegon for assistance in
figure and video creation.
REFERENCES
1. Balakrishnan, R., & Kurtenbach, G. (1999). Exploring
bimanual camera control and object manipulation in 3D
graphics interfaces. ACM CHI 1999 Conference on
Human Factors in Computing Systems. p. 56-63.
2. Bares, W., McDermott, S., Boudreaux, C., & Thainimit,
S. (2000). Virtual 3D camera composition from frame
constraints. ACM Multimedia. p. 177-186.
3. Bowman, D.A., Johnson, D.B., & Hodges, L.F. (1997).
Travel in immersive virtual environments. IEEE
VRAIS'97 Virtual Reality Annual International
Symposium. p. 45-52.
4. Bowman, D.A., Johnson, D.B., & Hodges, L.F. (1999).
Testbed environment of virtual environment interaction.
ACM VRST'99 Symposium on Virtual Reality Software
and Technologies. p. 26-33.
5. Buxton, W. (1990). A three-state model of graphical input.
Human-Computer Interaction - INTERACT'90 (ed. D. Diaper).
Elsevier Science Publishers B.V. (North-Holland): Amsterdam.
p. 449-456.
6. Carpendale, M.S.T., & Montagnese, C.A. (2001). A
framework for unifying presentation space. ACM
UIST'2001 Symposium on User Interface Software and
Technology. p. 61-70.
7. Chapman, D., & Ware, C. (1992). Manipulating the
future: predictor based feedback for velocity control in
virtual environment navigation. ACM I3D'92
Symposium on Interactive 3D Graphics. p. 63-66.
8. Chen, S.E. (1995). QuickTime VR: An image-based
approach to virtual environment navigation. ACM
SIGGRAPH'95 Conference on Computer Graphics and
Interactive Techniques. p. 29-38.
9. Darken, R., & Sibert, J. (1996). Wayfinding strategies
and behaviours in large virtual worlds. ACM CHI'96
Conference on Human Factors in Computing Systems.
p. 142-149.
10. Drucker, S.M., Galyean, T.A., & Zeltzer, D. (1992).
CINEMA: A system for procedural camera movements.
ACM Symposium on Interactive 3D Graphics. p. 67-70.
11. Drucker, S.M., & Zeltzer, D. (1994). Intelligent camera
control in a virtual environment. Graphics Interface. p.
190-199.
12. Elvins, T., Nadeau, D., Schul, R., & Kirsh, D. (1998).
Worldlets: 3D thumbnails for 3D browsing. ACM
CHI'98 Conf. on Human Factors in Computing Systems.
p. 163-170.
13. Fitzmaurice, G.W. (1993). Situated information spaces
and spatially aware palmtop computers.
Communications of the ACM, 36(7). p. 38-49.
14. Fukatsu, S., Kitamura, Y., Masaki, T., & Kishino, F.
(1998). Intuitive control of bird's eye overview images
for navigation in an enormous virtual environment.
ACM VRST'98 Sympoisum on Virtual Reality Software
and Technology. p. 67-76.
15. Furnas, G. (1986). Generalized fisheye views. ACM
CHI 1986 Conference on Human Factors in Computing
Systems. p. 16-23.
16. Furnas, G. (1997). Effective view navigation. ACM
CHI'97 Conference on Human Factors in Computing
Systems. p. 367-374.
17. Galyean, T.A. (1995). Guided navigation of virtual
environments. ACM I3D'95 Symposium on Interactive
3D Graphics. p. 103-104.
18. Gleicher, M., & Witkin, A. (1992). Through-the-lens
camera control. ACM SIGGRAPH'92 Conference on Computer
Graphics and Interactive Techniques. p. 331-340.
19. Hanson, A.J., & Wernert, E.A. (1997). Constrained 3D
navigation with 2D controllers. IEEE Visualization'97. p. 175-182.
20. He, L., Cohen, M.F., & Salesin, D. (1996). The virtual
cinematographer: a paradigm for automatic real-time
camera control and directing. ACM SIGGRAPH'96
Conference on Computer Graphics and Interactive
Techniques. p. 217-224.
21. Igarashi, T., Kadobayashi, R., Mase, K., & Tanaka, H.
(1998). Path drawing for 3D walkthrough. ACM UIST
1998 Symposium on User Interface Software and
Technology. p. 173-174.
22. Jul, S., & Furnas, G. (1998). Critical zones in desert
fog: aids to multiscale navigation. ACM Symposium on
User Interface Software and Technology. p. 97-106.
23. Lippman, A. (1980). Movie-maps: an application of the
optical videodisc to computer graphics. ACM
SIGGRAPH'80 Conference on Computer Graphics and
Interactive Techniques. p. 32-42.
24. Mackinlay, J., Card, S., & Robertson, G. (1990). Rapid
controlled movement through a virtual 3D workspace.
ACM SIGGRAPH 1990 Conference on Computer
Graphics and Interactive Techniques. p. 171-176.
25. Marrin, C., Myers, R., Kent, J., & Broadwell, P. (2001).
Steerable media: interactive television via video
synthesis. ACM Conference on 3D Technologies for the
World Wide Web. p. 7-14.
26. Newman, W. (1968). A system for interactive graphical
programming. AFIPS Spring Joint Computer Conference.
p. 47-54.
27. Phillips, C.B., Badler, N.I., & Granieri, J. (1992).
Automatic viewing control for 3D direct manipulation.
ACM Symposium on Interactive 3D Graphics. p. 71-74.
28. Shoemake, K. (1985). Animating rotation with
quaternion curves. ACM SIGGRAPH'85 Conference on Computer
Graphics and Interactive Techniques. p. 245-254.
29. Smith, G., Salzman, T., & Stuerzlinger, W. (2001). 3D
Scene manipulation with 2D devices and constraints.
Graphics Interface. p. 135-142.
30. Steed, A. (1997). Efficient navigation around complex
virtual environments. ACM VRST'97 Conference on
Virtual Reality Software and Technology. p. 173-180.
31. Stoakley, R., Conway, M., & Pausch, R. (1995). Virtual
reality on a WIM: Interactive worlds in miniature. ACM
CHI 1995 Conference on Human Factors in Computing
Systems. p. 265-272.
32. Tan, D., Robertson, G., & Czerwinski, M. (2001).
Exploring 3D navigation: combining speed-coupled
flying with orbiting. ACM CHI'2001 Conference on
Human Factors in Computing Systems. p. 418-425.
33. Vinson, N. (1999). Design guidelines for landmarks to
support navigation in virtual environments. ACM
CHI'99 Conference on Human Factors in Computing
Systems. p. 278-285.
34. Ware, C., & Fleet, D. (1997). Context sensitive flying
interface. ACM I3D'97 Symposium on Interactive 3D
Graphics. p. 127-130.
35. Ware, C., & Osborne, S. (1990). Exploration and virtual
camera control in virtual three dimensional
environments. ACM I3D'90 Symposium on Interactive
3D Graphics. p. 175-183.
36. Wernert, E.A., & Hanson, A.J. (1999). A framework for
assisted exploration with collaboration. IEEE
Visualization. p. 241-248.
37. Zeleznik, R., & Forsberg, A. (1999). UniCam - 2D
Gestural Camera Controls for 3D Environments. ACM
Symposium on Interactive 3D Graphics. p. 169-173.
38. Zeleznik, R., Forsberg, A., & Strauss, P. (1997). Two
pointer input for 3D interaction. ACM I3D Symposium
on Interactive 3D Graphics. p. 115-120.
| 3D viewers;camera controls;3D navigation;3D visualization;interaction techniques
185 | Tactons: Structured Tactile Messages for Non-Visual Information Display | Tactile displays are now becoming available in a form that can be easily used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate messages non-visually. A range of different parameters can be used for Tacton construction including: frequency, amplitude and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or in mobile and wearable devices. This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given. | Introduction
The area of haptic (touch-based) human computer interaction
(HCI) has grown rapidly over the last few years. A
range of new applications has become possible now that
touch can be used as an interaction technique (Wall et al.,
2002). However, most current haptic devices have scant
provision for tactile stimulation, being primarily pro-grammable
, constrained motion force-feedback devices
for kinaesthetic display. The cutaneous (skin-based)
component is ignored even though it is a key part of our
experience of touch (van Erp, 2002). It is, for example,
important for recognising texture, and detecting slip,
compliance and direction of edges. As Tan (1997) says
"In the general area of human-computer interfaces ... the
tactual sense is still underutilised compared with vision
and audition". One reason for this is that, until recently,
the technology for tactile displays was limited.
Tactile displays are not new but they have not received
much attention from HCI researchers as they are often
engineering prototypes or designed for very specific applications (Kaczmarek et al., 1991). They have been used
in areas such as tele-operation or displays for blind people
to provide sensory substitution where one sense is
used to receive information normally received by another
(Kaczmarek et al.). Most of the development of these
devices has taken place in robotics or engineering labs
and has focused on the challenges inherent in building
low cost, high-resolution devices with realistic size,
power and safety performance. Little research has gone
into how they might actually be used at the user interface.
Devices are now available that allow the use of tactile
displays so the time is right to think about how they
might be used to improve interaction.
In this paper the concept of Tactons, or tactile icons, is
introduced as a new communication method to complement
graphical and auditory feedback at the user interface
. Tactons are structured, abstract messages that can be
used to communicate messages non-visually. Conveying
structured messages through touch will be very useful in
areas such as wearable computing where screens are limited
. The paper gives some background to the perception
and use of tactile stimuli and then describes the design of
Tactons. It finishes with examples of potential uses for
Tactons.
Background and previous work
The skin is the largest organ in the body, about 2 m^2 in
the average male (Montagu, 1971). Little direct use is
made of it for displaying information in human-computer
interfaces (Tan and Pentland, 1997, van Erp, 2002), yet a
touch on the hand or other parts of the body is a very rich
experience. The skin can therefore potentially be used as
a medium to communicate information. As a receiving
instrument the skin combines important aspects of the eye
and the ear, with high acuity in both space and time
(Gunther, 2001) giving it good potential as a communication
medium.
The human sense of touch can be roughly split into two
parts: kinaesthetic and cutaneous. "Kinaesthetic" is often
used as catch-all term to describe the information arising
from forces and positions sensed by the muscles and
joints. Force-feedback haptic devices (such as the
PHANToM from SensAble) are used to present information
to the kinaesthetic sense. Cutaneous perception refers
to the mechanoreceptors contained within the skin, and
includes the sensations of vibration, temperature, pain
and indentation. Tactile devices are used to present feedback
to the cutaneous sense.
Current haptic devices use force-feedback to present kinaesthetic
stimuli. This works well for some aspects of
touch (e.g. identifying the geometric properties of objects
) but is poor for features such as texture (normally
perceived cutaneously). Oakley et al. (2000) found that
trying to use texture in a user interface with a force-feedback
device actually reduced user performance. One
reason for this is that the textures had to be made large so
that they could be perceived kinaesthetically, but they
then perturbed users' movements. The use of a tactile
haptic device to present texture would not have this problem
as small indentations in the fingertip would not affect
hand movements. At present, however, there are no haptic
devices that do a good job of presenting both tactile
and force-feedback cues to users.
Current force-feedback devices use a point interaction
model; the user is represented by a single point of contact
corresponding to the tip of a stylus. This is analogous to
exploring the world by remote contact through a stick
thus depriving the user of the rich, spatially varying cutaneous
cues that arise on the finger pad when contacting a
real object (Wall and Harwin, 2001). Users must integrate
temporally varying cues as they traverse the structure of
virtual objects with the single point of contact, which
places considerable demands on short-term memory
(Jansson and Larsson, 2002). Even when exploring simple
geometric primitives, performance is greatly reduced
compared to natural touch. Lederman and Klatzky (1999)
have shown that such removal of cutaneous input to the
fingertip impedes perception of edge direction, which is
an essential component of understanding haptic objects. It
can therefore be seen that tactile feedback and cutaneous
perception are key parts of touch that must be incorporated
into haptic displays if they are to be effective and
usable.
2.1 Vibrotactile actuators
There are two basic types of vibrotactile display device.
These evoke tactile sensations using mechanical vibration
of the skin (usually in the range 10-500Hz) (Kaczmarek
et al., 1991). This is commonly done by vibrating a small
plate pressed against the skin or via a pin or array of pins
on the fingertip. These are very easy to control from standard
PC hardware. Other types of actuator technology
are available, including pneumatic and electrotactile
(Stone, 2000), but these tend to be bulkier and harder to
control so are less useful in many situations.
Figure 1: The pin arrays on the VirTouch tactile
mouse (www.virtouch.com).
The first type of vibrotactile display uses a pin or array of
small pins (e.g. the VirTouch mouse in Figure 1 or those
produced by Summers et al. (2001)) to stimulate the fingertip
. Such devices can present very fine cues for surface
texture, edges, lines, etc. The second type uses larger
point-contact stimulators (e.g. Figure 2 or alternatively
small loudspeaker cones playing tones, or other simple
vibrating actuators placed against the skin as used by Tan
(1997) and in devices such as the CyberTouch glove
www.immersion.com). The cues here are much lower
resolution but can exert more force; they can also be distributed
over the body to allow multiple simultaneous
cues (often mounted in a vest on the user's back or in a
belt around the waist). These devices are both easy to
control and use. For a full review see Kaczmarek et al.
(1991).
Figure 2: Audiological Engineering Corp. VBW32
transducers (www.tactaid.com).
2.2 Previous work on tactile display
One common form of tactile output is Braille, and dynamic
Braille cells are available. A display is made up of
a line of `soft' cells (often 40 or 80), each with 6 or 8 pins
that move up and down to represent the dots of a Braille
cell. The user can read a line of Braille cells by touching
the pins of each cell as they pop up (for more information
see www.tiresias.org). The focus of the work reported
here is not on Braille as it tends to be used mainly for
representing text (although other notations are used, e.g.
music) and the cells are very low resolution (8 pins
maximum). These displays are also very expensive with
an 80 cell display costing around 4000. There have been
many other tactile devices for blind people, such as the
Optacon (TeleSensory Inc.), which used an array of 144
pins to display the input from a camera to the fingertip,
but again these are mainly used for reading text. Pin arrays
produce Braille but can do much more, especially the
higher resolution displays such as shown in Figure 1.
Our research also builds on the work that has been done
on tactile graphics for blind people (this mainly takes the
form of raised lines and dots on special `swell' paper).
Kurze (1997, 1998) and Challis (2001) have developed
guidelines which allow images and objects to be presented
that are understandable through touch by blind
users.
Two other examples show that the cutaneous sense is
very effective for communication. Firstly, Tadoma is a
tactile language used by deaf/blind people. The transmitter
speaks normally and the receiver puts a hand on the
face of the speaker, covering the mouth and neck (Tan
and Pentland, 2001). Tadoma users can listen at very high
speeds (normal speaking speed for experts) and pick up
subtleties of the speech such as accent. In the second example
, Geldard (1957) taught participants a simple tactile
language of 45 symbols, using three intensities, three
durations and five locations on the chest. Participants
were able to learn the alphabet quickly and could recognise
up to 38 words per minute in some cases. Other sensory
substitution systems convert sound into vibration for
hearing-impaired people (e.g. the TactAid system from
Audiological Engineering). Again this shows that cutaneous
perception is very powerful and if we can make use
of it at the user interfaces we will have a rich new way to
present information to users.
Research and existing applications have shown that the
cutaneous sense is a very powerful method of receiving
information. Other work has shown that it can be used in
user interfaces and wearable computers (Gemperle et al.,
1998). Tan has begun to investigate the use of tactile displays
on wearable computers (Tan and Pentland, 1997).
She used a 3x3 grid of stimulators on a user's back to
provide navigation information. Informal results suggested
it was useful but no formal evaluation has taken
place. Other relevant work has taken place in aircraft
cockpits to provide pilots with navigation information
(van Veen and van Erp, 2001, Rupert, 2000). In these
examples only simple tactile cues for direction have been
provided. For example, an actuator may be vibrated on
one side of the body to indicate the direction to turn.
More sophisticated cues could be used to provide much
more information to users without them needing to use
their eyes.
Gunther et al. have used tactile cues to present `musical'
compositions to users (Gunther, 2001, Gunther et al.,
2002). They say: "The approach taken ... views haptic
technologies in particular the vibrotactile stimulator
as independent output devices to be used in conjunction
with the composition and perception of music. Vibrotactile
stimuli are viewed not as signals carrying information
per se, but as aesthetic artifacts themselves". He used an
array of 13 transducers across the body of a `listener' so
that he/she could experience the combined sonic/tactile
presentation. Gunther created a series of compositions
played to listeners who appeared to enjoy them. This
work was artistic in nature so no formal usability assessments
were made but the listeners all liked the experience.
In order to create a tactile composition (the same is true
for the Tactons described below) a good understanding of
the experience of touch is needed. However, as Gunther
et al. suggest: "It is indeed premature to hammer out the
details of a language for tactile composition. It seems
more productive at this point in time to identify the underpinnings
of such a language, specifically those dimensions
of tactile stimuli that can be manipulated to form
the basic vocabulary elements of a compositional language".
Research is needed to gain a more systematic
understanding of cutaneous perception for use in the
presentation of such messages.
Enriquez and MacLean (2003) recently proposed `haptic
icons', which they define as "brief programmed forces
applied to a user through a haptic interface, with the role
of communicating a simple idea in a manner similar to
visual or auditory icons". The problem they are trying to
address is different to that of Tactons, as they say "With
the introduction of "active" haptic interfaces, a single
handle e.g. a knob or a joystick can control several
different and perhaps unrelated functions. These multi-function
controllers can no longer be differentiated from
one another by position, shape or texture... Active haptic
icons, or "hapticons", may be able to solve this problem
by rendering haptically distinct and meaningful sensations
for the different functions". These use one degree-of
-freedom force-feedback devices, rather than tactile
displays, so encode information very differently to Tactons
. They report the construction of a tool to allow a user
to create and edit haptic icons. This is early work and
they do not report results from the use of hapticons in any
interfaces. Their results, however, will be directly relevant
to Tactons.
Tactons
Given that the cutaneous sense is rich and a powerful
communication medium currently little utilised in HCI,
how can we make effective use of it? One approach is to
use it to render objects from the real world more realistically
in virtual environments, for example in improving
the presentation of texture in haptic devices. It could also
be used to improve targeting in desktop interactions along
the lines suggested by Oakley et al. (2000). In this paper
it is suggested that it can additionally be used to present
structured informational messages to users.
Tactons are structured, abstract messages that can be used
to communicate complex concepts to users non-visually.
Shneiderman (1998) defines an icon as "an image, picture
or symbol representing a concept". Tactons can represent
complex interface concepts, objects and actions very concisely.
Visual icons and their auditory equivalent earcons
(Blattner et al., 1989, Brewster et al., 1994) are very
powerful ways of displaying information but there is currently
no tactile equivalent. In the visual domain there is
text and its counterpart the icon; the same is true in sound
with synthetic speech and the earcon. In the tactile domain
there is Braille but it has no `iconic' counterpart.
Tactons fill this gap. Icons/Earcons/Tactons form a simple,
efficient language to represent concepts at the user
interface.
Tactons are similar to Braille in the same way that visual
icons are similar to text, or earcons are similar to synthetic
speech. For example, visual icons can convey complex
information in a very small amount of screen space,
much smaller than for a textual description. Earcons convey
information in a small amount of time as compared to
synthetic speech. Tactons can convey information in a
smaller amount of space and time than Braille. Research
will also show which form of iconic display is most suitable
for which type of information. Visual icons are good
for spatial information, earcons for temporal. One property
of Tactons is that they operate both spatially and
temporally so they can complement both icons and earcons.
Further research is needed to understand how these
different types of feedback work together.
Using speech as an example from the auditory domain:
presenting information in speech is slow because of its
serial nature; to assimilate information the user must hear
a spoken message from beginning to end and many words
may have to be comprehended before the message can be
understood. With earcons the messages are shorter and
therefore more rapidly heard, speeding up interactions.
The same is true of Tactons when compared to Braille.
Speech suffers from many of the same problems as
graphical text in text-based computer systems, as this is
also a serial medium. Barker & Manji (1989) claim that
an important limitation of text is its lack of expressive
capability: It may take many words to describe a fairly
simple concept. Graphical iconic displays were introduced
that speeded up interactions as users could see a
picture of the thing they wanted instead of having to read
its name from a list (Barker and Manji, 1989). In the
same way, an encoded tactile message may be able to
communicate its information in fewer symbols. The user
feels the Tacton then recalls its meaning rather than having
the meaning described in Braille (or speech or text).
The icon is also (in principle) universal: it means the
same thing in different languages and the Tacton would
have similar universality.
Designing with Tactons
Tactons are created by encoding information using the
parameters of cutaneous perception. The encoding is
similar to that of earcons in sound (Blattner et al., 1989,
Brewster et al., 1994) where each of the musical parameters
(e.g. timbre, frequency, amplitude) is varied to encode
information. Similar parameters can be used for
Tactons (although their relative importance is different).
As suggested by Blattner, short motifs could be used to
represent simple objects or actions and these can then be
combined in different ways to represent more complex
messages and concepts. As Tactons are abstract the mapping
between the Tacton and what it represents must be
learned, but work on earcons has shown that learning can
take place quickly (Brewster, 1998b).
The properties that can be manipulated for Tactons are
similar to those used in the creation of earcons. The parameters
for manipulation also vary depending on the
type of transducer used; not all transducers allow all types
of parameters. The general basic parameters are:
Frequency: A range of frequencies can be used to differentiate
Tactons. The range of 20-1000 Hz is perceivable
but maximum sensitivity occurs around 250 Hz (Gunther
et al., 2002). The number of discrete values that can be
differentiated is not well understood, but Gill (2003) suggests
that a maximum of nine different levels can be used.
As in audition, a change in amplitude leads to a change in
the perception of frequency so this has an impact on the
use of frequency as a cue. The number of levels of frequency
that can be discriminated also depends on whether
the cues are presented in a relative or absolute way. Making
relative comparisons between stimuli is much easier
than absolute identification, which will lead to much
fewer discriminable values, as shown in the work on earcon
design (Brewster et al., 1994).
Amplitude: Intensity of stimulation can be used to encode
values to present information to the user. Gunther (2002)
reports that the intensity range extends to 55 dB above the
threshold of detection; above this pain occurs. Craig and
Sherrick (1982) indicate that perception deteriorates
above 28 dB so this would seem to be a useful maximum.
Gunther (2001) reports that various values, ranging from
0.4dB to 3.2dB, have been reported for the just noticeable
difference (JND) value for intensity. Gill states that
no more than four different intensities should be used
(Gill, 2003). Again the number of useful discriminable
values will depend on absolute or relative presentation of
stimuli. Due to the interactions between this and frequency
several researchers have suggested that they be
combined into a single parameter to simplify design.
Waveform: The perception of wave shape is much more
limited than with the perception of timbre in sound. Users
can differentiate sine waves and square waves but more
subtle differences are more difficult (Gunther, 2001).
This limits the number of different values that can be encoded
and makes this a much less important variable than
it is in earcon design (where it is one of the key variables).
Duration: Pulses of different durations can encode information.
Gunther (2001) investigated a range of subjective
responses to pulses of different durations. He found that
stimuli lasting less than 0.1 seconds were perceived as
taps or jabs whereas stimuli of longer duration, when
combined with gradual attacks and decays, may be perceived
as smoothly flowing tactile phrases. He suggests
combining duration with alterations in the envelope of a
vibration, e.g. an abrupt attack feels like a tap against the
skin, a gradual attack feels like something rising up out of
the skin.
Rhythm: Building on from duration, groups of pulses of
different durations can be composed into rhythmic units.
This is a very powerful cue in both sound and touch.
Gunther (2001) suggests that differences in duration can
be used to group events when multiple events occur on
the same area of skin.
Specific transducer types allow other parameters to be
used:
Body location: Spatially distributed transducers can encode
information in the position of stimulation across the
body. The choice of body location for vibrotactile display
is important, as different locations have different levels of
sensitivity and spatial acuity. A display may make use of
several body locations, so that the location can be used as
another parameter, or can be used to group tactile stimuli.
The fingers are often used for vibrotactile displays because
of their high sensitivity to small amplitudes and
their high spatial acuity (Craig and Sherrick, 1982). However
, the fingers are often required for other tasks, so
other body locations may be more suitable. Craig and
Sherrick suggest the back, thigh and abdomen as other
suitable body locations. They report that, once subjects
have been trained in vibrotactile pattern recognition on
the back, they can almost immediately recognise the same
patterns when they are presented to the thigh or abdomen.
This transfer also occurs to some extent when patterns are
presented to different fingers after training on one finger,
but is not so immediate.
Certain body locations are particularly suitable, or particularly
unsuitable, for certain types of vibrotactile displays.
For example, transducers should not be placed on
or near the head, as this can cause leakage of vibrations
into the ears, resulting in unwanted sounds (Gunther et
al., 2002). An example of a suitable body location is in
Gunther's Skinscape display, where he positions low frequency
transducers on the torso as this is where low frequencies
are felt when loud music is heard.
The method of attaching the transducers to a user's body
is also important. The pressure of the transducer against
the body has a significant effect on the user's perception
of the vibrations. Transducers should rest lightly on the
skin, allowing the user to feel the vibration against the
skin, and to isolate the location of the vibration with ease.
Exerting too much pressure with the transducer against
the user's body will cause the vibrations to be felt in the
bone structure, making them less isolated due to skeletal
conduction. In addition, tightening the straps holding the
transducer to achieve this level of pressure may impede
circulation (Gunther, 2001).
Rupert (2000) suggests using the full torso for displaying
3D information, with 128 transducers distributed over the
body. His system displays information to pilots about the
location of objects around them in 3D space, by stimulating
the transducers at the part of their body corresponding
to the location of the object in 3D space around them.
This could be used to indicate horizons, borders, targets,
or other aircraft.
Spatiotemporal patterns: Related to position and rhythm,
spatial patterns can also be "drawn" on the user's body.
For example, if a user has a 3x3 array of stimulators located
on his/her back, lines and geometric shapes can be
"drawn" on the back, by stimulating, in turn, the stimulators
that make up that shape. In Figure 3, an `L' shaped
gesture can be drawn by activating the stimulators: 1-4-7-8-9
in turn. Patterns can move about the body, varying in
time and location to encode information. Cholewiak
(1996) and Sherrick (1985) have also looked at low-level
perception of distributed tactile cues.
Figure 3: "Drawing" an L-shaped gesture.
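To make the idea concrete, the short sketch below drives a hypothetical 3x3 back array in this way; the activate() call and its timing values are placeholders for whatever interface a real transducer driver exposes, not an API from any of the systems discussed above.

```python
import time

def activate(stimulator_id, duration_s=0.1):
    """Placeholder for a real transducer driver call (hypothetical)."""
    print(f"stimulator {stimulator_id} on for {duration_s}s")
    time.sleep(duration_s)

def draw_pattern(stimulator_ids, duration_s=0.1, gap_s=0.05):
    """Present a spatiotemporal pattern by firing stimulators in sequence."""
    for sid in stimulator_ids:
        activate(sid, duration_s)
        time.sleep(gap_s)

# An 'L' on a 3x3 array numbered 1-9: down the left column, then along the bottom row.
draw_pattern([1, 4, 7, 8, 9])
```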
Now that the basic parameters for Tactons have been described,
we will give some examples of how they might
be designed to convey information. The fundamental design
of Tactons is similar to that of earcons.
4.1 Compound Tactons
A simple set of Tactons could be created as in Figure 4. A
high-frequency pulse that increases in intensity could
represent `Create'; a lower frequency pulse that decreases
in intensity could represent `Delete'. A two-note falling
Tacton could represent a file and two rising notes a
folder. The mapping is abstract; there is no intuitive link
between what the user feels and what it represents.
Figure 4: Compound Tactons (after Blattner et al., 1989): Create, Delete, File, Folder, Create File, Delete Folder.
These Tactons can then be combined to create compound
messages. For example, `create file' or `delete folder'.
The set of basic elements could be extended and a simple
language of tactile elements created to provide feedback
in a user interface.
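As an illustrative sketch only (the pulse values below are invented for illustration and are not taken from any study), compound Tactons can be thought of as short motifs that are simply concatenated:

```python
from dataclasses import dataclass

@dataclass
class Pulse:
    frequency_hz: float   # sensitivity peaks around 250 Hz
    amplitude: float      # 0..1, to be mapped onto the usable intensity range discussed above
    duration_s: float

# Hypothetical motifs loosely following the Figure 4 examples.
CREATE = [Pulse(300, 0.3, 0.1), Pulse(300, 0.8, 0.1)]   # high frequency, rising intensity
DELETE = [Pulse(100, 0.8, 0.1), Pulse(100, 0.3, 0.1)]   # lower frequency, falling intensity
FILE   = [Pulse(300, 0.5, 0.1), Pulse(150, 0.5, 0.1)]   # two 'notes', falling
FOLDER = [Pulse(150, 0.5, 0.1), Pulse(300, 0.5, 0.1)]   # two 'notes', rising

def compound(*motifs):
    """A compound Tacton is its component motifs played one after another."""
    return [pulse for motif in motifs for pulse in motif]

create_file = compound(CREATE, FILE)
delete_folder = compound(DELETE, FOLDER)
print(len(create_file), "pulses in 'create file'")
```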
4.2 Hierarchical Tactons
Tactons could also be combined in a hierarchical way, as
shown in Figure 5. Each Tacton is a node in a tree and
inherits properties from the levels above it. Figure 5
shows a hierarchy of Tactons representing a hypothetical
family of errors. The top of the tree is a family Tacton
which has a basic rhythm played using a sinewave (a different
family of errors would use a different rhythm so
that they are not confused). The rhythmic structure of
Level 2 inherits the Tacton from Level 1 and adds to it, in
this case a second, higher frequency Tacton played with a
squarewave. At Level 3 the tempo of the two Tactons is
changed. In this way a hierarchical structure can be presented
. The other parameters discussed above could be
used to add further levels.
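The inheritance structure can be sketched as a simple tree in which each node appends to, or rescales, the pulses it inherits; the class and parameter values below are hypothetical and only illustrate the idea.

```python
from copy import deepcopy

class TactonNode:
    """Hypothetical hierarchical Tacton: a node inherits its parent's pulses,
    may append pulses, and may rescale tempo (pulse durations).
    Each pulse is (frequency_hz, waveform, duration_s)."""
    def __init__(self, parent=None, extra_pulses=(), tempo=1.0):
        self.parent, self.extra_pulses, self.tempo = parent, list(extra_pulses), tempo

    def render(self):
        pulses = deepcopy(self.parent.render()) if self.parent else []
        pulses += deepcopy(self.extra_pulses)
        # A faster tempo shortens every pulse in the inherited motif.
        return [(f, wave, dur / self.tempo) for (f, wave, dur) in pulses]

# Level 1: family rhythm on a sine wave.  Level 2: adds a higher-frequency
# square-wave pulse.  Level 3: the same pulses at a faster tempo.
error          = TactonNode(extra_pulses=[(250, "sine", 0.2), (250, "sine", 0.1)])
os_error       = TactonNode(parent=error, extra_pulses=[(400, "square", 0.1)])
overflow_error = TactonNode(parent=os_error, tempo=2.0)

print(overflow_error.render())
```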
4.3 Transformational Tactons
A third type of Tacton is the Transformational Tacton.
These have several properties, each represented by a different
tactile parameter. For example, if Transformational
Tactons were used to represent files in a computer interface
, the file type could be represented by rhythm, size by
frequency, and creation date by body location. Each file
type would be mapped to a unique rhythm. Therefore,
two files of the same type, and same size, but different
creation date would share the same rhythm and frequency,
but would be presented to a different body location.
If two files were of different types but the same size
they would be represented by different rhythms with the
same frequency.
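A minimal sketch of this mapping, with invented lookup tables standing in for real design choices, might look as follows:

```python
# Hypothetical Transformational Tacton: each data attribute maps to one
# independent tactile parameter (rhythm, frequency, body location).
RHYTHM_BY_TYPE  = {"text": [0.1, 0.1, 0.3], "image": [0.3, 0.1], "audio": [0.1, 0.3, 0.1]}
FREQ_BY_SIZE    = {"small": 100, "medium": 250, "large": 400}   # Hz
LOCATION_BY_AGE = {"today": "wrist", "this_week": "forearm", "older": "upper_arm"}

def file_tacton(file_type, size_class, age_class):
    return {
        "rhythm": RHYTHM_BY_TYPE[file_type],        # file type -> rhythm
        "frequency_hz": FREQ_BY_SIZE[size_class],   # size -> frequency
        "location": LOCATION_BY_AGE[age_class],     # creation date -> body location
    }

# Same type and size but different age: same rhythm and frequency,
# different body location.
print(file_tacton("text", "small", "today"))
print(file_tacton("text", "small", "older"))
```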
Uses for Tactons
We are interested in three areas of use for Tactons, although
there are many others where they have potential to
improve usability.
5.1 Enhancements of desktop interfaces
The first, and simplest, area of interest is in the addition
of Tactons to desktop graphical interfaces. The addition
of earcons to desktops has shown many advantages in
terms of reduced errors, reduced times to complete tasks
and lowered workload (Brewster, 1998a). One problem
with audio is that users believe that it may be annoying to
use (although no research has actually shown this to be
the case) and it has the potential to annoy others nearby
(for a discussion see (Brewster, 2002)). The addition of
Tactons to widgets has the same potential to indicate usability
problems but without the potential to annoy.
One reason for enhancing standard desktop interfaces is
that users can become overloaded with visual information
on large, high-resolution displays. In highly complex
graphical displays users must concentrate on one part of
the display to perceive the visual feedback, so that feedback
from another part may be missed. This becomes
very important in situations where users must notice and
deal with large amounts of dynamic data or output from
multiple applications or tasks. If information about secondary
tasks was presented through touch then users
could concentrate their visual attention on the primary
one but feel information about the others.
As a simple example, the display of a progress bar widget
could be presented tactually. Two sets of tactile pulses
could be used to indicate the current and end points of a
download. The time between the two pulses would indicate
the amount of time remaining, the closer the two
pulses the nearer the download is to finishing. The two
pulses could use different waveforms to ensure they were
not confused. Different rhythms for each pulse could be
used to indicate different types of downloads. If a more
sophisticated set of transducers on a belt around the waist
was available then the position of a pulse moving around
the body in a clockwise direction (starting from the front)
would give information about progress: when the pulse
was at the right side of the body the download would be
25% of the way through, when it was on the left hand
side 75%, and when it got back around to the front it
would be finished. There would be no need for any visual
presentation of the progress bar, allowing users to focus
their visual attention on the main task they are involved
with.
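A sketch of the clockwise mapping, assuming a hypothetical belt of 12 evenly spaced transducers, is given below; the transducer count and indexing are assumptions for illustration.

```python
def progress_transducer(fraction_complete, n_transducers=12):
    """Map download progress (0..1) to a transducer index on a waist belt,
    clockwise from the front: 0.25 -> right side, 0.5 -> back,
    0.75 -> left side, 1.0 -> back at the front."""
    return int(round(fraction_complete * n_transducers)) % n_transducers

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f, "->", progress_transducer(f))
```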
Tactons could also be used to enhance interactions with
buttons, scrollbars, menus, etc. to indicate when users are
on targets and when certain types of errors occur. Others
have shown that basic tactile feedback can improve
pointing and steering type interactions (Akamatsu et al.,
1995, Campbell et al., 1999). There are some commercial
systems that give simple tactile feedback in desktop user
interfaces, e.g. the software that comes with the Logitech
iFeel mouse (www.logitech.com). This provides basic
targeting: a brief pulse is played, for example, when a
user moves over a target. We believe there is much more
that can be presented with tactile feedback.
5.2 Visually impaired users
Tactons will be able to work alongside Braille in tactile
displays for blind and visually impaired users, in the same
way as earcons work alongside synthetic speech. They
will allow information to be delivered more efficiently. In
addition, hierarchical Tactons could help users navigate
around Braille media by providing navigation information
(Brewster, 1998b).
Figure 5: Hierarchical Tacton composition (Level 1: Error, played with a sine wave; Level 2: Operating system error and Execution error, adding a square-wave Tacton; Level 3: Overflow and Underflow, distinguished by fast and slow tempo).
One of our main interests is in using Tactons to improve
access to graphical information non-visually. Text can be
rendered in a relatively straightforward manner by speech
or Braille, but graphics are more problematic. One area
that we and others have focused on is visualisation for
blind people. Understanding and manipulating information
using visualisations such as graphs, tables, bar charts
and 3D plots is very common for sighted people. The
skills needed are learned early in school and then used
throughout life, for example, in analysing information or
managing home finances. The basic skills needed for creating
and manipulating graphs are necessary for all parts
of education and employment. Blind people have very
restricted access to information presented in these visual
ways (Edwards, 1995). As Wies et al. (2001) say "Inaccessibility
of instructional materials, media, and technologies
used in science, engineering, and mathematics
education severely restricts the ability of students with
little or no sight to excel in these disciplines". To allow
blind people to gain the skills needed for the workplace
new technologies are necessary to make visualisations
usable. Tactons provide another route through which information
can be presented.
Research has shown that using haptic devices is an effective
way of presenting graphical information non-visually
(Yu and Brewster, 2003, Wies et al., 2001, Van Scoy et
al., 2000). The most common approach has been to use
haptic devices to present graphs, tables or 3D plots that
users can feel kinaesthetically by tracing a line or shape
with a finger using a device like the PHANToM
(www.sensable.com). Lederman and Klatzky (1999)
have shown that removal of cutaneous input to the fingertip
impedes perception of edge direction, which is an essential
component of tracing a haptic line graph. This lack
of cutaneous stimulation leads to problems with navigation
(exploring using a single point of contact means it is
difficult to locate items as there is no context, which can
be given in a tactile display), exploring small scale features
(these would be perceived cutaneously on the finger
pad in real life), and information overload (all haptic information
is perceived kinaesthetically rather than being
shared with cutaneous perception). Incorporating a tactile
display into a force-feedback device will alleviate many
of these problems and potentially increase user efficiency
and comprehension of visualisations.
Tactons could be presented as the user moves the force-feedback
device over the visualisation. Dimensions of the
data can be encoded into a Tacton to give information
about the current point, using the parameters described in
Section 4. This would allow more data to be presented
more efficiently. For example, with multidimensional
data one dimension might be mapped to the frequency of
a pulse in a Tacton, another might map to rhythm and
another to body location. As the user moves about the
data he/she would feel the different parameters. In addition
to the finger pad, we can also include tactile displays
to other parts of the body (e.g. to the back) using spatially
distributed transducers to provide even more display area.
As long as this is done in a comprehensible manner users
will be able to gain access to their data in a much more
effective way than with current force-feedback only visualisation
tools.
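One plausible way to realise such a mapping is a simple linear rescaling of each data dimension onto a tactile parameter range; the ranges and parameter choices in the sketch below are assumptions made purely for illustration.

```python
def scale(value, lo, hi, out_lo, out_hi):
    """Linear rescale of one data dimension onto a tactile parameter range."""
    t = (value - lo) / (hi - lo)
    return out_lo + t * (out_hi - out_lo)

def point_tacton(x, y, z, x_range, y_range, z_range):
    # Dimension 1 -> pulse frequency (sensitivity peaks near 250 Hz),
    # dimension 2 -> pulse repetition rate (a simple stand-in for rhythm),
    # dimension 3 -> location index on a hypothetical 3x3 array on the back.
    return {
        "frequency_hz": scale(x, *x_range, 100, 400),
        "pulses_per_s": scale(y, *y_range, 1, 8),
        "location": int(scale(z, *z_range, 0, 8)),
    }

print(point_tacton(0.2, 55, 7, (0, 1), (0, 100), (0, 10)))
```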
5.3 Mobile and wearable devices
Our other main application area is mobile and wearable
device displays (for both sighted and blind people). Mobile
telephones and handheld computers are currently one
of the fastest growth areas of computing and this growth
will extend into more sophisticated, fully wearable computers
in the future. One problem with these devices is
their limited output capabilities. Their small displays easily
become cluttered with information and widgets and
this makes interface design difficult. In addition, users are
not always looking at the display of a device as they must
walk or navigate through their environment which requires
visual attention. One way to solve this problem is
to use other display modalities and so reduce demands on
visual display, or replace it if not available. Work has
gone into using speech and non-speech sounds to overcome
the display bottleneck. Tactile displays have great
potential here too but are much less well investigated.
Sound has many advantages but it can be problematic; in
loud environments it can be impossible to hear auditory
output from a device, while in quiet places the audio may be
disturbing to others nearby. Blind people often do not like
to wear headphones when outdoors as they mask important
environmental sounds. Tactile displays do not suffer
from these problems (although there may be other problems,
for example, perceiving tactile stimuli whilst running
due to the difficulties of keeping the transducers in
contact with the skin). Mobile telephones commonly have
a very simple point-contact tactile stimulator built-in that
can alert the user to a call. These are often only able to
produce pulses of different durations. A pin array would
be possible on such a device as the user will be holding it
in a hand when in use. Such a sophisticated tactile display
could do much more, e.g. it could give information on the
caller, replace or enhance items on the display (like icons,
progress indicators, games) or aid in the navigation of the
devices' menus so that the user does not need to look at
the screen.
In a wearable device users could have body mounted
transducers so that information can be displayed over
their body. In the simplest case this could be used to give
directional information by vibrating one side of the body
or other to indicate which way to turn (Tan and Pentland,
1997). A belt of transducers around the waist could give a
compass-like display of direction; a pulse could be played
continuously at north so the user can maintain orientation
after turning (useful when navigating in the dark) or at the
position around the waist corresponding to the direction
in which to head. A more sophisticated display might
give information about the user's context. For example,
presenting Tactons describing information such as the
type of building (shop, bank, office-block, house), the
type of shop (clothes, phones, food, furniture) the price-bracket
of a shop (budget, mid-range, expensive), or information
more related to the concerns of visually impaired
people, such as the number of stairs leading up to
the entrance (for firefighters, whose vision is impaired
due to smoke and flames, a tactile display could also provide
information on the location of rooms and exits in a
burning building). A tactile display could also present
information on stock market data (building on from the
work on tactile visualisation in the section above) so that
users could keep track of trades whilst away from the
office. Such tactile displays could also work alongside
auditory or visual ones.
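As a sketch of the compass-style display, assuming a hypothetical belt of eight transducers with transducer 0 at the front of the body, the heading-to-transducer mapping might be computed as follows:

```python
def heading_to_transducer(heading_deg, n_transducers=8):
    """Pulse the transducer nearest to north so the wearer can keep their
    orientation after turning.  Transducer 0 sits at the front, with the
    rest spaced clockwise around the waist."""
    north_relative = (-heading_deg) % 360          # body angle at which north falls
    sector = 360.0 / n_transducers
    return int(round(north_relative / sector)) % n_transducers

for h in (0, 45, 90, 180, 270):
    print(f"facing {h:3d} deg -> pulse transducer", heading_to_transducer(h))
```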
Future work and conclusions
This paper has laid out some of the foundations of information
display through Tactons. There is still much work
to be done to fully understand how they should be designed
and used. There are many lower level perceptual
questions to be addressed before higher level design issues
can be investigated. Many of the parameters of touch
described in Section 4 are not fully understood and the
full usable ranges of the parameters are not known. Studies
need to be undertaken to explore the parameter space
so that the relative importance of the different parameters
can be discovered.
Once the range of parameters is understood then the construction
of Tactons can be examined. Basic studies are
needed to understand how the parameters can be combined
to construct Tactons. Parameters which work well
alone may not work well when combined with others into
a Tacton. For example, one parameter may mask another.
When the basic design of Tactons is understood the composition
of simple Tactons into more complex messages,
encoding hierarchical information into Tactons, and their
learnability and memorability can be investigated. The
concurrent presentation of multiple Tactons must also be
studied. These studies will answer some of the main questions
regarding the usability of Tactons and a good understanding
of their design and usability will have been achieved.
Another important task is to investigate the strong relationships
between hearing and touch by examining cross-modal
uses of audio and tactile multimodal displays
(Spence and Driver, 1997), e.g. combined audio and tactile
cues, redundant tactile and audio cues, and moving
from an audio to a tactile presentation of the same information
(and vice versa). This is important in a mobile/wearable
context because at different times different
display techniques might be appropriate. For example,
audio might be inappropriate in a very noisy environment,
or tactile cues might be masked when the user is
running. One important issue is to identify the types of
information best presented in sound and those best presented
tactually. For example, the range of the vibrotactile
frequency response is roughly 20 times less than that
of the auditory system. Such discrepancies must be accounted
for when performing cross-modal mappings from
hearing to touch.
In conclusion, this paper has proposed a new form of tactile
output called Tactons. These are structured tactile
messages that can be used to communicate information.
Tactile output is underused in current interfaces and Tactons
provide a way of addressing this problem. The basic
parameters have been described and design issues discussed
. A technique is now available to allow tactile display
to form a significant part of the set of interaction and
display techniques that can be used to communicate with
users at the interface.
Acknowledgements
This research was conducted when Brewster was on sabbatical
in the Department of Computer Science at the
University of Canterbury, Christchurch, New Zealand.
Thanks to Andy Cockburn for his thoughts and comments
on this work. The sabbatical was funded by an Erskine
Fellowship from the University of Canterbury. The work
was part funded by EPSRC grant GR/S53244. Brown is
funded by an EPSRC studentship.
References
Akamatsu, M., MacKenzie, I. S. and Hasbrouq, T.
(1995): A comparison of tactile, auditory, and visual
feedback in a pointing task using a mouse-type device.
Ergonomics, 38, 816-827.
Barker, P. G. and Manji, K. A. (1989): Pictorial dialogue
methods. International Journal of Man-Machine Studies,
31, 323-347.
Blattner, M., Sumikawa, D. and Greenberg, R. (1989):
Earcons and icons: Their structure and common design
principles. Human Computer Interaction, 4, 11-44.
Brewster, S. A. (1998a): The design of sonically-enhanced
widgets. Interacting with Computers, 11,
211-235.
Brewster, S. A. (1998b): Using Non-Speech Sounds to
Provide Navigation Cues. ACM Transactions on
Computer-Human Interaction, 5, 224-259.
Brewster, S. A. (2002): Chapter 12: Non-speech auditory
output. In The Human Computer Interaction Handbook
(Eds, Jacko, J. and Sears, A.) Lawrence Erlbaum
Associates, pp. 220-239.
Brewster, S. A., Wright, P. C. and Edwards, A. D. N.
(1994): A detailed investigation into the effectiveness
of earcons. In Auditory Display (Ed, Kramer, G.)
Addison-Wesley, Reading, MA, pp. 471-498.
Campbell, C., Zhai, S., May, K. and Maglio, P. (1999):
What You Feel Must Be What You See: Adding Tactile
Feedback to the Trackpoint. Proceedings of IFIP
INTERACT'99, Edinburgh, UK, 383-390, IOS Press
Challis, B. and Edwards, A. D. N. (2001): Design principles
for tactile interaction. In Haptic Human-Computer
Interaction, Vol. 2058 (Eds, Brewster, S.
A. and Murray-Smith, R.) Springer LNCS, Berlin,
Germany, pp. 17-24.
Cholewiak, R. W. and Collins, A. (1996): Vibrotactile
pattern discrimination and communality at several
body sites. Perception and Psychophysics, 57, 724-737.
Craig, J. C. and Sherrick, C. E. (1982): Dynamic Tactile
Displays. In Tactual Perception: A Sourcebook (Ed,
Foulke, E.) Cambridge University Press, pp. 209-233.
Edwards, A. D. N. (Ed.) (1995) Extra-Ordinary Human-Computer
Interaction, Cambridge University Press,
Cambridge, UK.
Enriquez, M. J. and MacLean, K. (2003): The Hapticon
editor: A tool in support of haptic communication research.
Haptics Symposium 2003, Los Angeles, CA,
356-362, IEEE Press
Geldard, F. A. (1957): Adventures in tactile literacy. The
American Psychologist, 12, 115-124.
Gemperle, F., Kasabach, C., Stivoric, J., Bauer, M. and
Martin, R. (1998): Design for wearability. Proceedings
of Second International Symposium on Wearable
Computers, Los Alamitos, CA, 116-122, IEEE Computer
Society
Gill, J. (2003), Vol. 2003 Royal National Institute of the
Blind, UK.
Gunther, E. (2001): Skinscape: A Tool for Composition in
the Tactile Modality. Massachusetts Institute of Technology.
Masters of Engineering.
Gunther, E., Davenport, G. and O'Modhrain, S. (2002):
Cutaneous Grooves: Composing for the Sense of
Touch. Proceedings of Conference on New Instruments
for Musical Expression, Dublin, IR, 1-6,
Jansson, G. and Larsson, K. (2002): Identification of
Haptic Virtual Objects with Differing Degrees of
Complexity. Proceedings of Eurohaptics 2002, Edinburgh
, UK, 57-60, Edinburgh University
Kaczmarek, K., Webster, J., Bach-y-Rita, P. and Tompkins, W.
(1991): Electrotactile and vibrotactile displays
for sensory substitution systems. IEEE Transactions
on Biomedical Engineering, 38, 1-16.
Kurze, M. (1997): Rendering drawings for interactive
haptic perception. Proceedings of ACM CHI'97, Atlanta
, GA, 423-430, ACM Press, Addison-Wesley
Kurze, M. (1998): TGuide: a guidance system for tactile
image exploration. Proceedings of ACM ASSETS '98,
Marina del Rey, CA, ACM Press
Lederman, S. J. and Klatzky, R. L. (1999): Sensing and
Displaying Spatially Distributed Fingertip Forces in
Haptic Interfaces for Teleoperator and Virtual Environment
Systems. Presence: Teleoperators and Virtual
Environments, 8, 86-103.
Montagu, A. (1971): Touching: The Human Significance
of the Skin, Columbia University Press, New York.
Oakley, I., McGee, M., Brewster, S. A. and Gray, P. D.
(2000): Putting the feel in look and feel. Proceedings
of ACM CHI 2000, The Hague, Netherlands, 415-422,
ACM Press, Addison-Wesley
Rupert, A. (2000): Tactile situation awareness system:
proprioceptive prostheses for sensory deficiencies.
Aviation, Space and Environmental Medicine, 71, 92-99.
Sherrick, C. (1985): A scale for rate of tactual vibration.
Journal of the Acoustical Society of America, 78.
Shneiderman, B. (1998): Designing the user interface, 3rd
Ed. Addison-Wesley, Reading (MA).
Spence, C. and Driver, J. (1997): Cross-modal links in
attention between audition, vision and touch: implications
for interface design. International Journal of
Cognitive Ergonomics, 1, 351-373.
Stone, R. (2000): Haptic feedback: A potted history, from
telepresence to virtual reality. The First International
Workshop on Haptic Human-Computer Interaction,
Glasgow, UK, 1-7, Springer-Verlag Lecture Notes in
Computer Science
Summers, I. R., Chanter, C. M., Southall, A. L. and
Brady, A. C. (2001): Results from a Tactile Array on
the Fingertip. Proceedings of Eurohaptics 2001, Birmingham
, UK, 26-28, University of Birmingham
Tan, H. Z. and Pentland, A. (1997): Tactual Displays for
Wearable Computing. Proceedings of the First International
Symposium on Wearable Computers, IEEE
Tan, H. Z. and Pentland, A. (2001): Chapter 18: Tactual
displays for sensory substitution and wearable computers
. In Fundamentals of wearable computers and
augmented reality (Eds, Barfield, W. and Caudell, T.)
Lawrence Erlbaum Associates, Mahwah, New Jersey,
pp. 579-598.
van Erp, J. B. F. (2002): Guidelines for the use of active
vibro-tactile displays in human-computer interaction.
Proceedings of Eurohaptics 2002, Edinburgh, UK,
18-22, University of Edinburgh
Van Scoy, F., Kawai, T., Darrah, M. and Rash, C. (2000):
Haptic Display of Mathematical Functions for Teaching
Mathematics to Students with Vision Disabilities:
Design and Proof of Concept. Proceedings of the First
Workshop on Haptic Human-Computer Interaction,
Glasgow, UK, University of Glasgow
van Veen, H. and van Erp, J. B. F. (2001): Tactile information
presentation in the cockpit. In Haptic Human-Computer
Interaction (LNCS2058), Vol. 2058 (Eds,
Brewster, S. A. and Murray-Smith, R.) Springer, Berlin
, Germany, pp. 174-181.
Wall, S. A. and Harwin, W. S. (2001): A High Bandwidth
Interface for Haptic Human Computer Interaction.
Mechatronics. The Science of Intelligent Machines.
An International Journal, 11, 371-387.
Wall, S. A., Riedel, B., Crossan, A. and McGee, M. R.
(Eds.) (2002) Eurohaptics 2002 Conference Proceedings
, University of Edinburgh, Edinburgh, Scotland.
Wies, E., Gardner, J., O'Modhrain, S., Hasser, C. and
Bulatov, V. (2001): Web-based touch display for accessible
science education. In Haptic Human-Computer
Interaction, Vol. 2058 (Eds, Brewster, S.
A. and Murray-Smith, R.) Springer LNCS, Berlin, pp.
52-60. | tactile displays;multimodal interaction;Tactons;non-visual cues |
186 | TCP/IP Performance over 3G Wireless Links with Rate and Delay Variation | Wireless link losses result in poor TCP throughput since losses are perceived as congestion by TCP, resulting in source throttling. In order to mitigate this effect, 3G wireless link designers have augmented their system with extensive local retransmission mechanisms. In addition, in order to increase throughput, intelligent channel state based scheduling has also been introduced. While these mechanisms have reduced the impact of losses on TCP throughput and improved the channel utilization, these gains have come at the expense of increased delay and rate variability. In this paper, we comprehensively evaluate the impact of variable rate and variable delay on long-lived TCP performance. We propose a model to explain and predict TCP's throughput over a link with variable rate and/or delay. We also propose a network-based solution called Ack Regulator that mitigates the effect of variable rate and/or delay without significantly increasing the round trip time, while improving TCP performance by up to 40%. | INTRODUCTION
Third generation wide-area wireless networks are currently
being deployed in the United States in the form of 3G1X
technology [10] with speeds up to 144Kbps. Data-only enhancements
to 3G1X have already been standardized in the
3G1X-EVDO standard (also called High Data Rate or HDR)
with speeds up to 2Mbps [6]. UMTS [24] is the third generation
wireless technology in Europe and Asia with deployments
planned this year. As these 3G networks provide pervasive
internet access, good performance of TCP over these
wireless links will be critical for end user satisfaction.
While the performance of TCP has been studied extensively
over wireless links [3, 4, 15, 20], most attention has
been paid to the impact of wireless channel losses on TCP.
Losses are perceived as congestion by TCP, resulting in
source throttling and very low net throughput.
In order to mitigate the effects of losses, 3G wireless link
designers have augmented their system with extensive local
retransmission mechanisms. For example, link layer retransmission
protocols such as RLP and RLC are used in
3G1X [22] and UMTS [21], respectively. These mechanisms
ensure packet loss probability of less than 1% on the wireless
link, thereby mitigating the adverse impact of loss on TCP.
While these mechanisms mitigate losses, they also increase
delay variability. For example, as we shall see in Section 3,
ping latencies vary between 179ms to over 1 second in a
3G1X system.
In addition, in order to increase throughput, intelligent
channel state based scheduling has also been introduced.
Channel state based scheduling [7] refers to scheduling techniques
which take the quality of wireless channel into account
while scheduling data packets of different users at the
base station.
The intuition behind this approach is that
since the channel quality varies asynchronously with time
due to fading, it is preferable to give priority to a user with
better channel quality at each scheduling epoch.
While
strict priority could lead to starvation of users with inferior
channel quality, a scheduling algorithm such as proportional
fair [6] can provide long-term fairness among different
users. However, while channel-state based scheduling improves
overall throughput, it also increases rate variability.
Thus, while the impact of losses on TCP throughput have
been significantly reduced by local link layer mechanisms
and higher raw throughput achieved by channel-state based
scheduling mechanisms, these gains have come at the expense
of increased delay and rate variability. This rate and
delay variability translates to bursty ack arrivals (also called
ack compression) at the TCP source. Since TCP uses ack
clocking to probe for more bandwidth, bursty ack arrival
leads to release of a burst of packets from the TCP source.
When this burst arrives at a link with variable rate or delay
, it could result in multiple packet losses. These multiple
losses significantly degrade TCP's throughput.
In this paper, we make three main contributions. First,
we comprehensively evaluate the impact of variable rate and
variable delay on long-lived TCP performance. Second, we
propose a model to explain and predict TCP's throughput
over a link with variable rate and/or delay. Third, we propose
a network-based solution called Ack Regulator that mitigates
the effect of variable rate and/or delay without significantly
increasing the round trip time, thereby improving
TCP performance.
The remaining sections are organized as follows. In Section
2, we discuss related work. In Section 3, we present the
motivation for our work using traces from a 3G1X system.
In Section 4, we describe a model for computing the throughput
of a long-lived TCP flow over links with variable rate
and variable delay. We then present a simple network-based
solution, called Ack Regulator, to mitigate the effect of variable
rate/delay in Section 5. In Section 6, we present extensive
simulation results that compare TCP performance with
and without Ack Regulator, highlighting the performance
gains using the Ack Regulator when TCP is subjected to
variable rate and delay. Finally, in Section 7, we present
our conclusions.
RELATED WORK
In this section, we review prior work on improving TCP
performance over wireless networks. Related work on the
modeling of TCP performance is presented in Section 4.
A lot of prior work has focused on avoiding the case of
a TCP source misinterpreting packet losses in the wireless
link as congestion signals. In [4], a snoop agent is introduced
inside the network to perform duplicate ack suppression and
local retransmissions on the wireless link to enhance TCP
performance. In [3], the TCP connection is split into two
separate connections, one over the fixed network and the
second over the wireless link. The second connection can
recover from losses quickly, resulting in better throughput.
Link-layer enhancements for reducing wireless link losses including
retransmission and forward error correction have
been proposed in [20]. Link layer retransmission is now part
of both the CDMA2000 and UMTS standards [10, 24]. In
order to handle disconnections (a case of long-lived loss),
M-TCP has been proposed [8]. The idea is to send the last
ack with a zero-sized receiver window so that the sender can
be in persist mode during the disconnection. Link failures
are also common in Ad Hoc networks and techniques to improve
TCP performance in the presence of link failures have
been proposed in [11]. Note that none of these approaches
address specifically the impact of delay and rate variation
on TCP, which is the focus of this paper.
Several generic TCP enhancements with special applicability
to wireless links are detailed in [12, 13]. These include
enabling the Time Stamp option, use of large window
size and window scale option, disabling Van Jacobson
header compression, and the use of Selective Acknowledgments
(Sack). Large window size and window scaling are
necessary because of the large delay of wireless link while
Sack could help TCP recover from multiple losses without
the expensive timeout recovery mechanism.
Another issue with large delay variation in wireless links
is spurious timeouts where TCP unnecessarily retransmits
a packet (and lowers its congestion window to a minimum)
after a timeout, when the packet is merely delayed. In [13],
the authors refer to rate variability due to periodic allocation
and de-allocation of high-speed channels in 3G networks
as Bandwidth Oscillation. Bandwidth Oscillation can
also lead to spurious timeouts in TCP because as the rate
changes from high to low, the rtt value increases and a low
Retransmission Timeout (RTO) value causes a spurious retransmission
and unnecessarily forces TCP into slow start.
In [15], the authors conduct experiments of TCP over GSM
circuit channels and show that spurious timeouts are extremely
rare. However, 3G wireless links can have larger
variations than GSM due to processing delays and rate variations
due to channel state based scheduling.
Given the
increased variability on 3G packet channels, the use of TCP
time stamp option for finer tracking of TCP round trip times
and possibly the use of Eifel retransmission timer [16] instead
of the conventional TCP timer can help avoid spurious
timeouts.
As mentioned earlier, the effect of delay and rate variability
is ack compression and this results in increased burstiness
at the source.
Ack compression can also be caused
by bidirectional flows over regular wired networks or single
flow over networks with large asymmetry. This phenomenon
has been studied and several techniques have been proposed
to tackle the burstiness of ack compression.
In order to
tackle burstiness, the authors in [18] propose several schemes
that withhold acks such that there is no packet loss at the
bottleneck router, resulting in full throughput. However,
the round trip time is unbounded and can be very large.
In [23], the authors implement an ack pacing technique at
the bottleneck router to reduce burstiness and ensure fairness
among different flows. In the case of asymmetric channels
, solutions proposed [5] include ack congestion control
and ack filtering (dropping of acks), reducing source burstiness
by sender adaptation and giving priority to acks when
scheduling inside the network. However, the magnitude of
asymmetry in 3G networks is not large enough and can be
tolerated by TCP without ack congestion control or ack filtering
according to [12].
Note that, in our case, ack compression occurs because
of link variation and not due to asymmetry or bidirectional
flows. Thus, we require a solution that specifically adapts
to link variation. Moreover, the node at the edge of the 3G
wireless access network is very likely to be the bottleneck
router (given rates of 144Kbps to 2Mbps on the wireless
link) and is the element that is exposed to varying delays and
service rates. Thus, this node is the ideal place to regulate
the acks in order to improve TCP performance.
This is
discussed in more detail in the next section.
MOTIVATION
Figure 1: 3G network architecture. BS: Base Station; MD: Mobile Device; RNC: Radio Network Controller (RLP/RLC link layer retransmission); PDSN: Packet Data Service Node; SGSN: Serving GPRS Service Node; GGSN: Gateway GPRS Service Node; HA: Home Agent.
Figure 2: CDF of Ping Latencies (probability vs. ping latency in ms).
A simplified architecture of a 3G wireless network is shown
in Figure 1. The base stations are connected to a node called
the Radio Network Controller (RNC). The RNC performs
CDMA specific functions such as soft handoffs, encryption,
power control etc. It also performs link layer retransmission
using RLP(RLC) in 3G1X(UMTS) system. In the 3G1X
system, the RNC is connected to a PDSN using a GRE tunnel
(one form of IP in IP tunnel) and the PDSN terminates
PPP with the mobile device. If Mobile IP service is enabled,
the PDSN also acts as a Foreign Agent and connects to a
Home Agent. In the UMTS system, the RNC is connected
to a SGSN using a GTP tunnel (another form of IP in IP
tunnel); the SGSN is connected to a GGSN, again through
a GTP tunnel. Note that the tunneling between the various
nodes allows for these nodes to be connected directly or
through IP/ATM networks.
In this architecture, the RNC receives a PPP/IP packet
through the GRE/GTP tunnel from the PDSN/SGSN. The
RNC fragments this packet into a number of radio frames
and then performs transmission and local retransmission of
these radio frames using the RLP(RLC) protocol. The base
station (BS) receives the radio frames from the RNC and
then schedules the transmission of the radio frames on the
wireless link using a scheduling algorithm that takes the
wireless channel state into account. The mobile device receives
the radio frames and if it discovers loss of radio frames,
it requests local retransmission using the RLP(RLC) protocol
. Note that, in order to implement RLP(RLC), the RNC
needs to keep a per-user queue of radio frames. The RNC
can typically scale up to tens of base stations and thousands
of active users.
In order to illustrate the variability seen in a 3G system,
we obtained some traces from a 3G1X system. The system
consisted of an integrated BS/RNC, a server connected to
the RNC using a 10Mbps Ethernet and a mobile device connected
to the BS using a 3G1X link with 144Kbps downlink
in infinite burst mode and 8Kbps uplink. The infinite burst
mode implies that the rate is fixed and so the system only
had delay variability.
Figure 2 plots the cumulative distribution function (cdf)
of ping latencies from a set of 1000 pings from the server to
the mobile device (with no observed loss). While about 75%
of the latency values are below 200ms, the latency values go
all the way to over 1s with about 3% of the values higher
than 500ms.
In the second experiment, a TCP source at the server using
Sack with timestamp option transferred a 2MB file to
the mobile device. The MTU was 1500 bytes with user data
size of 1448 bytes. The buffer at the RNC was larger than
the TCP window size
1
. and thus, the transfer resulted in
no TCP packet loss and a maximal throughput of about
1
We did not have control over the buffer size at the RNC in
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
0
0.05
0.1
0.15
0.2
0.25
0.3
0.35
0.4
Prob.
Interack time Time (s)
0
0.5
1
1.5
2
2.5
3
3.5
0
20
40
60
80
100
120
Rtt (s)
Time (s)
(a)
TCP Ack Inter-arrival
(b) TCP rtt value
Figure 3:
3G Link Delay Variability
135Kb/s. The transmission time at the bottleneck link is
1.448
8/135 = 86ms. If the wireless link delay were constant
, the TCP acks arriving at the source would be evenly
spaced with a duration of 172ms because of the delayed ack
feature of TCP (every 2 packets are acked rather than every
packet). Figure 3(a) plots the cdf of TCP ack inter-arrival
time (time between two consecutive acks) at the server. As
can be seen, there is significant ack compression with over
10% of the acks arriving within 50ms of the previous ack.
Note that the ack packet size is 52 bytes (40 + timestamp)
and ack transmission time on the uplink is 52 × 8/8 = 52ms;
an interack spacing of less than 52ms is a result of uplink
delay variation.
Note that the delay variability and the resulting ack compression
did not cause any throughput degradation in our
system. This was due to the fact that the buffering in the
system was greater than the TCP window size resulting in
no buffer overflow loss. Figure 3(b) depicts the TCP round
trip time (rtt) values over time. Since the buffer at the RNC
is able to accommodate the whole TCP window, the rtt increases
to over 3s representing a case of over 30 packets in
the buffer at the RNC (30 × 0.086 ≈ 2.5s). Given an average
ping latency of 215ms and a transmission time of 86ms
for a 1500 byte packet, the bandwidth delay product of the
link is approximately (0.215 + 0.086) × 135/8 ≈ 5KB or about 3
packets. Thus, the system had a buffer of over 10 times the
bandwidth delay product. Given that we had only one TCP
flow in the system, a buffer of over 64KB is not a problem.
But, if every TCP flow is allocated a buffer of 64KB, the
buffer requirements at the RNC would be very expensive,
since the RNC supports thousands of active users.
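The figures quoted above can be reproduced with a few lines of arithmetic; the sketch below simply re-derives them from the constants reported in this section (135 kb/s downlink, 1448 data bytes per 1500-byte packet, ~215 ms average ping latency, about 30 packets queued at the RNC).

```python
link_bps  = 135_000
data_bits = 1448 * 8
mtu_bytes = 1500
ping_s    = 0.215

tx_time_s   = data_bits / link_bps                  # ~0.086 s per packet
bdp_bytes   = (ping_s + tx_time_s) * link_bps / 8   # ~5 KB, about 3 packets
queue_delay = 30 * tx_time_s                        # ~2.6 s for 30 queued packets

print(f"tx time {tx_time_s*1e3:.0f} ms, BDP {bdp_bytes/1e3:.1f} KB "
      f"(~{bdp_bytes/mtu_bytes:.1f} packets), queue delay {queue_delay:.1f} s")
```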
Even discounting the cost of large buffers, the inflated rtt
value due to the excessive buffering has several negative consequences
as identified in [15]. First, an inflated rtt implies
a large retransmission timeout value (rto). In the case of
multiple packet losses (either on the wireless link or in a
router elsewhere in the network), a timeout-based recovery
would cause excessive delay, especially if exponential backoff
gets invoked. Second, if the timestamp option is not used,
the rtt sampling rate is reduced and this can cause spurious
timeouts. Third, there is a higher probability that the data
in the queue becomes obsolete (e.g., due to the user aborting
the transmission), but the queue will still have to be drained
resulting in wasted bandwidth.
Thus, while excessive buffering at the RNC can absorb
the variability of the wireless links without causing TCP
throughput degradation, it has significant negative side effects
, making it an undesirable solution.
MODEL
In this section, we model the performance of a single long-lived
TCP flow over a network with a single bottleneck server
that exhibits rate variation based on a given general distribution
and a single wireless link attached to the bottleneck
server that exhibits delay variation based on another given
distribution.
We use a general distribution of rate and delay values for
the discussion in this section since we would like to capture
the inherent variation in rate and delay that is a characteristic
of the 3G wireless data environment. Given that the
wireless standards are constantly evolving, the actual rate
and delay distribution will vary from one standard or implementation
to another and is outside the scope of this paper.
Later, in Section 6, we will evaluate TCP performance over
a specific wireless link, the 3G1X-EVDO (HDR) system, using
simulation.
We would like to model TCP performance in the case of
variable rate and delay for two reasons. One, we would like
to understand the dynamics so that we can design an appropriate
mechanism to improve TCP performance. Two,
we would like to have a more accurate model that specifically
takes the burstiness caused by ack compression due to
rate/delay variability into account.
TCP performance modeling has been extensively studied
in the literature [1, 2, 9, 14, 17, 19]. Most of these models
assume constant delay and service rate at the bottleneck
router and calculate TCP throughput in terms of packet loss
probability and round trip time. In [19], the authors model
TCP performance assuming deterministic time between congestion
events [1]. In [17], the authors improve the throughput
prediction of [19] assuming exponential time between
congestion events (loss indications as Poisson). In our case,
ack compression and link variation cause bursty losses and
the deterministic or Poisson loss models are not likely to be
as accurate. In [9], the authors model an UMTS wireless
network by extending the model from [19] and inflating the
rtt value to account for the average additional delay incurred
on the wireless link. However, we believe this will not result
in an accurate model because 1) the rtt value in [19] is already
an end-to-end measured value and 2) the loss process
is much more bursty than the deterministic loss assumption
in [19]. In [2], the authors observe that mean values are
not sufficient to predict the throughput when routers have
varying bandwidth and show that increasing variance for the
same mean service rate decreases TCP throughput. However
, the approach is numerical, and provides little intuition
in the case of delay variance.
Our approach starts with the model in [14] which describes
how TCP functions in an "ideal" environment with
constant round trip time, constant service rate and suffers
loss only through buffer overflow. A brief summary of the
result from [14] is presented here before we proceed to our
model, which can be seen as an extension. We chose to extend
the model in [14] since it makes no assumption about
the nature of loss event process (which is highly bursty in
our case) and explicitly accounts for link delay and service
rate (which are variable in our case). For simplicity, we will
only discuss the analysis of TCP Reno. TCP Sack can be
analyzed similarly. We also assume that the sender is not
limited by the maximum receiver window; simple modifications
can be made to the analysis for handling this case.
Figure 4: TCP Congestion Window Evolution over time: (a) ideal TCP, constant delay; (b) TCP with variable delay.
Figure 4(a) shows how the TCP congestion window varies
in a constant rate and delay setting. The initial phase where
TCP tries to probe for available bandwidth is the Slow Start
phase.
After slow start, TCP goes to Congestion avoidance
phase. In the case of long-lived TCP flow, one can
focus only on the congestion avoidance phase. Let μ be the
constant service rate, τ the constant propagation delay, T
the minimum round trip time (τ + 1/μ) and B the buffer
size. The congestion window follows a regular saw-tooth
pattern, going from W_0 to W_max, where W_0 = W_max/2 and
W_max = μT + B + 1. Due to the regularity of each
saw-tooth, consider one such saw-tooth. Within a single
saw-tooth, the congestion avoidance phase is divided into
two epochs. In the first epoch, say epoch A, the congestion
window increases from W_0 to μT, in time t_A, with number
of packets sent n_A. In the second epoch, say epoch B, the
congestion window increases from μT to W_max, in time t_B,
with number of packets sent n_B. TCP throughput (ignoring
slow start) is simply given by (n_A + n_B)/(t_A + t_B), where

t_A = T(μT - W_0)                        (1)
n_A = (W_0 t_A + t_A^2/(2T))/T           (2)
t_B = (W_max^2 - (μT)^2)/(2μ)            (3)
n_B = μ t_B                              (4)
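For concreteness, Eqns. (1)-(4) can be evaluated directly. The sketch below does so in Python; it assumes, as the formulas do, that the buffer is no larger than the bandwidth-delay product (so that W_0 ≤ μT), and the example parameter values are arbitrary rather than taken from the paper.

```python
def ideal_tcp_throughput(mu, tau, B):
    """Long-lived TCP throughput over a constant-rate, constant-delay bottleneck,
    following Eqns. (1)-(4).  mu: service rate (packets/s), tau: propagation
    delay (s), B: buffer (packets).  Assumes W_0 <= mu*T."""
    T = tau + 1.0 / mu                  # minimum round trip time
    W_max = mu * T + B + 1              # window at which the buffer overflows
    W_0 = W_max / 2.0

    t_A = T * (mu * T - W_0)                      # epoch A: queue still empty
    n_A = (W_0 * t_A + t_A**2 / (2 * T)) / T
    t_B = (W_max**2 - (mu * T)**2) / (2 * mu)     # epoch B: queue builds until overflow
    n_B = mu * t_B
    return (n_A + n_B) / (t_A + t_B)

# Example: 100 packets/s, 400 ms propagation delay, 20-packet buffer.
print(ideal_tcp_throughput(mu=100.0, tau=0.4, B=20))   # close to, but below, 100
```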
This model, while very accurate for constant μ and T,
breaks down when the constant propagation and service rate
assumptions are not valid. Figure 4(b) shows how the congestion
window becomes much more irregular when there
is substantial variation in the wireless link delay. This is
because the delay variation and ack compression result in
multiple packet losses.
There are three main differences in the TCP congestion
window behavior under variable rate/delay from the traditional
saw-tooth behavior.
First, while the traditional
saw-tooth behavior always results in one packet loss due
to buffer overflow, we have possibilities for multiple packet
losses due to link variation. To account for this, we augment
our model with parameters p1, p2, p3 representing respectively
the conditional probability of a single packet loss,
double packet loss, and three or more packet losses. Note
that, p1 + p2 + p3 = 1 by this definition. Second, while
the loss in the traditional saw-tooth model always occurs
when window size reaches W_max = μT + B + 1, in our model
losses can occur at different values of window size, since
μ and τ are now both variables instead of constants. We capture
this by a parameter W_f = sqrt((1/N) Σ_{i=1}^{N} W_max,i^2), that is, the
square root of the second moment of the W_max values of each
cycle. The reason we do this instead of obtaining a simple
mean of W_max values is because throughput is related to
W_f quadratically (since it is the area under the curve in the
congestion window graph).
Figure 5: Congestion Window with multiple losses: (a) two packet loss; (b) three packet loss (cwnd in packets vs. time in s).
Third, due to the fact that we
have multiple packet losses in our model, we need to consider
timeouts and slow starts in our throughput calculation. We
represent the timeout duration by the T_0 parameter which
represents the average timeout value, similar to the timeout
parameter in [19].
We now model the highly variable congestion window behavior
of a TCP source under rate/delay variation. We first
use W_f instead of W_max. We approximate τ (the propagation
delay) by τ̂, the average link delay in the presence
of delay variability. We replace μ (the service rate) by μ̂,
the average service rate in the presence of rate variability.
Thus, T becomes T̂ = (τ̂ + 1/μ̂). Now consider three
different congestion window patterns: with probability p1,
single loss followed by congestion avoidance, with probability
p2, double loss followed by congestion avoidance, and
with probability p3, triple loss and timeout followed by slow
start and congestion avoidance (we assume that three or more
packet losses result in a timeout; this is almost always true
if the source is TCP Reno).
First, consider the single loss event in the congestion avoidance
phase. This is the classic saw-tooth pattern with two
epochs, as identified in [14]. Let us call these the A1 and B1
epochs. In epoch A1, the window size grows from W_01 to μ̂T̂
in time t_A1, with n_A1 packets transmitted. In
epoch B1, the window size grows from μ̂T̂ to W_f in time t_B1,
with n_B1 packets transmitted. Thus, with probability
p1, n_A1 + n_B1 packets are transmitted in time t_A1 + t_B1,
where
W_01 = (int)(W_f/2)    (5)
t_A1 = T̂(μ̂T̂ - W_01)    (6)
n_A1 = (W_01·t_A1 + t_A1²/(2T̂))/T̂    (7)
t_B1 = (W_f² - (μ̂T̂)²)/(2μ̂)    (8)
n_B1 = μ̂·t_B1    (9)
Next, consider the two-loss event. An example of this
event is shown in Figure 5(a). The trace is obtained using
the ns-2 simulation described in Section 6.
In this case,
after the first fast retransmit (around 130s), the source receives
another set of duplicate acks to trigger the second
fast retransmit (around 131s). This fixes the two losses, and
the congestion window starts growing from W_02. The second
retransmit is triggered by the new set of duplicate acks
sent in response to the first retransmission. Thus, the duration
between the first and second fast retransmits is the time required
for the first retransmission to reach the receiver (with
a full buffer) plus the time for the duplicate ack to return
to the sender.
² We assume that three or more packet losses result in a timeout; this is almost always true if the source is TCP Reno.
In other words, this duration can be approximated
by the average link delay with a full buffer, T̂ + B/μ̂ = t_R.
We have three epochs now: epoch t_R (time 130-131s)
with one retransmission and zero new packets, epoch
A2 (131-137s) with the window size growing from W_02 to μ̂T̂
in time t_A2, with n_A2 packets transmitted, and
epoch B1 (137-143s) as before. Thus, with probability p2,
n_A2 + n_B1 packets are transmitted in time t_R + t_A2 + t_B1, where
W_02 = (int)(W_01/2)    (10)
t_R = T̂ + B/μ̂    (11)
t_A2 = T̂(μ̂T̂ - W_02)    (12)
n_A2 = (W_02·t_A2 + t_A2²/(2T̂))/T̂    (13)
Finally, consider the three-loss event. An example of this
event is shown in Figure 5(b). In this case, after the first fast
retransmit, we receive another set of duplicate acks to trigger
the second fast retransmit. This does not fix the three
losses and TCP times out. Thus, we now have five epochs:
first is the retransmission epoch (100-101s) with time t_R and
zero new packets; second is the timeout epoch (101-103s) with
time T_0 and zero new packets; third is the slow start epoch
(103-106s), where the window grows exponentially up to the previous
ssthresh value of W_03 in time t_ss (Eqn. 15) with n_ss packets
transmitted (Eqn. 16)³; fourth is epoch A3
(106-111s), where the window size grows from W_03 to μ̂T̂ in
time t_A3 (Eqn. 17) with n_A3 packets transmitted
(Eqn. 18); and fifth is epoch B1 (111-118s), as before. Thus,
with probability p3, n_ss + n_A3 + n_B1 packets are transmitted
in time t_R + T_0 + t_ss + t_A3 + t_B1, where
W_03 = (int)(W_02/2)    (14)
t_ss = T̂·log2(W_03)    (15)
n_ss = W_03/T̂    (16)
t_A3 = T̂(μ̂T̂ - W_03)    (17)
n_A3 = (W_03·t_A3 + t_A3²/(2T̂))/T̂    (18)
Given that the different types of packet loss events are independent
and using p1 + p2 + p3 = 1, the average TCP throughput
can now be approximated by a weighted combination of
the three types of loss events as
[p3·(n_ss + n_A3) + p2·n_A2 + p1·n_A1 + n_B1] / [p3·(t_R + T_0 + t_ss + t_A3) + p2·(t_R + t_A2) + p1·t_A1 + t_B1]    (19)
If any of the epoch durations t above are less than 0, the respective epochs do not
occur, and we can use the above equation while setting the
respective n and t terms to zero.
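As a concrete illustration of Equations 5-19, the sketch below (our own code, not the authors') evaluates the variable-rate/variable-delay model from the inferred parameters; epochs with negative duration are simply set to zero, as noted above.

    # Illustrative evaluation of the variable rate/delay model (Eqns. 5-19).
    from math import log2

    def epoch(w_start, w_end, T):
        # Linear window growth of one packet per RTT from w_start up to w_end.
        t = T * (w_end - w_start)
        if t <= 0:                     # epoch does not occur
            return 0.0, 0.0
        n = (w_start * t + t ** 2 / (2 * T)) / T
        return t, n

    def model_throughput(mu_hat, tau_hat, buf, wf, t0, p1, p2, p3):
        T = tau_hat + 1.0 / mu_hat     # T-hat
        pipe = mu_hat * T              # mu-hat * T-hat
        w01 = int(wf / 2)                                    # Eqn. 5
        t_a1, n_a1 = epoch(w01, pipe, T)                     # Eqns. 6-7
        t_b1 = max((wf ** 2 - pipe ** 2) / (2 * mu_hat), 0)  # Eqn. 8
        n_b1 = mu_hat * t_b1                                 # Eqn. 9
        w02 = int(w01 / 2)                                   # Eqn. 10
        t_r = T + buf / mu_hat                               # Eqn. 11
        t_a2, n_a2 = epoch(w02, pipe, T)                     # Eqns. 12-13
        w03 = int(w02 / 2)                                   # Eqn. 14
        t_ss = T * log2(w03) if w03 > 1 else 0.0             # Eqn. 15
        n_ss = w03 / T                                       # Eqn. 16
        t_a3, n_a3 = epoch(w03, pipe, T)                     # Eqns. 17-18
        num = p3 * (n_ss + n_a3) + p2 * n_a2 + p1 * n_a1 + n_b1
        den = p3 * (t_r + t0 + t_ss + t_a3) + p2 * (t_r + t_a2) + p1 * t_a1 + t_b1
        return num / den               # Eqn. 19, in packets/s

    # Example: parameters from row 4 of Table 1 (T-hat = 517 ms, so tau-hat = 0.477 s),
    # with p3 = 1 - p1 - p2. Multiplying by 8 Kb/packet gives close to the
    # 137.0 Kb/s reported for this case in Table 2.
    thr = model_throughput(mu_hat=25.0, tau_hat=0.477, buf=10, wf=18.95,
                           t0=1.92, p1=0.339, p2=0.279, p3=0.382)
    print(thr * 8.0)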
In this paper, we infer parameters such as p1, p2, p3, W_f,
and T_0 from the traces. Models such as [19] also infer the
loss probability, round trip time, and timeout durations from
traces.
Table 1 lists the various parameters used by the different
models for simulations with rate and delay variability. We
use a packet size of 1000 bytes and a buffer of 10 packets, which represents
the product of the average bandwidth and the average
delay, and we ensure that the source is not window limited.
TD and TO denote the number of loss events that are of
the triple-duplicate and timeout type, respectively; these
values are used by the models in [19] and [17].
³ Using analysis similar to [14] and assuming adequate buffering so that there is no loss in slow start.
Table 1: Simulation and Model parameters
Item  Rate (Kb/s)  Delay (ms)   pkts    TD   TO   T_0 (s)  rtt (ms)  p1     p2     W_f    T̂ (ms)  μ̂ (pkt/s)
1     200          400          89713   401  1    1.76     616.2     0.998  0.000  22.00  440      25.0
2     200          380+e(20)    83426   498  1    1.71     579.3     0.639  0.357  21.38  442      25.0
3     200          350+e(50)    78827   489  12   1.79     595.8     0.599  0.367  21.24  461      25.0
4     200          300+e(100)   58348   496  114  1.92     606.0     0.339  0.279  18.95  517      25.0
5     u(200,20)    400          82180   504  1    1.75     578.1     0.535  0.460  21.61  400      24.74
6     u(200,50)    400          74840   517  29   1.80     579.9     0.510  0.403  20.52  400      23.34
7     u(200,75)    400          62674   516  81   1.86     585.9     0.398  0.348  19.05  400      20.93
8     u(200,50)    350+e(50)    70489   507  43   1.81     595.7     0.496  0.377  20.15  459      23.34
9     u(200,75)    300+e(100)   53357   497  93   2.03     635.7     0.404  0.298  17.78  511      20.93
Table 2: Simulation and Model throughput values (Kb/s)
Item  Simulator Goodput  Model 1 [19] (accu.)  Model 2 [17] (accu.)  Model 3 [Eqn. 19] (accu.)
1     199.8              228.5 (0.86)          201.9 (0.99)          199.8 (1.0)
2     185.4              208.0 (0.88)          186.0 (1.0)           186.0 (1.0)
3     175.1              195.5 (0.88)          177.2 (0.99)          180.9 (0.97)
4     129.4              145.3 (0.88)          153.7 (0.81)          137.0 (0.94)
5     182.5              205.2 (0.88)          184.6 (0.99)          181.3 (0.99)
6     166.2              186.0 (0.88)          174.6 (0.95)          165.2 (0.99)
7     139.2              158.4 (0.86)          163.4 (0.83)          137.2 (0.99)
8     156.5              174.6 (0.88)          166.5 (0.94)          160.2 (0.97)
9     118.4              134.0 (0.87)          142.6 (0.80)          125.0 (0.94)
The simulation is run for 3600 seconds. We simulate delay and rate variability
with exponential and uniform distributions, respectively
(u(a, b) in the table represents a uniform distribution
with mean a and standard deviation b while e(a) represents
an exponential distribution with mean a). The details of the
simulation are presented in Section 6.
Table 2 compares the simulated throughput for the different
distributions of rate and delay variability at the server
with the throughput predicted by the exact equation of the
model in [19], by the Poisson model in [17], and by Equation 19.
The accuracy of the prediction, defined as 1 minus the ratio
of the absolute difference between the model and simulation throughput
values to the simulation throughput value, is listed in
parentheses. As the last column shows, the match between
our model and simulation is extremely accurate when
the delay/rate variation is small and the match is still well
over 90% even when the variation is large. The Poisson loss
model used in [17] performs very well when the variability
is low but, understandably, does not predict well when
variability increases. The deterministic loss model seems to
consistently overestimate the throughput.
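For example (our own quick check of the definition), the accuracy values in row 1 of Table 2 follow directly from the simulator goodput of 199.8 Kb/s:

    # Prediction accuracy as defined above: 1 - |model - simulation| / simulation.
    def accuracy(model_kbps, sim_kbps):
        return 1.0 - abs(model_kbps - sim_kbps) / sim_kbps

    for model in (228.5, 201.9, 199.8):          # Models 1, 2 and 3 for item 1
        print(round(accuracy(model, 199.8), 2))  # 0.86, 0.99, 1.0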
From Table 1, one can clearly see the impact of delay and
rate variability. As the variability increases, the probability
of double loss, p2, and three or more losses, p3=(1-p2-p1),
start increasing while the goodput of the TCP flow starts
decreasing. For example, comparing case 1 to case 4, p1
decreases from 0.998 to 0.339 while p3 increases. Increases
in p2 and p3 come about because, when the product μ̂T̂
decreases, a pipe that used to accommodate more packets
suddenly becomes smaller, causing additional packet losses.
Given that n_A1/t_A1 > n_A2/(t_R + t_A2) > (n_ss + n_A3)/(t_R + T_0 + t_ss + t_A3),
any solution that improves TCP performance
must reduce the occurrence of multiple packet losses, p2 and
p3. We present a solution that tries to achieve this in the
next section.
ACK REGULATOR
In this section, we present our network-based solution
for improving TCP performance in the presence of varying
bandwidth and delay. The solution is designed for improving
the performance of TCP flows towards the mobile host
(for downloading-type applications) since links like HDR are
designed for such applications. The solution is implemented
at the wireless edge, specifically at the RNC, at the layer
just above RLP/RLC. Note that, in order to implement the
standard-based RLP/RLC, the RNC already needs to maintain
a per-user queue. Our solution requires a per-TCP-flow
queue, which should not result in significant additional overhead
given the low-bandwidth nature of the wireless environment. We also assume that the data and ack packets go
through the same RNC; this is true in the case of 3G networks
where the TCP flow is anchored at the RNC because
of the presence of soft handoff and RLP.
We desire a solution that is simple to implement and remains
robust across different implementations of TCP. To
this end, we focus only on the congestion avoidance phase
of TCP and aim to achieve the classic saw-tooth congestion
window behavior even in the presence of varying rates and
delays by controlling the buffer overflow process in the bottleneck
link. We also assume for this discussion that every
packet is acknowledged (the discussion can be easily modified
to account for delayed acks where single ack packets
acknowledge multiple data packets).
Our solution is called the Ack Regulator since it regulates
the flow of acks back to the TCP source. The intuition behind
the regulation algorithm is to avoid any buffer overflow
loss until the congestion window at the TCP source reaches
a pre-determined threshold and beyond that, allow only a
single buffer overflow loss. This ensures that the TCP source
operates mainly in the congestion avoidance phase with congestion
window exhibiting the classic saw-tooth behavior.
Before we present our solution, we describe two variables
that will aid in the presentation of our solution.
ConservativeMode: the mode of operation during which,
each time an ack is sent back towards the TCP source,
there is buffer space for at least two data packets from
the source.
Figure 6: Ack Regulator implementation. Per-flow data and ack queues are maintained at the RNC between the wireline and wireless networks; QueueLength and QueueLim denote the data queue occupancy and limit. DataSeqNoLast (DL): largest sequence number of the last data packet received; DataSeqNoFirst (DF): largest sequence number of the last data packet sent; AckSeqNoLast (AL): largest sequence number of the last ack packet received; AckSeqNoFirst (AF): largest sequence number of the last ack packet sent.
Note that if TCP operates in the congestion avoidance
phase, there would be no buffer overflow loss as long as the
algorithm operates in conservative mode. This follows from
the fact that, during congestion avoidance phase, TCP increases
its window size by at most one on reception of an
ack. This implies that on reception of an ack, TCP source
sends either one packet (no window increase) or two packets
(window increase). Therefore, if there is space for at least
two packets in the buffer at the time of an ack being sent
back, there can be no packet loss.
AckReleaseCount: The sum of total number of acks
sent back towards the source and the total number of
data packets from the source in transit towards the
RNC due to previous acks released, assuming TCP
source window is constant.
AckReleaseCount represents the number of packets that
can be expected to arrive in the buffer at the RNC assuming
that the source window size remains constant. Thus, buffer
space equal to AckReleaseCount must be reserved whenever
a new ack is sent back to the source if buffer overflow is to
be avoided.
On Enque of Ack / Deque of data packet:
1.  AcksSent=0;
2.  BufferAvail=QueueLim-QueueLength;
3.  BufferAvail-=(AckReleaseCount+ConservativeMode);
4.  if (BufferAvail>=1)
5.    if (AckSeqNoLast-AckSeqNoFirst<BufferAvail)
5.1     AcksSent+=(AckSeqNoLast-AckSeqNoFirst);
5.2     AckSeqNoFirst=AckSeqNoLast;
      else
5.3     AckSeqNoFirst+=BufferAvail;
5.4     AcksSent+=BufferAvail;
5.5   Send acks up to AckSeqNoFirst;
Figure 7: Ack Regulator processing at the RNC
Figure 6 shows the data and ack flow and the queue variables
involved in the Ack Regulator algorithm, which is presented
in Figure 7. We assume for now that the AckReleaseCount
and ConservativeMode variables are as defined
earlier. We later discuss how these variables are updated.
The Ack Regulator algorithm runs on every transmission of
a data packet (deque) and every arrival of an ack packet
(enque). The instantaneous buffer availability in the data
queue is maintained by the BufferAvail variable (line 2).
BufferAvail is then reduced by the AckReleaseCount and
the ConservativeMode variables (line 3).
Depending on the value of the ConservativeMode variable
(1 or 0), the algorithm operates in one of two modes, a conservative
mode or a non-conservative mode, respectively. In the conservative
mode, an extra buffer space is reserved in the data
queue to ensure that there is no loss even if the TCP congestion
window is increased by 1, while, in the non-conservative
mode, a single packet loss occurs if TCP increases its congestion
window by 1. Now, after taking the AckReleaseCount and
ConservativeMode variables into account, if there is at least
one buffer space available (line 4) and the number of acks
present in the ack queue (AckSeqNoLast - AckSeqNoFirst) is
less than BufferAvail, all those acks are sent to the source
(lines 5.1, 5.2); otherwise only BufferAvail acks
are sent to the source (lines 5.3, 5.4).
Note that the actual transmission of acks (line 5.5) is not
presented here. The transmission of AcksSent acks can be
performed one ack at a time or acks can be bunched together
due to the cumulative nature of TCP acks. However, care
must be taken to preserve the duplicate acks since the TCP
source relies on the number of duplicate acks to adjust its
congestion window. Also, whenever three or more duplicate
acks are sent back, it is important that one extra buffer space
be reserved to account for the fast retransmission algorithm.
Additional buffer reservations of two packets to account for
the Limited Transmit algorithm [12] can also be provided
for, if necessary.
1. Initialize ConservativeMode=1; α = 2
2. On Enque of ack packet:
   if ((DataSeqNoLast-AckSeqNoFirst)>α*QueueLim)
     ConservativeMode=0;
3. On Enque and Drop of data packet:
   ConservativeMode=1;
4. On Enque/Deque of data packet:
   if (((DataSeqNoLast-AckSeqNoFirst)<α*QueueLim/2)
     OR (DataQueueLength==0))
     ConservativeMode=1;
Figure 8: ConservativeMode updates
We now present the algorithm (Figure 8) for updating the
ConservativeMode variable which controls the switching of
the Ack Regulator algorithm between the conservative and
the non-conservative modes. The algorithm starts in conservative
mode (line 1). Whenever a targeted TCP window
size is reached (in this case, 2*QueueLim) , the algorithm
is switched into non-conservative mode (line 2). TCP Window
Size is approximated here by the difference between the
largest sequence number in the data queue and the sequence
number in the ack queue. This is a reasonable approximation
in our case since the wireless link is likely the bottleneck
and most (if not all) of the queuing is done at the RNC.
When operating in the non-conservative mode, no additional
buffer space is reserved. This implies that there will be a single
loss the next time the TCP source increases its window
size. At the detection of the packet loss, the algorithm again
switches back to the conservative mode (line 3). This ensures
that losses are of the single loss variety as long as the
estimate of AckReleaseCount is conservative. Line 4 in the
algorithm results in a switch back into conservative mode
whenever the data queue length goes to zero or whenever
the TCP window size is halved. This handles the case when
TCP reacts to losses elsewhere in the network and the Ack
Regulator can go back to being conservative. Note that,
if the TCP source is ECN capable, instead of switching to
non-conservative mode, the Ack Regulator can simply mark
the ECN bit to signal the source to reduce its congestion
window, resulting in no packet loss.
1. Initialize AckReleaseCount=0;
2. On Enque of Ack/Deque of data packet:
   (after processing in Fig 7)
   AckReleaseCount+=AcksSent;
3. On Enque of data packet:
   if (AckReleaseCount>0)
     AckReleaseCount--;
4. On Deque of data packet:
   if (DataQueueLength==0)
     AckReleaseCount=0;
Figure 9: AckReleaseCount updates
We finally present the algorithm for updating the AckReleaseCount
variable in Figure 9. Since AckReleaseCount
estimates the expected number of data packets that are arriving
and reserves buffer space for them, it is important to
get an accurate estimate. An overestimate of AckReleaseCount
would result in unnecessary reservation of buffers that
won't be occupied, while an underestimate of AckReleaseCount
can lead to buffer overflow loss(es) even in conservative
mode due to inadequate reservation.
With the knowledge of the exact version of the TCP source
and the round trip time from the RNC to the source, it is
possible to compute an exact estimate of AckReleaseCount.
However, since we would like to be agnostic to TCP version
as far as possible and also be robust against varying round
trip times on the wired network, our algorithm tries to maintain
a conservative estimate of AckReleaseCount. Whenever
we send acks back to the source, we update AckReleaseCount
by that many acks (line 2). Likewise, whenever a data
packet arrives into the RNC from the source, we decrement
the variable while ensuring that it does not go below zero
(line 3).
While maintaining a non-negative AckReleaseCount in
this manner avoids underestimation, it also can result in
unbounded growth of AckReleaseCount leading to significant
overestimation as errors accumulate. For example, we
increase AckReleaseCount whenever we send acks back to
the source; however, if TCP is reducing its window size due
to loss, we cannot expect any data packets in response to
the acks being released. Thus, over time, AckReleaseCount
can grow in an unbounded manner. In order to avoid this
scenario, we reset AckReleaseCount to zero (line 4) whenever
the data queue is empty. Thus, while this reset operation
is necessary for synchronizing the real and estimated
AckReleaseCount after a loss, it is not a conservative mechanism
in general, since an AckReleaseCount of zero implies
that no buffer space is currently reserved for any incoming
data packets that are unaccounted for. However, by doing
the reset only when the data queue is empty, we significantly
reduce the chance of the unaccounted data packets
causing a buffer overflow loss. We discuss the impact of this
estimation algorithm of AckReleaseCount in Section 6.6.
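Putting Figures 7-9 together, the following is a compact, simplified sketch of the per-flow Ack Regulator state (our own illustrative code, not the authors' implementation; the event hooks are merged for brevity and alpha is the α = 2 threshold parameter).

    # Simplified per-flow Ack Regulator state, following Figures 7-9.
    class AckRegulator:
        def __init__(self, queue_lim, alpha=2):
            self.queue_lim = queue_lim       # QueueLim: data-queue limit (packets)
            self.alpha = alpha               # window-threshold multiplier
            self.conservative = 1            # ConservativeMode (Fig. 8, line 1)
            self.ack_release_count = 0       # AckReleaseCount (Fig. 9, line 1)
            self.ack_seq_first = 0           # AckSeqNoFirst: last ack released to the source
            self.ack_seq_last = 0            # AckSeqNoLast: last ack received from the mobile

        def _release(self, queue_length):
            # Figure 7: run on every ack enqueue and every data-packet dequeue.
            avail = self.queue_lim - queue_length
            avail -= self.ack_release_count + self.conservative
            sent = 0
            if avail >= 1:
                sent = min(self.ack_seq_last - self.ack_seq_first, avail)
                self.ack_seq_first += sent
            self.ack_release_count += sent   # Fig. 9, line 2
            return sent                      # number of acks forwarded to the source

        def _window_checks(self, data_seq_last, queue_length):
            # Figure 8, lines 2 and 4: window approximated by DataSeqNoLast - AckSeqNoFirst.
            window = data_seq_last - self.ack_seq_first
            if window > self.alpha * self.queue_lim:
                self.conservative = 0        # target window reached: allow one overflow loss
            if window < self.alpha * self.queue_lim / 2 or queue_length == 0:
                self.conservative = 1        # window halved or queue drained: be conservative

        def on_ack_from_mobile(self, ack_seq, data_seq_last, queue_length):
            self.ack_seq_last = max(self.ack_seq_last, ack_seq)
            self._window_checks(data_seq_last, queue_length)
            return self._release(queue_length)

        def on_data_from_source(self, dropped):
            if self.ack_release_count > 0:   # Fig. 9, line 3: an expected packet arrived
                self.ack_release_count -= 1
            if dropped:                      # Fig. 8, line 3: overflow loss detected
                self.conservative = 1

        def on_data_sent_to_mobile(self, data_seq_last, queue_length):
            if queue_length == 0:            # Fig. 9, line 4: resynchronize the estimate
                self.ack_release_count = 0
            self._window_checks(data_seq_last, queue_length)
            return self._release(queue_length)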
Figure 10: Simulation topology. TCP sources S1..Sn connect to the RNC over wireline links (L Mb/s, D ms); the RNC connects through a virtual node V to the TCP sinks M1..Mn over a forward wireless channel (FR Mb/s, FD ms) and a reverse wireless channel (RR Mb/s, RD ms).
Finally, we assume that there is enough buffer space for
the ack packets in the RNC. The maximum number of ack
packets is the maximum window size achieved by the TCP
flow (α*QueueLim in our algorithm). Ack packets do not
have to be buffered as is, since storing the sequence numbers
is sufficient (however, care should be taken to preserve
duplicate ack sequence numbers as is). Thus, memory requirement
for ack storage is very minimal.
SIMULATION RESULTS
In this section, we present detailed simulation results comparing
the performance of TCP Reno and TCP Sack, in the
presence and absence of the Ack Regulator. First, we study
the effect of variable bandwidth and variable delay using
different distributions on the throughput of a single long-lived
TCP flow. Next, we present a model for 3G1X-EVDO
(HDR) system (which exhibits both variable rate and variable
delay), and evaluate the performance of a single TCP
flow in the HDR environment. Then, we present the performance
of multiple TCP flows sharing a single HDR wireless
link. Finally, we briefly discuss the impact of different parameters
affecting the behavior of Ack Regulator.
All simulations are performed using ns-2. The simulation
topology used is shown in Figure 10. S_i, i = 1..n, corresponds
to the set of TCP source nodes sending packets to the set of
TCP sink nodes M_i, i = 1..n. Each set of S_i, M_i nodes
forms a TCP pair. The RNC is connected to the M_i nodes
through a V (virtual) node for simulation purposes. L, the
bandwidth between S_i and the RNC, is set to 100Mb/s, and
D is set to 1ms except in cases where D is explicitly varied.
The forward wireless channel is simulated as having rate FR
and delay FD, and the reverse wireless channel has rate RR
and delay RD.
Each simulation run lasts for 3600s (1hr) unless otherwise
specified and all simulations use packet size of 1KB. TCP
maximum window size is set to 500KB. Using such a large
window size ensures that TCP is never window limited in
all experiments except in cases where the window size is
explicitly varied.
6.1 Variable Delay
In this section, the effect of delay variation is illustrated by
varying FD, the forward link delay. Without modification,
the use of a random link delay in the simulation will result
in out-of-order packets, since a packet transmitted later with
lower delay can overtake a packet transmitted earlier with
higher delay. However, since delay variability in our model
is caused by factors that do not result in packet reordering
(e.g., processing time variation) and RLP delivers packets in
sequence, the simulation code is modified such that packets
cannot reach the next hop until the packet transmitted earlier
has arrived. This modification applies to all simulations
with variable link delay.
Figure 11: Throughput with variable delay, e(x)+400-x. (a) TCP throughput (kb/s) vs. delay variance; (b) throughput (kb/s) vs. buffer size (packets), BDP = 10. Curves: Reno, Reno w/AR, Sack, Sack w/AR.
Figure 11(a) shows the throughput for a single TCP flow (n =
1) for FR = 200Kb/s and RR = 64Kb/s. FD has an exponential
distribution with a mean that varies from 20ms
to 100ms, and RD = 400ms - mean(FD), so that the average
FD+RD is maintained at 400ms. The buffer size on the
bottleneck link for each run is set to 10 packets, the product of
the mean throughput (200Kb/s, or 25 pkt/s) and the mean
link delay (0.4s). This product will be referred to as the
bandwidth-delay product (BDP) in later sections.
Additional
delay distributions like uniform, normal, lognormal,
and Poisson were also experimented with. Since the results
are similar, only plots for an exponential delay distribution
are shown.
As expected, when the delay variation increases, throughput
decreases for both TCP Reno and TCP Sack. By increasing
the delay variance from 20 to 100, throughput of
TCP Reno decreases by 30% and TCP Sack decreases by
19%. On the other hand, TCP Reno and TCP Sack flows
that are Ack Regulated are much more robust, and their
throughput decreases by only 8%. Relative to one another,
the Ack Regulator performs up to 43% better than TCP
Reno and 19% better than TCP Sack. Another interesting
result is that Ack Regulator delivers the same throughput irrespective
of whether the TCP source is Reno or Sack. This
is understandable given the fact that the Ack Regulator tries
to ensure that only single buffer overflow loss occurs and in
this regime, Reno and Sack are known to behave similarly.
This property of the Ack Regulator is extremely useful since,
for a flow to use TCP Sack, both the sender and receiver need
to be upgraded. Given that there is still a significant number
of web servers that have not yet been upgraded to TCP
Sack [28], deployment of Ack Regulator would ensure excellent
performance irrespective of the TCP version running.
Figure 11(b) shows how throughput varies with buffer size
with the same set of parameters except for FD, which now has
a fixed mean of 50ms (exponentially distributed). Even
with a very small buffer of 5 packets (0.5 BDP), Ack Regulator
is able to maintain a throughput of over 80% of the
maximum throughput of 200Kb/s. Thus, Ack Regulator delivers
robust throughput performance across different buffer
sizes. This property is very important in a varying rate and
delay environment of a wireless system, since it is difficult to
size the system with an optimal buffer size, given that the
BDP also varies with time. For a buffer of 4 packets, the
improvement over TCP Reno and Sack is about 50% and
24% respectively. As buffer size increases, the throughput
difference decreases. With buffer size close to 20 packets
(2 BDP), TCP Sack performs close to Ack Regulated flows,
while improvement over TCP Reno is about 4%.
Finally, in Table 3, we list parameter values from the simulation for a delay variance of 100.
Table 3: Parameters from simulation for variance = 100
Item      Rate (Kb/s)  TD   TO   p1    p2   p3    W_f
Reno      129          496  114  0.34  0.3  0.38  19
Reno+AR   184          302  8    0.98  0.0  0.02  24
Sack      160          434  4    0.99  0.0  0.01  19
Sack+AR   184          302  8    0.97  0.0  0.03  24
Figure 12: Throughput with variable bandwidth, u(200,x). (a) Throughput (kb/s) vs. rate variance; (b) throughput (kb/s) vs. buffer size (packets), BDP = 9. Curves: Reno, Reno w/AR, Sack, Sack w/AR.
First, consider Reno and Reno with Ack Regulator (first two rows). It is clear that
Ack Regulator is able to significantly reduce the conditional
probability of multiple losses, p2 and p3, as well as the absolute
number of loss events (TD and TO), resulting in substantial
gains over Reno. Next, consider Sack and Sack with
Ack Regulator (last two rows). In this case, we can see that
Sack is very effective in eliminating most of the timeout occurrences
. However, Ack Regulator is still able to reduce
the absolute number of loss events by allowing the congestion
window to grow to higher values (24 vs 19), resulting
in throughput gains.
6.2 Variable Bandwidth
In this section, we vary the link bandwidth, FR. Figure
12(a) shows the throughput for a single TCP flow. FR is
uniformly distributed with a mean of 200 Kb/s, and the variance
is varied from 20 to 75. FD = 200ms, RR = 64Kb/s
and RD = 200ms. The buffer size on the bottleneck link
for each run is 10 packets. Again, we have experimented with other
bandwidth distributions but, due to lack of space, only the uniform
distribution is shown. Note that, with a variable rate,
the maximum throughput achievable is different from the
mean rate. For a uniform distribution, a simple closed-form
formula for the maximum throughput is 1/E[1/x] = (b - a)/(ln b - ln a),
where b is the maximum rate and a is the minimum rate.
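For example (our own check), for u(200,50) the bounds are 200 ± 50·sqrt(3) Kb/s, and the formula gives roughly the 186.7 Kb/s maximum quoted in Section 6.3:

    # Maximum achievable throughput for a rate uniformly distributed on [a, b]:
    # the harmonic mean 1/E[1/x] = (b - a) / (ln b - ln a).
    from math import log, sqrt

    def uniform_max_throughput(mean, std):
        half_width = std * sqrt(3.0)         # for U(a, b), std = (b - a) / sqrt(12)
        a, b = mean - half_width, mean + half_width
        return (b - a) / (log(b) - log(a))

    print(uniform_max_throughput(200.0, 50.0))   # about 186.7 Kb/s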
When the rate variance increases, throughput of TCP
Reno decreases as expected. Compared to TCP Reno, Ack
Regulator improves the throughput by up to 15%. However
, TCP Sack performs very well and has almost the same
throughput as Ack Regulated flows. Based on the calculations
for maximum throughput discussed before, it can be
shown that all flows except Reno achieve maximum throughput
. This shows that, as long as the rate variation is not too large,
TCP Sack is able to handle the variability. However, for
very large rate variations (e.g. rate with lognormal distribution
and a large variance), the performance of TCP Sack
is worse than when Ack Regulator is present.
Figure 12(b) shows how the throughput varies with buffer
size. Note that, with a lower throughput, the bandwidth-delay
product is smaller than 10 packets. Again, Ack Regulated
TCP flows perform particularly well when the buffer size is
small. With buffer size of 5, the improvement over TCP
Sack is 40%.
Figure 13: Throughput and rtt for u(200,50), 350+e(50). (a) Throughput (kb/s) vs. buffer size (packets); (b) rtt (sec) vs. buffer size (packets); BDP = 9. Curves: Reno, Reno w/AR, Sack, Sack w/AR.
6.3 Variable Delay and Bandwidth
In this section, we vary both the bandwidth and delay of
the wireless link. FR is uniformly distributed with a mean of
200 Kb/s and a variance of 50, FD is exponentially distributed
with a mean of 50ms, RR = 64Kb/s and RD = 350ms. The
maximum achievable throughput is 186.7 Kb/s. The BDP
is therefore about 9 packets.
Figure 13(a) shows the throughput for a single TCP flow
with the buffer size ranging from 7 to 20. The combination
of variable rate and delay has a large negative impact on the
performance of TCP Reno and it is only able to achieve 70%
to 80% of the bandwidth of Ack Regulated flows when the
buffer size is 6 packets. Even with a buffer size of 18 packets,
the throughput difference is more than 5%. Throughput of
TCP Sack is about 5% to 10% lower than Ack Regulator,
until the buffer size reaches 18 packets (about 2 BDP).
One of the costs of using the Ack Regulator is the increase
in the average round trip time (rtt). The average rtt values for
all 4 types of flows are shown in Figure 13(b) for different buffer
sizes. TCP Reno has the lowest rtt, followed by TCP Sack,
and the rate of rtt increase with buffer size is comparable.
With the Ack Regulator, the rtt increase is comparable to unregulated
flows for buffer sizes less than 9 (1 BDP). For larger
buffer sizes, since the Ack Regulator uses α = 2 times the buffer
size to regulate the acks in conservative mode, rtt increases
faster with buffer size than for regular TCP, where only the
data packet buffer size contributes to rtt. For example, with
a buffer size of 9, Ack Regulated flows have an rtt 15% larger,
and with a buffer size of 18, the rtt is 48% larger, compared
to TCP Sack. This effect can be controlled by varying the
α parameter of the Ack Regulator.
6.4 Simulation with High Data Rate
High Data Rate (HDR) [6] is a Qualcomm proposed CDMA
air interface standard (3G1x-EVDO) for supporting high
speed asymmetrical data services. One of the main ideas
behind HDR is the use of channel-state based scheduling
which transmits packets to the user with the best signal-to-noise
ratio. The actual rate available to the selected user
depends on the current signal-to-noise ratio experienced by
the user. The higher the ratio, the higher the rate available
to the user. In addition, in order to provide some form
of fairness, a Proportional Fair scheduler is used which provides
long-term fairness to flows from different users. We use
Qualcomm's Proportional Fair scheduler in our simulation
with an averaging window of 1000 time slots, where each
slot is 1.67 ms.
Table 4: HDR data rates for a one-user system
Rate (Kb/s)   Prob.
38.4          0.033
76.8          0.015
102.6         0.043
153.6         0.023
204.8         0.060
307.2         0.168
614.4         0.172
921.6         0.145
1228.8        0.260
1843.2        0.042
2457.6        0.011
Figure 14: Throughput and rtt with HDR, single flow. (a) Throughput (Kb/s) vs. buffer size (packets); (b) average RTT (ms) vs. buffer size (packets); BDP = 15. Curves: Reno, Reno w/AR, Sack, Sack w/AR.
While the HDR system results in higher raw
throughput, the rate and delay variation seen is substantial.
In this section, we model a simplified HDR environment
in ns-2, focusing on the layer 3 scheduling and packet fragmentation
. The fading model used for the wireless link is
based on Jakes' Rayleigh fading channel model [25]. This
gives us the instantaneous signal-to-noise ratio. Using Table
2 in [6] which lists the rate achievable for a given signal-to-noise
ratio assuming a frame error rate of less than 1%, the
achievable bandwidth distribution (with one user) for our
simulation is shown in Table 4.
The simulation settings are as follows. FR is a variable
that has the bandwidth distribution of Table 4, due to variations
in the fading conditions of the channel. Based on the
guidelines from [26], FD is modeled as having a uniform distribution
with mean 75ms and variance 30, and RD is modeled
as having a uniform distribution with mean 125ms and
variance 15. These are conservative estimates. We expect
delay variations in actual systems to be higher (for example,
note the ping latencies from our experiment in Section 3).
The uplink in an HDR system is circuit-based, and RR is set
to 64Kb/s.
Figure 14(a) shows how throughput for a single TCP flow
varies with buffer size. Assuming an average bandwidth of
600Kb/s and a link delay of 200ms, BDP is 15 packets.
Again, the performance of TCP Reno flows that are Ack
Regulated is significantly better than plain TCP Reno over
the range of buffer sizes experimented with, with improvements
from 4% to 25%. TCP Sack flows also perform worse than
Ack Regulated flows up to a buffer size of 20. The improvement
of Ack Regulator over TCP Sack ranges from 0.5% to
18%.
As mentioned earlier, one of the costs of using the Ack
Regulator is an increase in the average rtt. The average rtt for all
4 types of flows is shown in Figure 14(b) with the buffer size varying
from 5 to 40. The effect is similar to the rtt variation with
buffer size seen in Section 6.3.
6.5 Multiple TCP Flows
In this simulation, the number of flows (n) sharing the
bottleneck link is increased to 4 and 8. Per-flow buffering is
provided for each TCP flow.
Figure 15: Throughput with HDR, multiple flows. (a) 4 TCP flows: total throughput (Kb/s) vs. per-flow buffer size (packets), BDP = 5; (b) 8 TCP flows: total throughput (Kb/s) vs. per-flow buffer size (packets), BDP = 3.
For 4 flows, using a mean rate of
200Kb/s, 1KB packet and rtt of 0.2s, BDP is 5 packets per
flow. For 8 flows, using mean rate of 120Kb/s, 1KB packet
and rtt of 0.2s, BDP is 3 packets per flow.
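These per-flow BDP values follow from the stated rates and rtt with 1 KB (8 Kb) packets; a quick check of the arithmetic (our own snippet):

    # Per-flow bandwidth-delay product in packets: (rate / packet size) * rtt.
    def bdp_packets(rate_kbps, rtt_s, pkt_kb=8.0):
        return rate_kbps / pkt_kb * rtt_s

    print(bdp_packets(200.0, 0.2))   # 4 flows at 200 Kb/s each -> 5 packets per flow
    print(bdp_packets(120.0, 0.2))   # 8 flows at 120 Kb/s each -> 3 packets per flow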
As the number of TCP flows increases, the expected rate
and delay variation seen by individual flows also increases.
Thus, even though the total throughput of the system increases
with more users due to channel-state based scheduling,
the improvement is reduced by the channel variability.
Figure 15(a) shows the throughput for 4 TCP flows. The
improvement of Ack Regulator over TCP Sack increases
compared to the single TCP case. For example, the gain
is 17% with per-flow buffer size of 5 (BDP). For Reno the
gain is even greater. With per-flow buffer size of 5, the improvement
is 33%. A similar result can also be observed for
the case of 8 TCP flows, as shown in Figure 15(b). For both
TCP Reno and Sack, the gain is about 31% and 29%, respectively,
for a per-flow buffer size of 3. From the figure, it
can be seen that, for TCP Sack and Reno to achieve close to
the maximum throughput without the Ack Regulator, at least three
times the buffer requirement of the Ack Regulator is necessary
(the buffer requirement for acks in the Ack Regulator is negligible
compared to the 1KB packet buffer, since only the
sequence numbers need to be stored for the acks). This not
only increases the cost of the RNC, which needs to support
thousands of active flows; it also has the undesirable side-effect
of large rtts that was noted in Section 3.
With multiple TCP flows, the issue of throughput fairness
naturally arises. One way to quantify how bandwidth is
shared among flows is to use the fairness index described in
[27]. This index is computed as the ratio of the square of the
total throughput to n times the sum of the squares of the individual flow
throughputs. If all flows get the same allocation, then the
fairness index is 1. As the differences in allocation increase,
fairness decreases. A scheme which allocates bandwidth to
only a few selected users has a fairness index near 0.
This index is computed for all of the multiple-flow
simulations, and it is greater than 0.99 in all
cases. This result is expected since, with per-flow buffering
and proportional fair scheduling, the long-term throughput
of many long-lived TCP flows sharing the same link should
be fair.
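For reference, the fairness index of [27] can be computed from the per-flow throughputs as follows (our own snippet; the example values are hypothetical):

    # Jain's fairness index from [27]: (sum of x_i)^2 / (n * sum of x_i^2).
    def fairness_index(throughputs):
        n = len(throughputs)
        total = sum(throughputs)
        return total * total / (n * sum(x * x for x in throughputs))

    # Four flows with nearly equal shares give an index very close to 1.
    print(fairness_index([150.0, 148.0, 152.0, 151.0]))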
6.6 Parameters affecting the performance of Ack Regulator
Due to lack of space, we will only briefly present the results
of varying parameters such as the wired network latency and α.
As the network latency is varied from 20ms to 100ms,
throughput decreases by 1.63% and 2.62% for Reno and
Reno with Ack Regulator flows, respectively. Most of the
decrease can be accounted for by the impact of the increase
in latency on TCP throughput. The result shows that the
AckReleaseCount estimation algorithm is effective and hence
the Ack Regulator is able to reserve the appropriate amount
of buffer for expected packet arrivals even with substantial
wireline delay.
In another experiment, the α parameter in an Ack Regulated
TCP flow is varied from 1 to 4. When α is increased
from 1 to 3, the TCP flow is able to achieve its maximum
throughput at a smaller buffer size. As α increases, the
rtt also increases, and when α is increased to 4, throughput
decreases for larger buffer sizes (> 15). The decrease
in throughput is caused by the accumulation of a sufficiently
large number of duplicate acks that are sent to the TCP
sender. A value of α = 2 appears to be a good choice,
balancing throughput and rtt for reasonable buffer sizes.
6.7 Summary of Results and Discussion
In this section, we first summarize the results from the
simulation experiments and then briefly touch upon other
issues.
We first started with experiments using a wireless link
with variable delay. We showed that Ack Regulator delivers
performance up to 43% better than TCP Reno and 19%
better than TCP Sack when the buffer size was set to one
BDP. We then examined the impact of a wireless link with
variable rate. We saw that when the rate variance increases,
throughput of TCP Reno decreases as expected. Compared
to TCP Reno, Ack Regulator improves the throughput by
up to 15%. However, TCP Sack performs very well and has
almost the same throughput as Ack Regulated flows as long
as the rate variation is not extremely large.
We next considered the impact of a wireless link with
variable delay and variable rate. We found that this combination
had a large negative impact on the performance of
both TCP Reno and Sack (up to 22% and 10% improvement
respectively for Ack Regulated flows). We then considered
a specific wireless link standard called HDR which exhibits
both variable delay and variable rate. The results were as
expected, with Ack Regulator improving TCP Reno performance
by 5% to 33% and TCP Sack by 0.5% to 24%. We
then evaluated the impact of multiple TCP flows sharing
the HDR link.
The gains of Ack Regulator over normal
TCP flows were even greater in this case (with 32% to 36%
improvements) when the buffer size is set to one BDP.
In general, we showed that Ack Regulator delivers the
same high throughput irrespective of whether the TCP flow
is Reno or Sack. We further showed that Ack Regulator delivers
robust throughput performance across different buffer
sizes with the performance improvement of Ack Regulator
increasing as buffer size is reduced.
We only considered TCP flows towards the mobile host
(for downloading-type applications) since links like HDR are
designed for such applications. In the case of TCP flows in
the other direction (from the mobile host), Ack Regulator
can be implemented, if necessary, at the mobile host to optimize
the use of buffer on the wireless interface card.
Finally, Ack Regulator cannot be used if the flow uses
end-to-end IPSEC. This is also true for all performance enhancing
proxies. However, we believe that proxies for performance
improvement are critical in current wireless networks.
In order to allow for these proxies without compromising
security, a split security model can be adopted where the
RNC, under the control of the network provider, becomes a
trusted element. In this model, a VPN approach to security
(say, using IPSEC) is used on the wireline network between
the RNC and the correspondent host and 3G authentication
and link-layer encryption mechanisms are used between the
RNC and mobile host. This allows the RNC to support
proxies such as the Ack Regulator to improve performance
without compromising security.
CONCLUSION
In this paper, we comprehensively evaluated the impact
of variable rate and variable delay on TCP performance.
We first proposed a model to explain and predict TCP's
throughput over a link with variable rate and delay. Our
model was able to accurately (better than 90%) predict
throughput of TCP flows even in the case of large delay
and rate variation. Based on our TCP model, we proposed
a network based solution called Ack Regulator to mitigate
the effect of rate and delay variability. The performance of
Ack Regulator was evaluated extensively using both general
models for rate and delay variability as well as a simplified
model of a 3rd Generation high-speed wireless data air interface.
The Ack Regulator was able to improve the performance
of TCP Reno and TCP Sack by up to 40% without significantly
increasing the round trip time. We also showed
that Ack Regulator delivers the same high throughput irrespective
of whether the TCP source is Reno or Sack. Furthermore
, Ack Regulator also delivered robust throughput
performance across different buffer sizes. Given the difficulties
in knowing in advance the achievable throughput and
delay (and hence the correct BDP value), a scheme, like
Ack Regulator, which works well for both large and small
buffers is essential. In summary, Ack Regulator is an effective
network-based solution that significantly improves TCP
performance over wireless links with variable rate and delay.
Acknowledgements
The authors would like to thank Lijun Qian for providing
the fading code used in the HDR simulation, Clement Lee
and Girish Chandranmenon for providing the 3G1x trace,
and Sandy Thuel for comments on earlier versions of this
paper.
REFERENCES
[1] E. Altman, K. Avrachenkov and C. Barakat, "A
Stochastic Model of TCP/IP with Stationary
Random Loss," in Proceedings of SIGCOMM 2000.
[2] F. Baccelli and D. Hong, "TCP is Max-Plus Linear,"
in Proceedings of SIGCOMM 2000.
[3] A. Bakre and B.R. Badrinath, "Handoff and System
Support for Indirect TCP/IP," in Proceedings of the
Second Usenix Symposium on Mobile and
Location-Independent Computing, Apr 1995.
[4] H. Balakrishnan et al., "Improving TCP/IP
Performance over Wireless Networks," in proceedings
of ACM Mobicom, Nov 1995.
[5] H. Balakrishnan, V.N. Padmanabhan, R.H. Katz,
"The Effects of Asymmetry on TCP Performance,"
Proc. ACM/IEEE Mobicom, Sep. 1997.
[6] P. Bender et al., "A Bandwidth Efficient High Speed
Wireless Data Service for Nomadic Users," in IEEE
Communications Magazine, Jul 2000.
[7] P. Bhagwat et al., "Enhancing Throughput over
Wireless LANs Using Channel State Dependent
Packet Scheduling," in Proc. IEEE INFOCOM'96.
[8] K. Brown and S.Singh, "M-TCP: TCP for Mobile
Cellular Networks," ACM Computer
Communications Review Vol. 27(5), 1997.
[9] A. Canton and T. Chahed, "End-to-end reliability in
UMTS: TCP over ARQ," in proceedings of
Globecomm 2001.
[10] TIA/EIA/cdma2000, "Mobile Station - Base Station
Compatibility Standard for Dual-Mode Wideband
Spread Spectrum Cellular Systems", Washington:
Telecommunication Industry Association, 1999.
[11] G. Holland and N. H. Vaidya, "Analysis of TCP
Performance over Mobile Ad Hoc Networks," in
Proceedings of ACM Mobicom'99.
[12] H. Inamura et al., "TCP over 2.5G and 3G Wireless
Networks," draft-ietf-pilc-2.5g3g-07, Aug. 2002.
[13] F. Khafizov and M. Yavuz, "TCP over CDMA2000
networks," Internet Draft,
draft-khafizov-pilc-cdma2000-00.txt.
[14] T. V. Lakshman and U. Madhow, "The Performance
of Networks with High Bandwidth-delay Products
and Random Loss," in IEEE/ACM Transactions on
Networking, Jun. 1997.
[15] R. Ludwig et al., "Multi-layer Tracing of TCP over a
Reliable Wireless Link," in Proceedings of ACM
SIGMETRICS 1999.
[16] Reiner Ludwig and Randy H. Katz "The Eifel
Algorithm: Making TCP Robust Against Spurious
Retransmissions," in ACM Computer
Communications Review, Vol. 30, No. 1, 2000.
[17] V. Misra, W. Gong and D. Towsley, "Stochastic
Differential Equation Modeling and Analysis of TCP
Windowsize Behavior," in Proceedings of
Performance'99.
[18] P. Narvaez and K.-Y. Siu, "New Techniques for
Regulating TCP Flow over Heterogeneous
Networks," in LCN'98.
[19] J. Padhye, V. Firoiu, D. Towsley and J. Kurose,
"Modeling TCP Throughput: a Simple Model and its
Empirical Validation," in Proceedings of SIGCOMM
1998.
[20] S. Paul et al., "An Asymmetric Link-Layer Protocol
for Digital Cellular Communications," in proceedings
of INFOCOM 1995.
[21] Third Generation Partnership Project, "RLC
Protocol Specification (3G TS 25.322:)", 1999.
[22] TIA/EIA/IS-707-A-2.10, "Data Service Options for
Spread Spectrum Systems: Radio Link Protocol
Type 3", January 2000.
[23] S. Karandikar et al., "TCP rate control," in ACM
Computer Communication Review, Jan 2000.
[24] 3G Partnership Project, Release 99.
[25] "Microwave mobile communications," edited by W.
C. Jakes, Wiley, 1974.
[26] "Delays in the HDR System," QUALCOMM, Jun.
2000.
[27] R. Jain, "The Art of Computer Systems Performance
Analysis," Wiley, 1991.
[28] J. Padhye and S. Floyd, "On Inferring TCP
Behavior," in Proceedings of SIGCOMM'2001.
| algorithm;architecture;TCP;wireless communication;performance evaluation;3G wireless links;prediction model;design;Link and Rate Variation;3G Wireless;simulation result;congestion solution;Network |
188 | The Forest and the Trees: Using Oracle and SQL Server Together to Teach ANSI-Standard SQL | Students in a sophomore-level database fundamentals course were taught SQL and database concepts using both Oracle and SQL Server. Previous offerings of the class had used one or the other database. Classroom experiences suggest that students were able to handle learning SQL in the dual environment, and, in fact, benefited from this approach by better understanding ANSI-standard versus database-specific SQL and implementation differences in the two database systems. | INTRODUCTION
A problem arises in many technology classes. The instructor
wishes to teach principles and concepts of a technology. To give
students hands-on experience putting those theories to work, a
specific product that implements that technology is selected for a
lab component. Suddenly the students are learning more about the
specific product than they are about the technology concepts.
They may or may not realize what is specific to that product and
what is general to the technology. Students may even start
referring to the course as a VB course, a PHP course, or an Oracle
course when what you wanted to teach was programming, web
scripting, or database principles.
This paper presents the experiences from a database fundamentals
course that used both Oracle and SQL Server so that students
would better understand ANSI-standard SQL. Though each
database is ANSI SQL compliant, there are definite differences in
implementation (Gorman, 2001; Gulutzan, 2002). By learning
each implementation and how each departs from ANSI-standard
SQL, students can be better prepared to work with any database
and better understand general concepts of databases and SQL. The
paper discusses the observed results from this approach and how
well the approach met learning objectives.
COURSE CONTEXT AND LEARNING OBJECTIVES
CPT 272, Database Fundamentals, is a sophomore-level database
programming and design class taught primarily to computer
technology majors in their fourth semester. Students will have
previously taken a freshman-level course that introduces them to
databases as a tool for learning general information system
development terms and concepts. The freshman-level course uses
Microsoft Access because it is easy to use for quickly developing
a small personal information system. That course also introduces
both SQL and Query By Example methods for querying a
database, as well as basic database design concepts, which are
applied for simple data models.
Students then move into two programming courses, the second of
which uses single-table SQL statements for providing data to
(formerly) Visual Basic or (currently) web-programming
applications. So by the time the students take the Database
Fundamentals course they have a concept of what a database is
and how it is used as the back-end for programming. The
Database Fundamentals course is the course where students learn
SQL in depth, database concepts, and basic database design. It
does not teach stored procedure programming, triggers, or
enterprise or distributed database design, which are covered in
more advanced courses.
The learning objectives for the Database Fundamentals course
are:
To understand the fundamentals of a relational database.
To understand the fundamentals of client-server and multi-tiered
applications.
To understand the principles and characteristics of good
relational database design.
To design entity relationship models for a business problem
domain verified by the rules of normalization (through third
normalized form).
To build simple to moderately complex data models.
To write simple to moderately complex SQL to query a
multiple-table database.
To write data manipulation language (DML) SQL to insert
rows, delete rows, and update rows.
To understand the concept of database transactions and
demonstrate the proper use of commits and rollbacks.
To write data definition language (DDL) SQL to create and
drop tables, indexes, and constraints.
To understand and be able to implement the fundamentals of
security and permissions in a database.
To explain the benefits of using views and write SQL
statements to create views.
To create and use SQL scripts and use SQL to build scripts.
To gain a working knowledge of query optimization,
performance tuning, and database administration.
To apply team skills to build a client-server database
application.
CONSIDERATIONS FOR CROSS-ENGINE SQL EDUCATION
CPT 272, Database Fundamentals, is taught in a multi-campus
university. It was initially taught on the main campus using
Oracle. When the course was rolled out to the regional campuses,
SQL Server was first used because of administration
considerations involved with Oracle. Now WAN connections
have been established that allow the use of either database engine
or both.
During the spring 2003 semester one regional campus
experimented with the use of both databases. The reasons for
doing this were:
To accomplish the course learning objectives in SQL
necessitates going beyond ANSI-standard SQL into
database-specific functions, sub-queries, and other aspects of
SQL that are implemented differently in different databases.
If the students learn only Oracle or SQL Server (or any other
database) they are likely to confuse ANSI-standard SQL with
the database-specific implementation, which can hinder them
when they enter the job market. By using both databases, it
was hoped that students would learn and understand the
differences among ANSI-standard SQL, Oracle SQL, and SQL
Server T-SQL.
Some design considerations, such as Identities/Sequences
and datatypes, are implemented differently in different
databases. Again, students will enter the job market with a
stronger understanding if they understand the difference
between the concept and how it is implemented.
Neither Oracle nor SQL Server commands a majority of
market share. However, the two together make up about fifty
percent of the current market share, positioning students well
for the job market (Wong, 2002).
Studying two databases together opens the door for
discussing the pros and cons of these and other databases,
including DB2, MySQL, and Sybase.
Finally, students often want to install a database engine on
their personal computer and work on lab assignments at
home. Both Oracle and SQL Server have licensing and
hardware requirement issues that on any given computer may
preclude one or the other. Using both allowed most students
to do at least some of their work at home.
To implement a cross-engine approach, an SQL text would be
needed that taught both databases. A special textbook was created
by two of the instructors, a draft of which was used in CD format.
In addition to covering SQL essentials in Oracle and SQL Server,
it also covered Microsoft Access and MySQL in hopes that it
might also be used in the freshman course and be a good reference
for real world web programming. The text has since been picked
up by a publisher.
COURSE DESIGN AND ASSIGNMENTS
CPT 272, Database Fundamentals, consists of a both a lecture and
a lab component. The lectures cover fundamental database
concepts, including SQL concepts, query optimization, and
database design and normalization. The lab component focuses on
mastery of SQL. Table 1 shows the labs that were assigned and
the database used for each.
Table 1. Course Lab Schedule
Lab                         Lab DBMS
Single Table Select         Oracle
Aggregates & Sub Queries    SQL Server
Joining Tables              Both Oracle & SQL Server
DBMS Specific Functions     Both Oracle & SQL Server
Advanced Queries            Oracle
Data Manipulation           Student Choice
Database Definition         Student Choice
Privileges                  Student Choice
The first three labs concentrated as much as possible on ANSI-standard
SQL. Of course, implementation of sub queries and joins
is in some cases different between Oracle and SQL Server. These
differences were taught and discussed. However, those labs
avoided all DBMS specific functionality, such as concatenation,
date manipulation, and datatype conversion. These things were
covered in the DBMS Specific Functions lab. This approach
limited some of what could be done in the first labs but provided a
solid distinction between what was ANSI-standard SQL and what
was database-specific. Later labs used these database-specific
functions in various lab exercises so that these functions, which
are crucial in the real world, were mastered.
In addition to the use of both databases in labs, lecture material
constantly referred to how various design, administration, and
optimization concepts would be applied in both databases and in
other databases, such as MySQL. In addition, exam questions
asked students to compare the capabilities of each database. Other
assignments led students into an exploration of the pros and cons
of various database engines. Two of these were for students to
write short papers on the following:
1. Research one of the following databases: DB2, Sybase,
Informix, MySQL, PostgreSQL, or SQLWindows Solo. Write
a 2-3 page paper comparing it to Oracle and SQL Server.
Include your recommendations regarding the circumstances in
which this database should be used.
2. Research what people on the Internet say comparing SQL
Server and Oracle. Based on their perspectives and your own
experience with these two databases, write 1-2 pages
comparing them.
DISCUSSION
With all the course objectives listed above, the Database
Fundamentals course makes for a full semester. Using two
databases instead of one definitely adds to the challenge of fitting
in all the material. This first attempt led to the realization of
several "kinks" that would need to be worked out before it could
be attempted again. These are listed below.
In most cases any given SQL lecture had barely enough time
for covering the SQL concepts and implementation in both
databases, forcing the instructor to scrimp on in-class
examples. This meant that students had a more shaky
foundation going into the lab assignment. However, it should
be noted that students did about as well on lab assignments
as prior classes. One possible solution would be to focus
each SQL lecture on only one database, but to revisit that
concept in the following lecture when the other database is
used.
In past semesters using only one database, it was possible to
include a discussion of datatypes along with the DDL
lecture. Using two databases the DDL lecture had to be
expanded, which did not leave enough time for thorough
coverage of datatypes in both databases. A solution would be
to move the datatype material to one of the design lectures,
emphasizing datatyping as a design step.
By switching between databases, database-specific syntax
never clicked in students' minds. They seemed less able than
prior classes to apply concatenation and datatype conversions
without looking up the syntax in reference material. While
this is regrettable, it may be a worthwhile trade-off for
gaining an understanding of what is ANSI-standard vs.
database-specific SQL. It is likely that when students move
to the real world and settle in with one database, they will
quickly be able to internalize the syntax.
Some students expressed that they would have preferred
going through all labs with one database and then looking at
differences with a second database. However, other students
liked working with both databases side by side. This suggests
that the alternating structure may need to be tweaked. But
whatever one does, it will probably not work with every
learning style.
After using both databases, almost all students, when given a
choice, used SQL Server. This was solely because of a
perceived superiority in the user interface of Query Analyzer
versus SQL Plus. In itself this is not a bad thing because one
of the goals was to understand the differences in the two
databases. But in future semesters the instructor may want to
force a choice more often to ensure that students will be
exposed more equally to both.
These "kinks" aside, both instructor and students considered the
experiment a success. Their papers indicated a mature
appreciation of the differences between Oracle and SQL Server
and how they compared to other databases. Their comments in
class indicated that they understood what was ANSI-standard
SQL and what was not. In the SQL lab exam these students
performed as well as previous classes of students, indicating that
the cross-engine approach did not hinder their learning. Students
also indicated enthusiasm for being able to list experience in both
databases on their resume. Compared to prior semesters, students
left the course more comfortable and able to use either of these
major database engines to accomplish the goals of a given
information system.
| SQL;SQL Server;training vs. education;database systems;Oracle;database;educational fundamentals;student feedbacks;ANSI-Standard SQL;teaching in IT;dual environment;SQL Language;course design;practical results |
189 | The Maximum Entropy Method for Analyzing Retrieval Measures | We present a model, based on the maximum entropy method, for analyzing various measures of retrieval performance such as average precision, R-precision, and precision-at-cutoffs. Our methodology treats the value of such a measure as a constraint on the distribution of relevant documents in an unknown list, and the maximum entropy distribution can be determined subject to these constraints. For good measures of overall performance (such as average precision), the resulting maximum entropy distributions are highly correlated with actual distributions of relevant documents in lists as demonstrated through TREC data; for poor measures of overall performance, the correlation is weaker. As such, the maximum entropy method can be used to quantify the overall quality of a retrieval measure. Furthermore, for good measures of overall performance (such as average precision), we show that the corresponding maximum entropy distributions can be used to accurately infer precision-recall curves and the values of other measures of performance, and we demonstrate that the quality of these inferences far exceeds that predicted by simple retrieval measure correlation, as demonstrated through TREC data. | INTRODUCTION
The efficacy of retrieval systems is evaluated by a number
of performance measures such as average precision, R-precision
, and precisions at standard cutoffs. Broadly speaking
, these measures can be classified as either system-oriented
measures of overall performance (e.g., average precision and
R-precision) or user-oriented measures of specific performance
(e.g., precision-at-cutoff 10) [3, 12, 5]. Different measures
evaluate different aspects of retrieval performance, and
much thought and analysis has been devoted to analyzing
the quality of various different performance measures [10, 2,
17].
We consider the problem of analyzing the quality of various
measures of retrieval performance and propose a model
based on the maximum entropy method for evaluating the
quality of a performance measure. While measures such as
average precision at relevant documents, R-precision, and
11pt average precision are known to be good measures of
overall performance, other measures such as precisions at
specific cutoffs are not. Our goal in this work is to develop
a model within which one can numerically assess the overall
quality of a given measure based on the reduction in uncertainty
of a system's performance one gains by learning
the value of the measure. As such, our evaluation model
is primarily concerned with assessing the relative merits of
system-oriented measures, but it can be applied to other
classes of measures as well.
We begin with the premise that the quality of a list of
documents retrieved in response to a given query is strictly
a function of the sequence of relevant and non-relevant documents
retrieved within that list (as well as R, the total number
of relevant documents for the given query). Most standard
measures of retrieval performance satisfy this premise.
Our thesis is then that given the assessed value of a "good"
overall measure of performance, one's uncertainty about the
sequence of relevant and non-relevant documents in an unknown
list should be greatly reduced. Suppose, for example
, one were told that a list of 1,000 documents retrieved in
response to a query with 200 total relevant documents contained
100 relevant documents. What could one reasonably
infer about the sequence of relevant and non-relevant documents
in the unknown list? From this information alone,
one could only reasonably conclude that the likelihood of
seeing a relevant document at any rank level is uniformly
1/10. Now suppose that one were additionally told that the
average precision of the list was 0.4 (the maximum possible
in this circumstance is 0.5). Now one could reasonably
conclude that the likelihood of seeing relevant documents at
low numerical ranks is much greater than the likelihood of
seeing relevant documents at high numerical ranks. One's
uncertainty about the sequence of relevant and non-relevant
documents in the unknown list is greatly reduced as a consequence
of the strong constraint that such an average precision
places on lists in this situation. Thus, average precision
is highly informative. On the other hand, suppose that one
were instead told that the precision of the documents in
the rank range [100, 110] was 0.4. One's uncertainty about
the sequence of relevant and non-relevant documents in the
unknown list is not appreciably reduced as a consequence
of the relatively weak constraint that such a measurement
places on lists. Thus, precision in the range [100, 110] is not
a highly informative measure. In what follows, we develop
a model within which one can quantify how informative a
measure is.
We consider two questions: (1) What can reasonably be
inferred about an unknown list given the value of a measurement
taken over this list? (2) How accurately do these
inferences reflect reality? We argue that the former question
is properly answered by considering the maximum entropy
distributions subject to the measured value as a constraint,
and we demonstrate that such maximum entropy models
corresponding to good overall measures of performance such
as average precision yield accurate inferences about underlying
lists seen in practice (as demonstrated through TREC
data).
More specifically, we develop a framework based on the
maximum entropy method which allows one to infer the
most "reasonable" model for the sequence of relevant and
non-relevant documents in a list given a measured constraint.
From this model, we show how one can infer the most "reasonable"
model for the unknown list's entire precision-recall
curve. We demonstrate through the use of TREC data that
for "good" overall measures of performance (such as average
precision), these inferred precision-recall curves are accurate
approximations of actual precision-recall curves; however,
for "poor" overall measures of performance, these inferred
precision-recall curves do not accurately approximate actual
precision-recall curves. Thus, maximum entropy modeling
can be used to quantify the quality of a measure of overall
performance.
We further demonstrate through the use of TREC data
that the maximum entropy models corresponding to "good"
measures of overall performance can be used to make accurate
predictions of other measurements. While it is well
known that "good" overall measures such as average precision
are well correlated with other measures of performance,
and thus average precision could be used to reasonably predict
other measures of performance, we demonstrate that
the maximum entropy models corresponding to average precision
yield inferences of other measures even more highly
correlated with their actual values, thus validating both average
precision and maximum entropy modeling.
In the sections that follow, we first describe the maximum
entropy method and discuss how maximum entropy
modeling can be used to analyze measures of retrieval performance
.
We then describe the results of applying our
methodology using TREC data, and we conclude with a
summary and future work.
THE MAXIMUM ENTROPY METHOD
The concept of entropy as a measure of information was
first introduced by Shannon [20], and the Principle of Maximum
Entropy was introduced by Jaynes [7, 8, 9]. Since its
introduction, the Maximum Entropy Method has been applied
in many areas of science and technology [21] including
natural language processing [1], ambiguity resolution [18],
text classification [14], machine learning [15, 16], and information
retrieval [6, 11], to name but a few examples. In
what follows, we introduce the maximum entropy method
through a classic example, and we then describe how the
maximum entropy method can be used to evaluate measures
of retrieval performance.
Suppose you are given an unknown and possibly biased
six-sided die and were asked the probability of obtaining any
particular die face in a given roll. What would your answer
be? This problem is under-constrained and the most seemingly
"reasonable" answer is a uniform distribution over all
faces. Suppose now you are also given the information that
the average die roll is 3.5. The most seemingly "reasonable"
answer is still a uniform distribution. What if you are told
that the average die roll is 4.5? There are many distributions
over the faces such that the average die roll is 4.5; how
can you find the most seemingly "reasonable" distribution?
Finally, what would your answer be if you were told that
the average die roll is 5.5? Clearly, the belief in getting a
6 increases as the expected value of the die rolls increases.
But there are many distributions satisfying this constraint;
which distribution would you choose?
The "Maximum Entropy Method" (MEM) dictates the
most "reasonable" distribution satisfying the given constraints.
The "Principle of Maximal Ignorance" forms the intuition
behind the MEM; it states that one should choose the distribution
which is least predictable (most random) subject
to the given constraints. Jaynes and others have derived numerous
entropy concentration theorems which show that the
vast majority of all empirical frequency distributions (e.g.,
those corresponding to sequences of die rolls) satisfying the
given constraints have associated empirical probabilities and
entropies very close to those probabilities satisfying the constraints
whose associated entropy is maximal [7].
Thus, the MEM dictates the most random distribution
satisfying the given constraints, using the entropy of the
probability distribution as a measure of randomness. The
entropy of a probability distribution $p = \{p_1, p_2, \ldots, p_n\}$ is
a measure of the uncertainty (randomness) inherent in the
distribution and is defined as follows:
$$H(p) = -\sum_{i=1}^{n} p_i \lg p_i.$$
Thus, maximum entropy distributions are probability distributions
making no additional assumptions apart from the
given constraints.
In addition to its mathematical justification, the MEM
tends to produce solutions one often sees in nature. For
example, it is known that given the temperature of a gas, the
actual distribution of velocities in the gas is the maximum
entropy distribution under the temperature constraint.
We can apply the MEM to our die problem as follows.
Let the probability distribution over the die faces be $p = \{p_1, \ldots, p_6\}$.
Mathematically, finding the maximum entropy distribution over die faces
such that the expected die roll is $d$ corresponds to the following
optimization problem:

Maximize: $H(p)$
Subject to:
1. $\sum_{i=1}^{6} p_i = 1$
2. $\sum_{i=1}^{6} i \, p_i = d$

Figure 1: Maximum entropy die distributions with mean die rolls of 3.5, 4.5, and 5.5, respectively.
The first constraint ensures that the solution forms a distribution
over the die faces, and the second constraint ensures
that this distribution has the appropriate expectation. This
is a constrained optimization problem which can be solved
using the method of Lagrange multipliers. Figure 1 shows
three different maximum entropy distributions over the die
faces such that the expected die roll is 3.5, 4.5, and 5.5,
respectively.
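To make the die example concrete, the constrained optimization can be solved numerically. The following Python sketch is an illustration added here, not taken from the paper; it assumes scipy is available and maximizes the entropy subject to the two constraints above.

    import numpy as np
    from scipy.optimize import minimize

    def maxent_die(d, n_faces=6):
        faces = np.arange(1, n_faces + 1)

        def neg_entropy(p):
            p = np.clip(p, 1e-12, 1.0)       # avoid log(0)
            return np.sum(p * np.log2(p))    # negative of H(p)

        constraints = (
            {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},       # p is a distribution
            {"type": "eq", "fun": lambda p: np.dot(faces, p) - d},  # expected roll is d
        )
        p0 = np.full(n_faces, 1.0 / n_faces)  # start from the uniform distribution
        res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * n_faces,
                       constraints=constraints, method="SLSQP")
        return res.x

    print(maxent_die(4.5))

Running the sketch with d = 4.5 or d = 5.5 reproduces the qualitative behavior shown in Figure 1: probability mass shifts toward the larger faces as the required mean grows, while d = 3.5 returns the uniform distribution.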
2.1 Application of the Maximum Entropy Method to Analyzing Retrieval Measures
Suppose that you were given a list of length N corresponding
to the output of a retrieval system for a given query,
and suppose that you were asked to predict the probability
of seeing any one of the $2^N$ possible patterns of relevant
documents in that list. In the absence of any information
about the query, any performance information for the system,
or any a priori modeling of the behavior of retrieval
systems, the most "reasonable" answer you could give would
be that all lists of length N are equally likely. Suppose now
that you are also given the information that the expected
number of relevant documents over all lists of length N is
$R_{ret}$. Your "reasonable" answer might then be a uniform
distribution over all $\binom{N}{R_{ret}}$ different possible lists with $R_{ret}$
relevant documents. But what if, apart from the constraint
on the number of relevant documents retrieved, you were
also given the constraint that the expected value of average
precision is ap? If the average precision value is high,
then of all the $\binom{N}{R_{ret}}$ lists with $R_{ret}$ relevant documents,
the lists in which the relevant documents are retrieved at
low numerical ranks should have higher probabilities. But
how can you determine the most "reasonable" such distribution?
The maximum entropy method essentially dictates the
most reasonable distribution as a solution to the following
constrained optimization problem.
Let $p(r_1, \ldots, r_N)$ be a probability distribution over the
relevances associated with document lists of length N, let
$rel(r_1, \ldots, r_N)$ be the number of relevant documents in a list,
and let $ap(r_1, \ldots, r_N)$ be the average precision of a list. Then
the maximum entropy method can be mathematically formulated
as follows:

Maximize: $H(p)$
Subject to:
1. $\sum_{r_1, \ldots, r_N} p(r_1, \ldots, r_N) = 1$
2. $\sum_{r_1, \ldots, r_N} ap(r_1, \ldots, r_N) \, p(r_1, \ldots, r_N) = ap$
3. $\sum_{r_1, \ldots, r_N} rel(r_1, \ldots, r_N) \, p(r_1, \ldots, r_N) = R_{ret}$
Note that the solution to this optimization problem is a
distribution over possible lists, where this distribution effectively
gives one's a posteriori belief in any list given the
measured constraint.
The previous problem can be formulated in a slightly different
manner yielding another interpretation of the problem
and a mathematical solution. Suppose that you were given
a list of length N corresponding to output of a retrieval system
for a given a query, and suppose that you were asked
to predict the probability of seeing a relevant document at
some rank. Since there are no constraints, all possible lists
of length N are equally likely, and hence the probability of
seeing a relevant document at any rank is 1/2. Suppose now
that you are also given the information that the expected
number of relevant documents over all lists of length N is
$R_{ret}$. The most natural answer would be a uniform
probability of $R_{ret}/N$ for each rank.
Finally, suppose that you are
given the additional constraint that the expected average
precision is ap. Under the assumption that our distribution
over lists is a product distribution (this is effectively
a fairly standard independence assumption), we may solve
this problem as follows. Let
$$p(r_1, \ldots, r_N) = p(r_1)\, p(r_2) \cdots p(r_N)$$
where $p(r_i)$ is the probability that the document at rank i is
relevant. We can then solve the problem of calculating the
probability of seeing a relevant document at any rank using
the MEM. For notational convenience, we will refer to this
product distribution as the probability-at-rank distribution
and the probability of seeing a relevant document at rank i,
$p(r_i)$, as $p_i$.
Standard results from information theory [4] dictate that
if $p(r_1, \ldots, r_N)$ is a product distribution, then
$$H(p(r_1, \ldots, r_N)) = \sum_{i=1}^{N} H(p_i)$$
where $H(p_i)$ is the binary entropy
$$H(p_i) = -p_i \lg p_i - (1 - p_i) \lg(1 - p_i).$$
Furthermore, it can be shown that given a product distribution
$p(r_1, \ldots, r_N)$ over the relevances associated with document
lists of length N, the expected value of average precision is
$$\frac{1}{R} \sum_{i=1}^{N} \frac{p_i}{i} \left(1 + \sum_{j=1}^{i-1} p_j\right). \qquad (1)$$
(The derivation of this formula is omitted due to space constraints.)
Furthermore, since $p_i$ is the probability of seeing
a relevant document at rank i, the expected number of relevant
documents retrieved until rank N is $\sum_{i=1}^{N} p_i$.

Figure 2: Maximum entropy setup for average precision.
Maximize: $\sum_{i=1}^{N} H(p_i)$
Subject to:
1. $\frac{1}{R} \sum_{i=1}^{N} \frac{p_i}{i} \left(1 + \sum_{j=1}^{i-1} p_j\right) = ap$
2. $\sum_{i=1}^{N} p_i = R_{ret}$

Figure 3: Maximum entropy setup for R-precision.
Maximize: $\sum_{i=1}^{N} H(p_i)$
Subject to:
1. $\frac{1}{R} \sum_{i=1}^{R} p_i = rp$
2. $\sum_{i=1}^{N} p_i = R_{ret}$

Figure 4: Maximum entropy setup for precision-at-cutoff.
Maximize: $\sum_{i=1}^{N} H(p_i)$
Subject to:
1. $\frac{1}{k} \sum_{i=1}^{k} p_i = PC(k)$
2. $\sum_{i=1}^{N} p_i = R_{ret}$
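As a small illustration (added here, not from the paper; the function names are placeholders), Equation (1) and the expected number of relevant documents retrieved can be evaluated directly from a probability-at-rank vector:

    import numpy as np

    def expected_average_precision(p, R):
        # Equation (1): (1/R) * sum_i (p_i / i) * (1 + sum_{j<i} p_j)
        p = np.asarray(p, dtype=float)
        prefix = np.concatenate(([0.0], np.cumsum(p)[:-1]))  # sum_{j=1}^{i-1} p_j
        ranks = np.arange(1, len(p) + 1)
        return np.sum((p / ranks) * (1.0 + prefix)) / R

    def expected_relevant_retrieved(p):
        # Expected number of relevant documents retrieved up to rank N.
        return float(np.sum(p))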
Now, if one were given some list of length N , one were told
that the expected number of relevant documents is $R_{ret}$, one
were further informed that the expected average precision is
ap, and one were asked the probability of seeing a relevant
document at any rank under the independence assumption
stated, one could apply the MEM as shown in Figure 2.
Note that one now solves for the maximum entropy product
distribution over lists, which is equivalent to a maximum entropy
probability-at-rank distribution. Applying the same
ideas to R-precision and precision-at-cutoff k, one obtains
analogous formulations as shown in Figures 3 and 4, respectively.
All of these formulations are constrained optimization problems
, and the method of Lagrange multipliers can be used
to find an analytical solution, in principle. When analytical
solutions cannot be determined, numerical optimization
methods can be employed. The maximum entropy distributions
for R-precision and precision-at-cutoff k can be obtained
analytically using the method of Lagrange multipliers
. However, numerical optimization methods are required
to determine the maximum entropy distribution for average
precision. In Figure 5, examples of maximum entropy
probability-at-rank curves corresponding to the measures
average precision, R-precision, and precision-at-cutoff 10 for
a run in TREC8 can be seen. Note that the probability-at-rank
curves are step functions for the precision-at-cutoff
and R-precision constraints; this is as expected since, for
example, given a precision-at-cutoff 10 of 0.3, one can only
reasonably conclude a uniform probability of 0.3 for seeing
a relevant document at any of the first 10 ranks.
Note,
however, that the probability-at-rank curve corresponding
to average precision is smooth and strictly decreasing.
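The authors solved the average precision program with a numerical optimizer (TOMLAB under Matlab); as a rough open-source sketch of the same idea, under only the assumptions already stated (product distribution and the two constraints of Figure 2), one could write:

    import numpy as np
    from scipy.optimize import minimize

    def maxent_prob_at_rank_for_ap(N, R, R_ret, ap_value):
        eps = 1e-9

        def neg_total_binary_entropy(p):
            p = np.clip(p, eps, 1 - eps)
            return np.sum(p * np.log2(p) + (1 - p) * np.log2(1 - p))

        def expected_ap(p):
            prefix = np.concatenate(([0.0], np.cumsum(p)[:-1]))
            ranks = np.arange(1, N + 1)
            return np.sum((p / ranks) * (1.0 + prefix)) / R

        constraints = (
            {"type": "eq", "fun": lambda p: expected_ap(p) - ap_value},  # Figure 2, constraint 1
            {"type": "eq", "fun": lambda p: np.sum(p) - R_ret},          # Figure 2, constraint 2
        )
        p0 = np.full(N, R_ret / N)  # uniform start that already satisfies constraint 2
        res = minimize(neg_total_binary_entropy, p0, bounds=[(0.0, 1.0)] * N,
                       constraints=constraints, method="SLSQP")
        return res.x

For long lists a generic SLSQP solve like this can be slow or fragile, so it should be read as an illustration of the formulation rather than as the solver used for the reported experiments.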
Using the maximum entropy probability-at-rank distribution
of a list, we can infer the maximum entropy precision-recall
curve for the list. Given a probability-at-rank distribution
p, the number of relevant documents retrieved until
rank i is $REL(i) = \sum_{j=1}^{i} p_j$. Therefore, the precision
and recall at rank i are $PC(i) = REL(i)/i$ and $REC(i) = REL(i)/R$.

Figure 5: Maximum entropy probability-at-rank distributions for average precision, R-precision, and precision-at-cutoff 10 (TREC8 system fub99a, query 435, AP = 0.1433).

Hence, using the maximum entropy probability-
at-rank distribution for each measure, we can generate the
maximum entropy precision-recall curve of the list. If a measure
provides a great deal of information about the underlying
list, then the maximum entropy precision-recall curve
should approximate the precision-recall curve of the actual
list.
However, if a measure is not particularly informative
, then the maximum entropy precision-recall curve need
not approximate the actual precision-recall curve. Therefore
, noting how closely the maximum entropy precision-recall
curve corresponding to a measure approximates the
precision-recall curve of the actual list, we can calculate how
much information a measure contains about the actual list,
and hence how "informative" a measure is. Thus, we have a
methodology for evaluating the evaluation measures themselves.
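A minimal sketch (illustrative only, not the authors' code) of turning a maximum entropy probability-at-rank vector into the inferred precision-recall points used in this comparison:

    import numpy as np

    def inferred_precision_recall(p, R):
        # REL(i) = sum_{j<=i} p_j, PC(i) = REL(i)/i, REC(i) = REL(i)/R
        p = np.asarray(p, dtype=float)
        rel = np.cumsum(p)
        ranks = np.arange(1, len(p) + 1)
        precision = rel / ranks
        recall = rel / R
        return recall, precision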
Using the maximum entropy precision-recall curve of a
measure, we can also predict the values of other measures.
For example, using the maximum entropy precision-recall
curve corresponding to average precision, we can predict
the precision-at-cutoff 10. For highly informative measures,
these predictions should be very close to reality. Hence, we
have a second way of evaluating evaluation measures.
EXPERIMENTAL RESULTS
We tested the performance of the evaluation measures average
precision, R-precision, and precision-at-cutoffs 5, 10,
15, 20, 30, 100, 200, 500 and 1000 using data from TRECs
3, 5, 6, 7, 8 and 9. For any TREC and any query, we chose
those systems whose number of relevant documents retrieved
was at least 10 in order to have a sufficient number of points
on the precision-recall curve. We then calculated the maximum
entropy precision-recall curve subject to the given measured
constraint, as described above. The maximum entropy
precision-recall curve corresponding to an average precision
constraint cannot be determined analytically; therefore, we
used numerical optimization (the TOMLAB Optimization Environment for Matlab)
to find the maximum entropy distribution corresponding to average precision.
We shall refer to the execution of a retrieval system on
a particular query as a run. Figure 6 shows examples of
maximum entropy precision-recall curves corresponding to
average precision, R-precision, and precision-at-cutoff 10 for
three different runs, together with the actual precision-recall
curves. We focused on these three measures since they are
perhaps the most commonly cited measures in IR. We also
provide results for precision-at-cutoff 100 in later plots and
detailed results for all measures in a later table. As can be
seen in Figure 6, using average precision as a constraint, one
can generate the actual precision-recall curve of a run with
relatively high accuracy.
In order to quantify how good an evaluation measure is
in generating the precision-recall curve of an actual list,
we consider two different error measures: the root mean
squared error (RMS) and the mean absolute error (MAE).
Let $\{\pi_1, \pi_2, \ldots, \pi_{R_{ret}}\}$ be the precisions at the recall levels
$\{1/R, 2/R, \ldots, R_{ret}/R\}$, where $R_{ret}$ is the number of relevant
documents retrieved by a system and R is the number of
documents relevant to the query, and let $\{m_1, m_2, \ldots, m_{R_{ret}}\}$
be the estimated precisions at the corresponding recall levels
for a maximum entropy distribution corresponding to a
measure. Then the MAE and RMS errors are calculated as
follows:
$$RMS = \sqrt{\frac{1}{R_{ret}} \sum_{i=1}^{R_{ret}} (\pi_i - m_i)^2}$$
$$MAE = \frac{1}{R_{ret}} \sum_{i=1}^{R_{ret}} |\pi_i - m_i|$$
The points after recall $R_{ret}/R$ on the precision-recall curve
are not considered in the evaluation of the MAE and RMS
errors since, by TREC convention, the precisions at these
recall levels are assumed to be 0.
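For completeness, a short sketch (added here; the array names are assumptions) of the two error measures over the first R_ret recall levels:

    import numpy as np

    def mae_rms(actual, estimated):
        # actual, estimated: precisions at recall levels 1/R, ..., R_ret/R
        a = np.asarray(actual, dtype=float)
        m = np.asarray(estimated, dtype=float)
        mae = np.mean(np.abs(a - m))
        rms = np.sqrt(np.mean((a - m) ** 2))
        return mae, rms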
In order to evaluate how good a measure is at inferring
actual precision-recall curves, we calculated the MAE and
RMS errors of the maximum entropy precision-recall curves
corresponding to the measures in question, averaged over all
runs for each TREC. Figure 7 shows how the MAE and RMS
errors for average precision, R-precision, precision-at-cutoff
10, and precision-at-cutoff 100 compare with each other for
each TREC. The MAE and RMS errors follow the same
pattern over all TRECs. Both errors are consistently and
significantly lower for average precision than for the other
measures in question, while the errors for R-precision are
consistently lower than for precision-at-cutoffs 10 and 100.
Table 1 shows the actual values of the RMS errors for all
measures over all TRECs. In our experiments, MAE and
RMS errors follow a very similar pattern, and we therefore
omit MAE results due to space considerations. From this
table, it can be seen that average precision has consistently
lower RMS errors when compared to the other measures.
The penultimate column of the table shows the average RMS
errors per measure averaged over all TRECs. On average,
R-precision has the second lowest RMS error after average
precision, and precision-at-cutoff 30 is the third best measure
in terms of RMS error. The last column of the table
shows the percent increase in the average RMS error of a
measure when compared to the RMS error of average precision
. As can be seen, the average RMS errors for the other
measures are substantially greater than the average RMS
error for average precision.
We now consider a second method for evaluating how informative
a measure is. A highly informative measure should
properly reduce one's uncertainty about the distribution of
relevant and non-relevant documents in a list; thus, in our
maximum entropy formulation, the probability-at-rank distribution
should closely correspond to the pattern of relevant
and non-relevant documents present in the list. One
should then be able to accurately predict the values of other
measures from this probability-at-rank distribution.
Given a probability-at-rank distribution $p_1, p_2, \ldots, p_N$, we
can predict average precision, R-precision and precision-at-cutoff
k values as follows:
$$ap = \frac{1}{R} \sum_{i=1}^{N} \frac{p_i}{i} \left(1 + \sum_{j=1}^{i-1} p_j\right)$$
$$rp = \frac{1}{R} \sum_{i=1}^{R} p_i$$
$$PC(k) = \frac{1}{k} \sum_{i=1}^{k} p_i$$
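The last two predictions are simple prefix averages of the probability-at-rank vector; a tiny sketch with illustrative names (the average precision prediction is the same expression as Equation (1)):

    import numpy as np

    def predicted_r_precision(p, R):
        return float(np.sum(p[:R]) / R)   # rp = (1/R) * sum_{i<=R} p_i

    def predicted_precision_at_cutoff(p, k):
        return float(np.sum(p[:k]) / k)   # PC(k) = (1/k) * sum_{i<=k} p_i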
The plots in the top row of Figures 8 and 9 show how average
precision is actually correlated with R-precision, precision-at
-cutoff 10, and precision-at-cutoff 100 for TRECs 6 and 8,
respectively. Each point in the plot corresponds to a system
and the values of the measures are averaged over all
queries. Using these plots as a baseline for comparison, the
plots in the bottom row of the figures show the correlation
between the actual measures and the measures predicted
using the average precision maximum entropy probability-at
-rank distribution. Consider predicting precision-at-cutoff
10 values using the average precision maximum entropy distributions
in TREC 6. Without applying the maximum entropy
method, Figure 8 shows that the two measures are
correlated with a Kendall's $\tau$ value of 0.671. However, the
precision-at-cutoff 10 values inferred from the average precision
maximum entropy distribution have a Kendall's $\tau$
value of 0.871 when compared to actual precisions-at-cutoff
10. Hence, the predicted precision-at-cutoff 10 and actual
precision-at-cutoff 10 values are much more correlated than
the actual average precision and actual precision-at-cutoff 10
values. Using a similar approach for predicting R-precision
and precision-at-cutoff 100, it can be seen in Figures 8 and 9
that the measured values predicted by using average precision
maximum entropy distributions are highly correlated
with actual measured values.
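The correlations in these comparisons are Kendall's τ over per-system values; a hedged sketch of how such numbers could be computed with scipy (the inputs are assumed to be per-system values already averaged over queries):

    from scipy.stats import kendalltau

    def correlation_report(actual_ap, actual_measure, inferred_measure):
        # Per-system values, each already averaged over all queries of a TREC.
        tau_baseline, _ = kendalltau(actual_ap, actual_measure)        # e.g., AP vs actual PC-10
        tau_inferred, _ = kendalltau(actual_measure, inferred_measure) # actual vs MEM-inferred PC-10
        return tau_baseline, tau_inferred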
We conducted similar experiments using the maximum
entropy distributions corresponding to other measures, but
since these measures are less informative, we obtained much
smaller increases (and sometimes even decreases) in inferred
correlations. (These results are omitted due to space considerations.)
Table 2 summarizes the correlation improvements
possible using the maximum entropy distribution corresponding
to average precision. The row labeled act gives
the actual Kendall's $\tau$ correlation between average precision
and the measure in the corresponding column. The row
labeled inf gives the Kendall's $\tau$ correlation between the
measure inferred from the maximum entropy distribution
corresponding to average precision and the measure in the
corresponding column. The row labeled %Inc gives the percent
increase in correlation due to maximum entropy modeling.
As can be seen, maximum entropy modeling yields
great improvements in the predictions of precision-at-cutoff
values. The improvements in predicting R-precision are noticeably
smaller, though this is largely due to the fact that
average precision and R-precision are quite correlated to begin
with.
Figure 6: Inferred precision-recall curves and actual precision-recall curve for three runs in TREC8 (fub99a on query 435, AP = 0.1433; MITSLStd on query 404, AP = 0.2305; pir9At0 on query 446, AP = 0.4754), showing the actual curve and the maximum entropy curves for average precision, R-precision, and precision-at-cutoff 10.
Figure 7: MAE and RMS errors for inferred precision-recall curves (average precision, R-precision, precision-at-cutoff 10, and precision-at-cutoff 100) over TRECs 3, 5, 6, 7, 8, and 9.
         TREC3   TREC5   TREC6   TREC7   TREC8   TREC9   AVERAGE  %INC
AP       0.1185  0.1220  0.1191  0.1299  0.1390  0.1505  0.1298   -
RP       0.1767  0.1711  0.1877  0.2016  0.1878  0.1630  0.1813   39.7
PC-5     0.2724  0.2242  0.2451  0.2639  0.2651  0.2029  0.2456   89.2
PC-10    0.2474  0.2029  0.2183  0.2321  0.2318  0.1851  0.2196   69.1
PC-15    0.2320  0.1890  0.2063  0.2132  0.2137  0.1747  0.2048   57.8
PC-20    0.2210  0.1806  0.2005  0.2020  0.2068  0.1701  0.1968   51.6
PC-30    0.2051  0.1711  0.1950  0.1946  0.2032  0.1694  0.1897   46.1
PC-100   0.1787  0.1777  0.2084  0.2239  0.2222  0.1849  0.1993   53.5
PC-200   0.1976  0.2053  0.2435  0.2576  0.2548  0.2057  0.2274   75.2
PC-500   0.2641  0.2488  0.2884  0.3042  0.3027  0.2400  0.2747   111.6
PC-1000  0.3164  0.2763  0.3134  0.3313  0.3323  0.2608  0.3051   135.0

Table 1: RMS error values for each TREC.
        TREC3                   TREC5                   TREC6
        RP     PC-10  PC-100    RP     PC-10  PC-100    RP     PC-10  PC-100
act     0.921  0.815  0.833     0.939  0.762  0.868     0.913  0.671  0.807
inf     0.941  0.863  0.954     0.948  0.870  0.941     0.927  0.871  0.955
%Inc    2.2    5.9    14.5      1.0    14.2   8.4       1.5    29.8   18.3

        TREC7                   TREC8                   TREC9
        RP     PC-10  PC-100    RP     PC-10  PC-100    RP     PC-10  PC-100
act     0.917  0.745  0.891     0.925  0.818  0.873     0.903  0.622  0.836
inf     0.934  0.877  0.926     0.932  0.859  0.944     0.908  0.757  0.881
%Inc    1.9    17.7   3.9       0.8    5.0    8.1       0.6    21.7   5.4

Table 2: Kendall's $\tau$ correlations and percent improvements for all TRECs.
Figure 8: Correlation improvements, TREC6. Top row: actual RP, PC-10, and PC-100 versus actual AP (Kendall's $\tau$ = 0.913, 0.671, and 0.807, respectively). Bottom row: actual versus inferred RP, PC-10, and PC-100 (Kendall's $\tau$ = 0.927, 0.871, and 0.955).

Figure 9: Correlation improvements, TREC8. Top row: actual RP, PC-10, and PC-100 versus actual AP (Kendall's $\tau$ = 0.925, 0.818, and 0.873, respectively). Bottom row: actual versus inferred RP, PC-10, and PC-100 (Kendall's $\tau$ = 0.932, 0.859, and 0.944).
CONCLUSIONS AND FUTURE WORK
We have described a methodology for analyzing measures
of retrieval performance based on the maximum entropy
method, and we have demonstrated that the maximum entropy
models corresponding to "good" measures of overall
performance such as average precision accurately reflect underlying
retrieval performance (as measured by precision-recall
curves) and can be used to accurately predict the values
of other measures of performance, well beyond the levels
dictated by simple correlations.
The maximum entropy method can be used to analyze
other measures of retrieval performance, and we are presently
conducting such studies. More interestingly, the maximum
entropy method could perhaps be used to help develop and
gain insight into potential new measures of retrieval performance
. Finally, the predictive quality of maximum entropy
models corresponding to average precision suggests that if
one were to estimate some measure of performance using an
incomplete judgment set, that measure should be average
precision--from the maximum entropy model corresponding
to that measure alone, one could accurately infer other
measures of performance.
Note that the concept of a "good" measure depends on
the purpose of evaluation. In this paper, we evaluate measures
based on how much information they provide about
the overall performance of a system (a system-oriented evaluation
). However, in different contexts, different measures
may be more valuable and useful, such as precision-at-cutoff
10 in web search (a user-oriented evaluation). R-precision
and average precision are system-oriented measures, whereas
precision-at-cutoff k is typically a user-oriented measure.
Another important conclusion of our work is that one can accurately
infer user-oriented measures from system-oriented
measures, but the opposite is not true.
Apart from evaluating the information captured by a single
measure, we could use the MEM to evaluate the information
contained in combinations of measures. How much does
knowing the value of precision-at-cutoff 10 increase one's
knowledge of a system's performance beyond simply knowing
the system's average precision? Which is more informative
: knowing R-precision and precision-at-cutoff 30, or
knowing average precision and precision-at-cutoff 100? Such
questions can be answered, in principle, using the MEM.
Adding the values of one or more measures simply adds one
or more constraints to the maximum entropy model, and
one can then assess the informativeness of the combination.
Note that TREC reports many different measures. Using
the MEM, one might reasonably be able to conclude which
are the most informative combinations of measures.
REFERENCES
[1] A. L. Berger, V. D. Pietra, and S. D. Pietra. A maximum entropy approach to natural language processing. Comput. Linguist., 22:39-71, 1996.
[2] C. Buckley and E. Voorhees. Evaluating evaluation measure stability. In SIGIR '00: Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 33-40. ACM Press, 2000.
[3] W. S. Cooper. On selecting a measure of retrieval effectiveness, part I. In Readings in information retrieval, pages 191-204. Morgan Kaufmann Publishers Inc., 1997.
[4] T. M. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.
[5] B. Dervin and M. S. Nilan. Information needs and uses. In Annual Review of Information Science and Technology, volume 21, pages 3-33, 1986.
[6] W. R. Greiff and J. Ponte. The maximum entropy approach and probabilistic IR models. ACM Trans. Inf. Syst., 18(3):246-287, 2000.
[7] E. Jaynes. On the rationale of maximum entropy methods. In Proc. IEEE, volume 70, pages 939-952, 1982.
[8] E. T. Jaynes. Information theory and statistical mechanics: Part I. Physical Review 106, pages 620-630, 1957a.
[9] E. T. Jaynes. Information theory and statistical mechanics: Part II. Physical Review 108, page 171, 1957b.
[10] Y. Kagolovsky and J. R. Moehr. Current status of the evaluation of information retrieval. J. Med. Syst., 27(5):409-424, 2003.
[11] P. B. Kantor and J. Lee. The maximum entropy principle in information retrieval. In SIGIR '86: Proceedings of the 9th annual international ACM SIGIR conference on Research and development in information retrieval, pages 269-274. ACM Press, 1986.
[12] D. D. Lewis. Evaluating and optimizing autonomous text classification systems. In SIGIR '95: Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval, pages 246-254. ACM Press, 1995.
[13] R. M. Losee. When information retrieval measures agree about the relative quality of document rankings. J. Am. Soc. Inf. Sci., 51(9):834-840, 2000.
[14] K. Nigam, J. Lafferty, and A. McCallum. Using maximum entropy for text classification. In IJCAI-99 Workshop on Machine Learning for Information Filtering, pages 61-67, 1999.
[15] D. Pavlov, A. Popescul, D. M. Pennock, and L. H. Ungar. Mixtures of conditional maximum entropy models. In T. Fawcett and N. Mishra, editors, ICML, pages 584-591. AAAI Press, 2003.
[16] S. J. Phillips, M. Dudik, and R. E. Schapire. A maximum entropy approach to species distribution modeling. In ICML '04: Twenty-first international conference on Machine learning, New York, NY, USA, 2004. ACM Press.
[17] V. Raghavan, P. Bollmann, and G. S. Jung. A critical investigation of recall and precision as measures of retrieval system performance. ACM Trans. Inf. Syst., 7(3):205-229, 1989.
[18] A. Ratnaparkhi and M. P. Marcus. Maximum entropy models for natural language ambiguity resolution, 1998.
[19] T. Saracevic. Evaluation of evaluation in information retrieval. In SIGIR '95: Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval, pages 138-146. ACM Press, 1995.
[20] C. E. Shannon. A mathematical theory of communication. The Bell System Technical Journal 27, pages 379-423 & 623-656, 1948.
[21] N. Wu. The Maximum Entropy Method. Springer, New York, 1997.
| Average Precision;Evaluation;Maximum Entropy |
19 | A Similarity Measure for Motion Stream Segmentation and Recognition | Recognition of motion streams such as data streams generated by different sign languages or various captured human body motions requires a high performance similarity measure . The motion streams have multiple attributes, and motion patterns in the streams can have different lengths from those of isolated motion patterns and different attributes can have different temporal shifts and variations. To address these issues, this paper proposes a similarity measure based on singular value decomposition (SVD) of motion matrices . Eigenvector differences weighed by the corresponding eigenvalues are considered for the proposed similarity measure . Experiments with general hand gestures and human motion streams show that the proposed similarity measure gives good performance for recognizing motion patterns in the motion streams in real time. | INTRODUCTION
Motion streams can be generated by continuously performed
sign language words [14] or captured human body
motions such as various dances. Captured human motions
can be applied to the movie and computer game industries
by reconstructing various motions from video sequences [10]
or images [15] or from motions captured by motion capture
systems [4]. Recognizing motion patterns in the streams
with unsupervised methods requires no training process, and
is very convenient when new motions are expected to be
added to the known pattern pools. A similarity measure
with good performance is thus necessary for segmenting and
recognizing the motion streams. Such a similarity measure
needs to address some new challenges posed by real world
Work supported partially by the National Science Foundation
under Grant No. 0237954 for the project CAREER:
Animation Databases.
motion streams: first, the motion patterns have dozens of attributes
, and similar patterns can have different lengths due
to different motion durations; second, different attributes of
similar motions have different variations and different temporal
shifts due to motion variations; and finally, motion
streams are continuous, and there are no obvious "pauses"
between neighboring motions in a stream. A good similarity
measure not only needs to capture the similarity of complete
motion patterns, but also needs to capture the differences
between complete motion patterns and incomplete motion
patterns or sub-patterns in order to segment a stream for
motion recognition.
As the main contribution of this paper, we propose a similarity
measure to address the above issues. The proposed
similarity measure is defined based on singular value decomposition
of the motion matrices. The first few eigenvectors
are compared for capturing the similarity of two matrices,
and the inner products of the eigenvectors are given different
weights for their different contributions. We propose to
use only the eigenvalues corresponding to the involved eigenvectors
of the two motion matrices as weights. This simple
and intuitive weighing strategy gives the same importance to
eigenvalues of the two matrices. We also show that the 95%
variance rule for choosing the number of eigenvectors [13] is
not sufficient for recognizing both isolated patterns and motion
streams. Our experiments demonstrate that at least the
first 6 eigenvectors need to be considered for motion streams
of either 22 or 54 attributes, and the first 6 eigenvalues
account for more than 99.5% of the total variance in
the motion matrices.
RELATED WORK
Multi-attribute pattern similarity search, especially in continuous
motion streams, has been widely studied for sign
language recognition and for motion synthesis in computer
animation. The recognition methods usually include template
matching by distance measures and hidden Markov
models (HMM).
Template matching by using similarity/distance measures
has been employed for multi-attribute pattern recognition.
Joint angles are extracted in [11] as features to represent different
human body static poses for the Mahalanobis distance
measure of two joint angle features. Similarly, momentum,
kinetic energy and force are constructed in [2, 5] as activity
measure and prediction of gesture boundaries for various
segments of the human body, and the Mahalanobis distance
function of two composite features are solved by dynamic
programming.
Similarity measures are defined for multi-attribute data
in [6, 12, 16] based on principal component analysis (PCA).
Inner products or angular differences of principal components
(PCs) are considered for similarity measure definitions
, with different weighted strategies for different PCs.
Equal weights are considered for different combinations of
PCs in [6], giving different PCs equal contributions to the
similarity measure. The similarity measure in [12] takes the
minimum of two weighted sums of PC inner products, and
the two sums are respectively weighted by different weights.
A global weight vector is obtained by taking into account all
available isolated motion patterns in [16], and this weight
vector is used for specifying different contributions from different
PC inner products to the similarity measure Eros.
The dominating first PC and a normalized eigenvalue vector
are considered in [7, 8] for pattern recognition. In contrast,
this paper proposes to consider the first few PCs, and the
angular differences or inner products of different PCs are
weighted by different weights which depends on the data
variances along the corresponding PCs.
The HMM technique has been widely used for sign language
recognition, and different recognition rates have been
reported for different sign languages and different feature selection
approaches. Starner et al. [14] achieved 92% and 98%
word accuracy respectively for two systems, the first of the
systems used a camera mounted on a desk and the second
one used a camera in a user's cap for extracting features
as the input of HMM. Similarly Liang and Ouhyoung [9]
used HMM for postures, orientations and motion primitives
as features extracted from continuous Taiwan sign language
streams and an average 80.4% recognition rate was achieved.
In contrast, the approach proposed in this paper is an unsupervised
approach, and no training, as required for HMM
recognizers, is needed.
SIMILARITY MEASURE FOR MOTION STREAM RECOGNITION
The joint positional coordinates or joint angular values of
a subject in motion can be represented by a matrix: the
columns or attributes of the matrix are for different joints,
and the rows or frames of the matrix are for different time
instants. Similarity of two motions is the similarity of the
resulting motion matrices, which have the same number of
attributes or columns, and yet can have different number
of rows due to different motion durations. To capture the
similarity of two matrices of different lengths, we propose
to apply singular value decomposition (SVD) to the motion
matrices in order to capture the similarity of the matrix
geometric structures. Hence we briefly present SVD and its
associated properties below before proposing the similarity
measure based on SVD in this section.
3.1 Singular Value Decomposition
The geometric structure of a matrix can be revealed by
the SVD of the matrix. As shown in [3], any real $m \times n$
matrix A can be decomposed into $A = U \Sigma V^T$, where
$U = [u_1, u_2, \ldots, u_m] \in \mathbb{R}^{m \times m}$ and $V = [v_1, v_2, \ldots, v_n] \in \mathbb{R}^{n \times n}$
are two orthogonal matrices, and $\Sigma$ is a diagonal matrix with
diagonal entries being the singular values of A:
$\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_{\min(m,n)} \geq 0$. Column vectors $u_i$ and $v_i$
are the $i$th left and right singular vectors of A, respectively.
It can be shown that the right singular vectors of the symmetric
$n \times n$ matrix $M = A^T A$ are identical to the corresponding
right singular vectors of A, referred to as eigenvectors
of M. The singular values of M, or eigenvalues of M,
are squares of the corresponding singular values of A. The
eigenvector with the largest eigenvalue gives the first principal
component. The eigenvector with the second largest
eigenvalue is the second principal component and so on.
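A brief numpy sketch (an illustration, not the authors' code) of this relationship, returning the leading eigenvalues and eigenvectors of M = A^T A for a motion matrix A:

    import numpy as np

    def motion_eigens(A, k=6):
        # A: m x n motion matrix (frames x joint attributes).
        M = A.T @ A                           # n x n, cheaper to decompose than A when m >> n
        eigvals, eigvecs = np.linalg.eigh(M)  # returned in ascending order
        order = np.argsort(eigvals)[::-1]     # sort descending
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        return eigvals[:k], eigvecs[:, :k]    # top-k eigenvalues / principal components

Because the eigenvalues of M are the squared singular values of A and its eigenvectors are the right singular vectors of A, this yields the principal components needed below at the cheaper O(n^3) cost mentioned in the next subsection.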
3.2 Similarity Measure
Since SVD exposes the geometric structure of a matrix, it
can be used for capturing the similarity of two matrices. We
can compute the SVD of $M = A^T A$ instead of computing
the SVD of A to save computational time. The reasons are
that the eigenvectors of M are identical to the corresponding
right singular vectors of A, the eigenvalues of M are the
squares of the corresponding singular values of A, and SVD
takes $O(n^3)$ time for the $n \times n$ matrix M and takes $O(mn^2)$ time
with a large constant for the $m \times n$ matrix A, and usually $m > n$.
Ideally, if two motions are similar, their corresponding
eigenvectors should be parallel to each other, and their corresponding
eigenvalues should also be proportional to each
other. This is because the eigenvectors are the corresponding
principal components, and the eigenvalues reflect the
variances of the matrix data along the corresponding principal
components. But due to motion variations, all corresponding
eigenvectors cannot be parallel as shown in Figure
1. The parallelness or angular differences of two eigenvectors
u and v can be described by the absolute value of
their inner products: | cos | = |u v|/(|u||v|) = |u v|, where
|u| = |v| = 1. We consider the absolute value of the inner
products because eigenvectors can have different signs
as shown in [8].
Since eigenvalues are numerically related to the variances
of the matrix data along the associated eigenvectors, the importance
of the eigenvector parallelness can be described by
the corresponding eigenvalues. Hence, eigenvalues are to be
used to give different weights to different eigenvector pairs.
Figure 2 shows that the first eigenvalues are the dominating
components of all the eigenvalues, and other eigenvalues
become smaller and smaller and approach zero. As the
eigenvalues are close to zero, their corresponding eigenvectors
can be very different even if two matrices are similar.
Hence not all the eigenvectors need to be incorporated into
the similarity measure.
Since two matrices have two eigenvalues for the corresponding
eigenvector pair, these two eigenvalues should have
equal contributions or weights to the eigenvector parallelness
. In addition, the similarity measure of two matrices
should be independent to other matrices, hence only eigenvectors
and eigenvalues of the two matrices should be considered
.
Based on the above discussions, we propose the following
similarity measure for two matrices Q and P :
$$\Psi(Q, P) = \frac{1}{2} \sum_{i=1}^{k} \left( \frac{\lambda_i}{\sum_{i=1}^{n} \lambda_i} + \frac{\mu_i}{\sum_{i=1}^{n} \mu_i} \right) |u_i \cdot v_i|$$
where $\lambda_i$ and $\mu_i$ are the $i$th eigenvalues corresponding to the
$i$th eigenvectors $u_i$ and $v_i$ of square matrices of Q and P,
respectively, and $1 < k < n$. Integer k determines how many
eigenvectors are considered and it depends on the number
of attributes n of motion matrices. Experiments with hand
gesture motions (n = 22) and human body motions (n = 54)
in Section 4 show that k = 6 is large enough without
loss of pattern recognition accuracy in streams. We refer to
this non-metric similarity measure as k Weighted Angular
Similarity (kWAS), which captures the angular similarities
of the first k corresponding eigenvector pairs weighted by
the corresponding eigenvalues.

Figure 1: Eigenvectors of similar patterns. The first
eigenvectors are similar to each other, while other
eigenvectors, such as the second vectors shown in
the bottom, can be quite different.
It can be easily verified that the value of kWAS ranges over
[0,1]. When all corresponding eigenvectors are normal to
each other, the similarity measure will be zero, and when two
matrices are identical, the similarity measure approaches the
maximum value one if k approaches n.
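A compact sketch of kWAS (illustrative, not the authors' implementation; the lambda/mu names follow the reconstruction of the formula above):

    import numpy as np

    def kwas(Q, P, k=6):
        # Similarity of two motion matrices with the same number of attributes n.
        def eig_desc(A):
            vals, vecs = np.linalg.eigh(A.T @ A)
            order = np.argsort(vals)[::-1]
            return vals[order], vecs[:, order]

        lam, U = eig_desc(Q)   # eigenvalues/eigenvectors of Q^T Q
        mu, V = eig_desc(P)    # eigenvalues/eigenvectors of P^T P
        weights = lam[:k] / lam.sum() + mu[:k] / mu.sum()
        cosines = np.abs(np.sum(U[:, :k] * V[:, :k], axis=0))  # |u_i . v_i| for each pair
        return 0.5 * float(np.sum(weights * cosines))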
3.3 Stream Segmentation Algorithm
In order to recognize motion streams, we assume one motion
in a stream has a minimum length l and a maximum
length L. The following steps can be applied to incrementally
segment a stream for motion recognition (a sketch of this loop appears after the steps):
1. SVD is applied to all isolated motion patterns P to
obtain their eigenvectors and eigenvalues. Let $\delta$ be
the incremented stream length for segmentation, and
let $\ell$ be the current segmentation location. Initially $\ell = l$.
2. Starting from the beginning of the stream or the end of
the previously recognized motion, segment the stream
at location $\ell$. Compute the eigenvectors and eigenvalues
of the motion segment Q.
3. Compute kWAS between Q and all motion patterns
P. Update $\Psi_{max}$ to be the highest similarity seen since the
previous motion's recognition.
4. If $\ell + \delta < L$, update $\ell = \ell + \delta$ and go to step 2. Otherwise,
the segment corresponding to $\Psi_{max}$ is recognized
to be the motion pattern which gives the highest similarity
$\Psi_{max}$; update $\ell = l$ starting from the end of the
last recognized motion pattern and go to step 2.
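A hedged sketch of this loop (added here; kwas is the illustrative helper above, patterns is an assumed dictionary of isolated motion matrices, and the increment is passed as delta):

    def segment_stream(stream, patterns, l, L, delta, k=6):
        # stream: T x n matrix; patterns: dict mapping pattern name -> isolated motion matrix.
        start, results = 0, []
        while start + l <= len(stream):
            best_sim, best_name, best_end = -1.0, None, None
            end = start + l
            while end <= min(start + L, len(stream)):     # steps 2-4: grow the candidate segment
                Q = stream[start:end]
                for name, P in patterns.items():          # step 3: compare with all patterns
                    sim = kwas(Q, P, k)
                    if sim > best_sim:
                        best_sim, best_name, best_end = sim, name, end
                end += delta
            results.append((best_name, start, best_end, best_sim))
            start = best_end                              # restart after the recognized motion
        return results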
Figure 2: Accumulated eigenvalue percentages in
total eigenvalues for CyberGlove data and captured
human body motion data. There are 22 eigenvalues
for the CyberGlove data and 54 eigenvalues for the
captured motion data. The sum of the first 2 eigenvalues
is more than 95% of the corresponding total
eigenvalues, and the sum of the first 6 eigenvalues is
almost 100% of the total eigenvalues.
PERFORMANCE EVALUATION
This section evaluates experimentally the performances
of the similarity measure kWAS proposed in this paper. It
has been shown in [16] that Eros [16] outperforms other
similarity measures mentioned in Section 2 except MAS [8].
Hence in this section, we compare the performances of the
proposed kWAS with Eros and MAS for recognizing similar
isolated motion patterns and for segmenting and recognizing
motion streams from hand gesture capturing CyberGlove
and human body motion capture system.
4.1 Data Generation
A similarity measure should be able to be used not only
for recognizing isolated patterns with high accuracy, but also
for recognizing patterns in continuous motions or motion
streams. Recognizing motion streams is more challenging
than recognizing isolated patterns. This is because many
very similar motion segments or sub-patterns needs to be
compared in order to find appropriate segmentation locations
, and a similarity measure should capture the difference
between a complete motion or pattern and its sub-patterns.
Hence, both isolated motion patterns and motion streams
were generated for evaluating the performance of kWAS.
Two data sources are considered for data generation: a CyberGlove
for capturing hand gestures and a Vicon motion
capture system for capturing human body motions.
4.1.1 CyberGlove Data
A CyberGlove is a fully instrumented data glove that provides
22 sensors for measuring hand joint angular values to
capture motions of a hand, such as American Sign Language
(ASL) words for hearing impaired. The data for a hand gesture
contain 22 angular values for each time instant/frame,
one value for a joint of one degree of freedom. The motion
data are extracted at around 120 frames per second.
Data matrices thus have 22 attributes for the CyberGlove
motions.
One hundred and ten different isolated motions were generated
as motion patterns, and each motion was repeated
three times, resulting in 330 isolated hand gesture motions.
Some motions have semantic meanings. For example,
the motion for BUS as shown in Table 1 is for the ASL sign
"bus". Yet for segmentation and recognition, we only require
that each individual motion be different from others,
and thus some motions are general motions, and do not have
any particular semantic meanings, such as the THUMBUP
motion in Table 1.
The following 18 motions shown in Table 1 were used to
generate continuous motions or streams. Twenty four different
motion streams were generated for segmentation and
recognition purposes. There are 5 to 10 motions in a stream
and 150 motions in total in 24 streams, with 6.25 motions in
a stream on average. It should be noted that variable-length
transitional noises occur between successive motions in the
generated streams.
Table 1: Individual motions used for streams
35 60 70 80 90 BUS GOODBYE
HALF IDIOM JAR JUICE KENNEL KNEE
MILK TV SCISSOR SPREAD THUMBUP
4.1.2 Motion Capture Data
The motion capture data come from various motions captured
collectively by using 16 Vicon cameras and the Vicon
iQ Workstation software. A dancer wears a suit of non-reflective
material and 44 markers are attached to the body
suit. After system calibration and subject calibration, global
coordinates and rotation angles of 19 joints/segments can
be obtained at about 120 frames per second for any motion
. Similarity of patterns with global 3D positional data
can be disguised by different locations, orientations or different
paths of motion execution as illustrated in Figure 3(a).
Since two patterns are similar to each other because of similar
relative positions of corresponding body segments at
corresponding times, and the relative positions of different
segments are independent of locations or orientations of the
body, we can transform the global position data into local
position data as follows.
Let X_p, Y_p, Z_p be the global coordinates of one point on the pelvis, the selected origin of the "moving" local coordinate system, and let α, β, γ be the rotation angles of the pelvis segment relative to the global coordinate system axes, respectively. The translation matrix T is as follows:

T = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -X_p & -Y_p & -Z_p & 1 \end{pmatrix}
The rotation matrix is R = R_x R_y R_z, where

R_x = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha & 0 \\ 0 & \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

R_y = \begin{pmatrix} \cos\beta & 0 & \sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
[Figure 3 plots: joint coordinates (mm) over motion capture frames, (a) global and (b) transformed]
Figure 3: 3D motion capture data for similar motions
executed at different locations and in different orientations
: (a) before transformation; (b) after transformation
.
R_z = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 & 0 \\ \sin\gamma & \cos\gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
Let X, Y, Z be the global coordinates of one point on any
segment, and x, y, z be the corresponding transformed local
coordinates. x, y and z can be computed as follows:
[x y z 1] = [X Y Z 1] T R
The transformed data are positions of different segments
relative to a moving coordinate system with the origin at
some fixed point of the body, for example the pelvis. The
moving coordinate system is not necessarily aligned with
the global system, and it can rotate with the body. So data
transformation includes both translation and rotation, and
the transformed data would be translation and rotation invariant
as shown in Figure 3(b). The coordinates of the
origin (the pelvis) are not included, thus the transformed matrices
have 54 columns.
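The transformation is easy to transcribe; the following Python sketch (hypothetical function name local_coords) simply builds T and R = Rx Ry Rz as given above and applies them to one global point:

import numpy as np

def local_coords(point, pelvis, alpha, beta, gamma):
    # [x y z 1] = [X Y Z 1] T R  with R = Rx Ry Rz, as defined above.
    Xp, Yp, Zp = pelvis
    T = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [-Xp, -Yp, -Zp, 1]], dtype=float)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0, 0], [0, ca, -sa, 0], [0, sa, ca, 0], [0, 0, 0, 1]])
    Ry = np.array([[cb, 0, sb, 0], [0, 1, 0, 0], [-sb, 0, cb, 0], [0, 0, 0, 1]])
    Rz = np.array([[cg, -sg, 0, 0], [sg, cg, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    x, y, z, _ = np.array([point[0], point[1], point[2], 1.0]) @ T @ (Rx @ Ry @ Rz)
    return x, y, z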
Sixty two isolated motions including Taiqi, Indian dances,
and western dances were performed for generating motion
capture data, and each motion was repeated 5 times, yielding
310 isolated human motions. Every repeated motion has
a different location and duration, and can face a different
orientation. Twenty three motion streams were
generated for segmentation. There are 3 to 5 motions in
a stream, and 93 motions in total in 23 streams, with 4.0
motions in a stream on average.
4.2 Performance of kWAS for Capturing Similarities and Segmenting Streams
We first apply kWAS to isolated motion patterns to show
that the proposed similarity measure kWAS can capture the
similarities of isolated motion patterns. Then kWAS is applied
to motion streams for segmenting streams and recognizing
motion patterns in the streams. We experimented
with different k values in order to find out the smallest k
without loss of good performance.
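The precise kWAS formula is given in Section 3; purely for orientation, the sketch below follows the informal description used in this paper (absolute cosines of the first k corresponding eigenvector pairs, weighted by the averaged normalized eigenvalues of both matrices) and should be read as an assumption, not as the exact definition:

import numpy as np

def kwas(u_p, lam_p, u_q, lam_q, k):
    # Weights: averaged normalized eigenvalues of P and Q (equal importance to both).
    w = 0.5 * (lam_p[:k] / lam_p.sum() + lam_q[:k] / lam_q.sum())
    # Angular similarity of corresponding eigenvector pairs (columns of u_p, u_q).
    cosines = np.abs(np.sum(u_p[:, :k] * u_q[:, :k], axis=0))
    return float(np.dot(w, cosines))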
Figure 2 shows the accumulated eigenvalue percentages
averaged on 330 hand gestures and 310 human motions, respectively
. Although the first two eigenvalues account for
[Figure 4 plot: pattern recognition rate (%) vs. number of nearest neighbors, for kWAS (k = 22, 5, 3, 2), MAS and Eros]
Figure 4: Recognition rate of similar CyberGlove
motion patterns. When k is 3, kWAS can find the
most similar motions for about 99.7% of 330 motions
, and can find the second most similar motions
for 97.5% of them.
[Figure 5 plot: pattern recognition rate (%) vs. number of nearest neighbors, for kWAS (k = 54, 5, 4, 3), MAS and Eros]
Figure 5: Recognition rate of similar captured motion
patterns. When k is 5, by using kWAS, the most
similar motions of all 310 motions can be found, and
the second most similar motions of 99.8% of the 310
motions can also be found.
more than 95% of the respective sums of all eigenvalues,
considering only the first two eigenvectors for kWAS is not
sufficient as shown in Figure 4 and Figure 5. For CyberGlove
data with 22 attributes, kWAS with k = 3 gives the
same performance as kWAS with k = 22, and for motion
capture data with 54 attributes, kWAS with k = 5 gives the
same performance as kWAS with k = 54. Figure 4 and Figure
5 illustrate that kWAS can be used for finding similar
motion patterns and outperforms MAS and Eros for both
hand gesture and human body motion data.
The steps in Section 3.3 are used for segmenting streams
and recognizing motions in streams. The recognition accuracy
as defined in [14] is used for motion stream recognition.
The motion recognition accuracies are shown in Table 2. For
both CyberGlove motion and captured motion data, k = 6
is used for kWAS, which gives the same accuracy as k = 22
for CyberGlove data and k = 54 for motion capture data,
respectively.
Figure 6 shows the time taken for updating the candidate
segment, including updating the matrix, computing the
SVD of the updated matrix, and computing the similarities
of the segment and all motion patterns. The code implemented
in C++ was run on one 2.70 GHz Intel processor
of a GenuineIntel Linux box. There are 22 attributes for
the CyberGlove streams, and 54 attributes for the captured
[Figure 6 chart: time (milliseconds) for MAS, kWAS (k = 6) and Eros on CyberGlove streams and motion capture streams]
Figure 6: Computation time for stream segment update
and similarity computation.
Table 2: Stream Pattern Recognition Accuracy (%)
Similarity Measure    CyberGlove Streams    Motion Capture Streams
Eros                  68.7                  78.5
MAS                   93.3                  78.5
kWAS (k=6)            94.0                  94.6
motion streams. Hence updating captured motion segments
takes longer than updating CyberGlove motion segments as
shown in Figure 6. The time required by kWAS is close to
the time required by MAS, and is less than half of the time
taken by using Eros.
4.3 Discussions
kWAS captures the similarity of the square matrices of two
matrices P and Q, yet the temporal order of pattern execution
is not revealed in the square matrices. As shown in [7],
two matrices with identical row vectors in different orders
have identical eigenvectors and identical eigenvalues. If
different temporal orders of pattern execution yield patterns
with different semantic meanings, we need to further consider
the temporal execution order, which is not reflected in
the eigenvectors and eigenvalues and has not been considered
previously in [6, 12, 16].
Since the first eigenvectors are close or parallel for similar
patterns, we can project pattern A onto its first eigenvector
u_1 by Au_1. Then similar patterns would have similar projections
(called projection vectors hereafter), showing similar
temporal execution orders while the projection variations
for each pattern can be maximized. The pattern projection
vectors can be compared by computing their dynamic time
warping (DTW) distances, for DTW can align sequences
of different lengths and can be solved easily by dynamic
programming [1]. Incorporating temporal order information
into the similarity measure can be done as for MAS in [7]
if motion temporal execution orders cause motion pattern
ambiguity to kWAS.
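One possible realization of this idea is sketched below (hypothetical helper names projection_vector and dtw_distance; the DTW recursion is the standard dynamic-programming formulation of [1]):

import numpy as np

def projection_vector(A):
    # Project pattern A (frames x attributes) onto its first eigenvector.
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    return A @ vt[0]                      # one projected value per frame

def dtw_distance(a, b):
    # Classic dynamic-programming DTW between two 1-D sequences.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]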
CONCLUSIONS
This paper has proposed a similarity measure kWAS for
motion stream segmentation and motion pattern recognition
. kWAS considers the first k eigenvectors and computes
their angular similarities/differences, and weighs the contributions
of different eigenvector pairs by their corresponding
eigenvalues. Eigenvalues from the two motion matrices are
given equal importance in the weights.
CyberGlove hand gesture streams and captured human body
motions such as Taiqi and dances show that kWAS can recognize
100% of the most similar isolated patterns and can recognize
94% of the motion patterns in continuous motion streams.
REFERENCES
[1] D. Berndt and J. Clifford. Using dynamic time warping to find patterns in time series. In AAAI-94 Workshop on Knowledge Discovery in Databases, pages 229-248, 1994.
[2] V. M. Dyaberi, H. Sundaram, J. James, and G. Qian. Phrase structure detection in dance. In Proceedings of the ACM Multimedia Conference 2004, pages 332-335, Oct. 2004.
[3] G. H. Golub and C. F. V. Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland, 1996.
[4] L. Ikemoto and D. A. Forsyth. Enriching a motion collection by transplanting limbs. In Proceedings of the 2004 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 99-108, 2004.
[5] K. Kahol, P. Tripathi, S. Panchanathan, and T. Rikakis. Gesture segmentation in complex motion sequences. In Proceedings of IEEE International Conference on Image Processing, pages II-105-108, Sept. 2003.
[6] W. Krzanowski. Between-groups comparison of principal components. J. Amer. Stat. Assoc., 74(367):703-707, 1979.
[7] C. Li, B. Prabhakaran, and S. Zheng. Similarity measure for multi-attribute data. In Proceedings of the 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Mar. 2005.
[8] C. Li, P. Zhai, S.-Q. Zheng, and B. Prabhakaran. Segmentation and recognition of multi-attribute motion sequences. In Proceedings of the ACM Multimedia Conference 2004, pages 836-843, Oct. 2004.
[9] R. H. Liang and M. Ouhyoung. A real-time continuous gesture recognition system for sign language. In Proceedings of the 3rd International Conference on Face and Gesture Recognition, pages 558-565, 1998.
[10] K. Pullen and C. Bregler. Motion capture assisted animation: texturing and synthesis. In SIGGRAPH, pages 501-508, 2002.
[11] G. Qian, F. Guo, T. Ingalls, L. Olson, J. James, and T. Rikakis. A gesture-driven multimodal interactive dance system. In Proceedings of IEEE International Conference on Multimedia and Expo, June 2004.
[12] C. Shahabi and D. Yan. Real-time pattern isolation and recognition over immersive sensor data streams. In Proceedings of the 9th International Conference on Multi-Media Modeling, pages 93-113, Jan. 2003.
[13] A. Singhal and D. E. Seborg. Clustering of multivariate time-series data. In Proceedings of the American Control Conference, pages 3931-3936, 2002.
[14] T. Starner, J. Weaver, and A. Pentland. Real-time American Sign Language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1371-1375, 1998.
[15] C. J. Taylor. Reconstruction of articulated objects from point correspondences in a single image. Computer Vision and Image Understanding, 80(3):349-363, 2000.
[16] K. Yang and C. Shahabi. A PCA-based similarity measure for multivariate time series. In Proceedings of the Second ACM International Workshop on Multimedia Databases, pages 65-74, Nov. 2004.
| motion stream;segmentation;data streams;eigenvector;singular value decomposition;gesture;recognition;eigenvalue;similarity measure;Pattern recognition |
190 | The Model, Formalizing Topic Maps | This paper presents a formalization for Topic Maps (TM). We first simplify TMRM, the current ISO standard proposal for a TM reference model and then characterize topic map instances. After defining a minimal merging operator for maps we propose a formal foundation for a TM query language. This path expression language allows us to navigate through given topic maps and to extract information. We also show how such a language can be the basis for a more industrial version of a query language and how it may serve as foundation for a constraint language to define TM-based ontologies. | Introduction
Topic Maps (TM (Pepper 1999)), a knowledge representation
technology alternative to RDF (O. Lassila
and K. Swick 1993), have seen some industrial
adoption since 2001. Concurrently, the TM community
is taking various efforts to define a more fundamental
, more formal model to capture the essence
of what Topic Maps are (Newcomb, Hunting, Algermissen
& Durusau 2003, Kipp 2003, Garshol 2004-07-22
, Bogachev n.d.). While the degree of formality and
the extent of TM machinery vary, all models tend
to abstract away from the sets of concepts defined in
(Pepper 2000) and use assertions (and topics) as their
primitives.
After giving an overview over the current state of
affairs, we start with an attempt to conceptually simplify
the TMRM (Newcomb et al. 2003) model. From
that, a mathematically more rigorous formalization
of TMs follows in section 4. Based on maps and elementary
map composition we define a path expression
language using a postfix notation. While low-level, it
forms the basis for querying and constraining topic
maps as we point out in section 6. The last section
closes with future research directions.
Related Work
Historically, Topic Maps, being a relatively new technology
, had some deficits in rigor in terms of a defining
model. This may be due to the fact that it was more
Paradoxically, the standardization efforts started
out with the syntax (XTM) with only little, informal
description of the individual constructs. TMDM (formerly
known as SAM) was supposed to fill this role
by precisely defining how XTM instances are to be
deserialized into a data structure. This is done by
mapping the syntax into an infoset model (comparable
to DOM) whereby UML diagrams help to illustrate
the intended structure as well as the constraints
put on it. While such an approach to model definition
has a certain appeal for (Java) developers, its given
complexity puts it well outside the reach of a more
mathematical formalization.
In parallel a fraction within the TM community argued
that the TM paradigm can be interpreted on a
much more fundamental level if one considers assertions
as the basic building blocks, abstracting from
the TAO-level which mainly sees topics with their
names, occurrences and involvements in associations.
This group has developed several generations of the
TMRM (Newcomb et al. 2003), the reference model.
The model therein is mainly based on graph theory
mixed with informal descriptions of constraints which
cover the resolution of subject identity.
Several attempts to suggest an alternative foundational
model (Garshol 2004-07-22, Bogachev n.d.) or
to formalize TMRM have been made. (Kipp 2003) is
successfully using a purely set-theoretic approach to
define topic map instances. As all TMRM concepts
have been faithfully included, this resulted in a significant
set of constraints to be used when reasoning
about map instances.
We see the contribution of this paper as threefold:
Firstly, we believe that TMRM can be reasonably
simplified without any loss of generality by the steps
outlined in section 3. This is under the assumption
that all questions of subject identity are handled outside
the model. Secondly, the assertion model seems
to be general enough to host conceptually not only
TMRM, but also serve as basis for TMDM.
As the TM community now moves to ontology definition
languages, retrieval and transformation languages
, we contend that the path language which is
based on the model can serve as a semantic foundation for these.
Conceptual Simplification
TMRM's main building blocks are properties which
are attached to topics and assertions which connect
topics in various ways.
3.1 Properties
For properties TMRM distinguishes between subject
identifying properties and other properties. The former
can be stand-alone or a combination of other
properties; they control--for a given application--under
which conditions two topics should be regarded
the same.
With the assumption that all identity inducing
constraints are best covered by a proper ontology language
, we drop this distinction. Also conferred properties
can be handled much more flexibly with an ontology
language, which allows us to let conferred and
builtin properties collapse.
We abstract further by regarding properties just
as a special form of binary assertions where the topic
plays a role object and the property forms the other
member of the assertion.
3.2 Assertions
A TMRM assertion stands for a statement between
subjects whereby these subjects play certain roles.
Such an assertion consists of the subject it is about
and a type. Additionally, the players are cast into
their respective roles. To be able to reify the fact
that a certain topic plays a certain role in an assertion
, this substatement is also represented by another
topic (casting).
We observe that any type information for an assertion
a can be represented by a second, dedicated
assertion b where a plays the instance and that type
plays the role class. A similar consideration applies
to casting topics: again, a second, dedicated assertion
can be used where the role, the assertion and the
player are playing appropriate roles.
Scoping--the restriction of an assertion to a certain
context--is clearly a statement about an assertion
, so we can represent scoping relations via a further
assertion, one which connects the original assertion
with the scope itself, again via some predefined
roles.
At the end of this process we only have to deal
with assertions containing role-player pairs. Assertions
have an identity which allows us to use them in
other assertions. Topics only exist as focal points and
have no explicit property except an identifier.
3.3 Reification
The term reification has a long tradition (Sowa 2000)
in the knowledge representation community. It has
changed its meaning over the years, but it is usually
used to describe how humans form concepts and
then connect them with the `real world'. To fully
capture the term formally, we would have to adopt
a philosophical approach, something which we prefer
to avoid for obvious reasons. The question, though,
is whether any formalization of TMs can completely
ignore reification.
Whenever a statement S is about another assertion
A then one of two things could be intended by
the author: either (a) S is a statement about the relationship
in the `real world' A is supposed to represent.
As an example consider that A is about an employment
of a person within an organisation and that we
want to qualify it such that "the employment only
started in year 2000". Alternatively (b), a statement
can be about the assertion within the map itself, such
as "this assertion was commented on by user X". In
the latter case we treat A as if it were in the `real
world' (inverting somehow the notion of reification
by pulling something abstract from a concept space
and making it `real').
Our--pragmatic--approach is that this distinction
can (and should) be indicated by the proper form of
identifiers. If a topic is supposed to reify a real world
concept, then its identifier should be a URI (a locator
or a name), in case that the `real world thing' has
one. If that thing is a topic in a topic map, then
the author must have a way to address the map as
well as the topic within it. If a direct reification is
not possible, then the topic's identifier will simply
not be a URI. Indirect identification can be achieved
via subject indicators attached to the topic or more
generally speaking by the context the topic is in.
For assertions we assume that they--as a whole--implicitly
reify the relationship they describe. If another
assertion makes a reference to an assertion then
using the assertion's identifier may thus automatically
cover case (a) above. Like with topics, case (b) can
be handled by using an identifier which addresses the
map and then the assertion within it.
How eventually maps as `real world' objects are
to be addressed is again a matter of how identifiers are
formed; but this is outside the scope of our model.
Formal Maps
In this section we first prepare the grounds by defining
identifiers, then we build members and assertions and
then finally maps. For presentation, the text here has
two layers, one for the formal part and an informal
one, shaded grey. The latter is to justify design issues
or present examples.
4.1 Identifiers
The set of identifiers, I, contains two sets of objects
: names and literals. Literals may be numbers
or quoted strings. The set of names, N, is an enumerable
collection of atomic objects. Atomic means
that objects have no other properties than being distinguishable
from each other.
In practical situations names may be strings such as
URIs. They also may be more complex like XLink
or even HyTime pointers. The model only uses the
property that they are distinct from each other.
The reason we chose literals to be numbers or strings
is simply one of convenience. First, these two basic
data types are the most frequently used, and secondly,
both have naturally defined an ordering a ≤ b on
which we can later base sorting.
One issue with selecting a particular set of primitive
data types is that of how to represent others, like composite
types as one would need for, say, spatial coordinates
. We see two approaches: One way is to model
the content explicitly with assertions themselves. The
other option can be used if the structure of the data
is not specifically relevant to a particular application,
but has to be kept in a map for archiving purposes.
In these situations data can be serialized into a string
and treated as such.
Further we assume that I also contains a small set
of predefined identifiers, id, instance, class, subclass,
superclass. By themselves, they are not special. We
only single them out to be able to define additional
semantics later.
4.2 Members and Assertions
As we are mainly interested in expressing associative
relations, we first define a member to be a pair ⟨r, p⟩ ∈ N × I, with r being the role and p the player of the
member. An assertion a is a finite (possibly empty)
set of members. The set of all assertions is denoted
by A.
Assertions always have an identity. It is a function
id(a) over the set of members of a, whereby we only
request that different member sets result in different
identities. Obviously, assertions are only equal if they
have identical members.
To access the components of an assertion a we define the set roles(a) = {r_1, ..., r_n} with r_i being the roles in the individual members of a, and the set players(a) = {p_1, ..., p_n} with p_i being the players in a.
Note that in assertions players are not grouped
around a role. If several players play one and the same
role, then individual members have to exist for every
such player. Also note that assertions do not have
a type component; it is up to a further assertion to
establish such a relationship whereby the predefined
identifiers instance and class can be used as roles.
The base model does not impose any restrictions on
players and roles. While not necessary for the formalism
itself, we might later want to put additional constraints on the form of assertions to allow only meaningful combinations. Examples of such meaningful constraints are "there may be only one player for a particular role" or "in one and the same assertion a particular identifier cannot be used as role and as player": ∀a ∈ A, roles(a) ∩ players(a) = ∅. Another useful constraint could avoid that the identifier for an assertion appears in that assertion itself: ∀a ∈ A, id(a) ∉ (roles(a) ∪ players(a)).
This assertion structure proves to be central to the
whole model. It is sufficiently flat as there is no distinction
between assertions and properties. The focus
on assertions alone also reduces topics to identifiers
. Still, the chosen structure seems to incorporate
enough of the TM paradigm, in that any number
of concepts can be bound together into an assertion
and topics--as TMRM mandates--can function
as the sole aggregation point for information.
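To make this structure concrete, the following Python fragment is one possible, purely illustrative encoding of members and assertions; the hash-based assertion_id is only a stand-in for the abstract identity function id(a):

# Members are (role, player) pairs of identifier strings; an assertion is a finite set of them.
def assertion(*members):
    # Build an assertion from (role, player) pairs.
    return frozenset(members)

def assertion_id(a):
    # Illustrative stand-in for id(a): a function of the member set only.
    return "id-" + format(hash(a) & 0xffffffff, "08x")

def roles(a):
    # roles(a): the roles occurring in a.
    return {r for (r, _p) in a}

def players(a):
    # players(a): the players occurring in a.
    return {p for (_r, p) in a}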
4.3 Maps
We now consider assertions to be atoms from which maps can be constructed. A map is a finite (possibly empty) set of assertions. The set of all maps is denoted by M.
To build bigger maps, we define the elementary composition, denoted by ⊕, of two maps m, m′ ∈ M as the set union m ⊕ m′ = m ∪ m′. We say that m is a submap of m′ if m ⊆ m′.
Note that we have no special merging operation; only
exactly identical assertions will be identified.
In
our setting special-purpose merging, such as TNC
(topic name constraint), is split into two phases: first
maps are combined using elementary composition and
then a second operator is applied to the composite
map. That operator will perform a--more or less
sophisticated--transformation where all the appropriate
merging is done.
As an example we consider a network which hosts
several servers, organized into clusters (Table 1). At
a particular point in time, servers may be "up" or
"down"
.
Accordingly, macy, lacy and stacy are the servers,
the first two being in clusterA, the other in
clusterB
. While lacy is down, clusterA is still functional
, not so clusterB as its only machine is down.
4.4 Primitive Navigation Operators
To navigate through maps and to extract information
out of them, we first need to define basic navigation
operations within a given map.
In our model we can navigate along roles. One way is to follow a role outwards in a given assertion a ∈ m. Given additionally a name r, we define the role-out operator a → r = {p | ⟨r, p⟩ ∈ a}. It returns all players of the given role in the assertion.
all players of a given role in an assertion.
Looking at a00 in the above example, the expression
a00
class
returns the set containing server only.
Another option to navigate is to follow a role inwards, seen from an assertion's point of view. Given a map m, a name r and an identifier p, we define the role-in operator p ←_m r = {a ∈ m | ⟨r, p⟩ ∈ a}. We omit the reference to m if it is clear from the context.
To find all assertions in which clusterA plays the role whole, we can write clusterA ← whole, which evaluates to {a02, a11}.
The role-in operator does not respect the type of assertions
. It simply finds all assertions where a particular
player plays the given role. However, for practical
reasons a refined version of the operator will be
defined in section 4.6.
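Both operators are direct transcriptions in the illustrative Python encoding sketched earlier (a map m is simply a set of assertions):

def role_out(a, r):
    # a -> r : all players of role r in assertion a.
    return {p for (role, p) in a if role == r}

def role_in(p, r, m):
    # p <- r (in map m): all assertions of m in which p plays role r.
    return {a for a in m if (r, p) in a}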
4.5 Subclassing and Instances
To describe (and query) topic maps, we need to express
relationships between concepts. While the variety
of such relations itself is huge, two special relationships
stand out as being fundamental: The subclass-superclass
relationship is used between classes to form
taxonomies (type systems). The instance-class relationship
is established between an object and the class
(or set) the object can be classified into.
Given a map m and names b, c ∈ N, we define the predicate subclasses_m(b, c) to be true if there exists an a ∈ m such that both conditions, a → subclass = {b} and a → superclass = {c}, hold. As the usual interpretation of subclassing is that it is transitive, we build the transitive closure subclasses_m^+ and the transitive, reflexive closure subclasses_m^*.
Another relationship is instance-of, abbreviated as is-a_m(b, c), which holds if there exists an a ∈ m such that a → instance = {b} and a → class = {c}.
Mostly we are interested in an instance-of relationship which includes the transitive version of subclassing above: is-a*_m(b, c) holds if there exists an a ∈ m such that for some name c′ we have a → instance = {b}, a → class = {c′} and subclasses_m^*(c′, c).
According to our cluster map the relations subclasses_m(server, machine), is-a_m(macy, server) and is-a*_m(macy, machine) are all true.
The difference between is-a_m(b, c) and is-a*_m(b, c) is that the former only reiterates the information which is already explicit in the map. When querying a map, though, queries should be built more robustly: if we ask for "all machines" in a map, then most likely one is also interested in instances of all (direct and indirect) subclasses of "machine".
Table 1: An example map about a computer network
a00 = { <instance, macy>,     <class, server> }
a01 = { <instance, a00>,      <class, isInstance> }
a02 = { <part, macy>,         <whole, clusterA> }
a03 = { <instance, a02>,      <class, isPartOf> }
a04 = { <object, macy>,       <status, "up"> }
a05 = { <instance, a04>,      <class, hasStatus> }
...
a10 = { <instance, lacy>,     <class, server> }
a11 = { <part, lacy>,         <whole, clusterA> }
a12 = { <object, lacy>,       <status, "down"> }
...
a20 = { <instance, stacy>,    <class, server> }
a21 = { <part, stacy>,        <whole, clusterB> }
a22 = { <object, stacy>,      <status, "down"> }
a30 = { <subclass, server>,   <superclass, machine> }
a40 = { <instance, clusterA>, <class, cluster> }
a41 = { <instance, clusterB>, <class, cluster> }
4.6 Typed Navigation
We can use the relation is-a*_m(b, c) to specialize the role-in navigation. Given a map m, names r and t, and an identifier p, the typed role-in operator additionally honors an assertion type:
p ←_m r [t] = {a ∈ p ←_m r | is-a*_m(id(a), t)}    (1)
The obvious difference to the original role-in navigation
is that we now only consider assertions of the
given type to be part of the resulting set.
The expression clusterA ←_m whole [hasStatus] is supposed to find all assertions of type hasStatus in which clusterA is the whole. Since there is no such assertion, the result is empty.
A further way to generalize the navigation is to allow as role also all subclasses:
a →_m r* = {p | ⟨r′, p⟩ ∈ a : subclasses_m^*(r′, r)}    (2)
p ←_m r* = {a ∈ m | ⟨r′, p⟩ ∈ a : subclasses_m^*(r′, r)}    (3)
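In the illustrative encoding, the typed operator is then a one-line filter over role-in, relying on is_a and assertion_id from the earlier sketches; applied to the cluster map, typed_role_in("clusterA", "whole", "hasStatus", m) indeed comes out empty, matching the example above:

def typed_role_in(p, r, t, m):
    # p <- r [t] : role-in restricted to assertions whose id is an instance of type t.
    return {a for a in role_in(p, r, m) if is_a(m, assertion_id(a), t)}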
Map Path Language
The topic map path language can be used to extract
information out of a given map.
The language will
be defined via postfix operators which are applied to
(sets of) assertions (or identifiers).
Before we can formally define the individual postfixes
and chains of postfixes (path expressions) we
have to characterize the results of applying postfixes
to a set of assertions, such as a map. This is done
with a simple algebra based on tuples.
5.1 Tuple Algebra
Our final result of applying a path expression will be
a bag of tuples. The advantage of tuples is that they
can hold composite results. Every tuple then represents
one possible result; all of them are organized
into a bag. Bags are like sets except that a particular
element may appear any number of times. This is
convenient if we later want to sort or count the tuples.
Otherwise all the usual set operations can be used on
bags.
Assertion tuples are elements from the cartesian product A^n with A being the set of assertions. Similarly, identifier tuples are elements from I^n. We call n the dimension of the tuple.
When we organize tuples t_1, ..., t_n into a bag, then we denote this as [t_1, ..., t_n].
A map m = {a_1, ..., a_n} can be represented as the tuple bag [⟨a_1⟩, ..., ⟨a_n⟩]. Conversely, we can also interpret a tuple bag as a map when the tuples it contains are single assertions.
If a bag contains other bags, then the structure can be flattened out:
[b_1, b_2, ..., b_n] = [b_ij | b_ij ∈ b_i (1 ≤ i ≤ n)]    (4)
During application of path expressions also tuples of bags may be created. Also these can be reduced by building tuples of all combinations of bag elements:
⟨b_1, b_2, ..., b_n⟩ = b_1 × b_2 × ... × b_n    (5)
Finally, if a tuple only contains a single component, then it is equivalent to that component:
⟨b⟩ = b    (6)
As we have covered all possible constellations
which can occur when evaluating path expressions,
we can always reduce every result to a bag of tuples.
We call this set B_I.
5.2 Postfixes and Path Expressions
Individual postfixes (as detailed below) can be combined to form chains. The set of path expressions P_M is defined as the smallest set satisfying the following conditions:
1. The projection postfix π_i is in P_M for any non-negative integer i.
2. Every identifier from I is in P_M.
3. The role-out and role-in postfixes → r and ← r for a name r are in P_M.
4. The positive predicate postfix [ p = q ] and the negative predicate postfix [ p != q ] are both in P_M for two path expressions p and q. As special cases we also include [ p ] and [ !p ].
5. For two path expressions p and q also the concatenation p · q is in P_M. If - from the context - it is clear that two path expressions are to be concatenated, we omit the infix.
6. For two path expressions p and q the alternation p ∥ q is in P_M.
The application of a path expression p to a map m is denoted by m p.
For this process, we first reinterpret the map as a tuple bag. Then each of the postfixes in p is applied to
it. Each such step results in a new bag which will be
flattened according to the tuple algebra above. The
final bag will be the overall result.
5.2.1 Projection and Identifiers
For both assertion and identifier tuples, we will use the projection postfix to extract a particular component j:
⟨u_1, ..., u_n⟩ π_j = [⟨u_j⟩]    (7)
Projection here plays a similar role as in query languages like SQL, except that we use an index for selection instead of names. We drop the index 1 in π_1 if it is applied to a tuple with only a single component, where then obviously it holds that ⟨u⟩ π = ⟨u⟩. Such a projection also serves as the empty postfix.
In case the path expression is simply an identifier i ∈ I, then for any u the result is always this identifier:
u i = [ i ]    (8)
5.2.2 Concatenation and Alternation
We define the concatenation · of path expressions p and q (given any u) as
u (p · q) = (u p) q    (9)
The syntactic structure of path expressions ensures
that u is always a structure for which such an evaluation
is defined.
The alternation of two path expressions p and q
is defined as the union of the result tuple bags of the
individual evaluations:
u (p ∥ q) = u p ∪ u q    (10)
5.2.3 Navigation Postfix
Next we define how role-out and role-in navigation
postfixes can be applied to an assertion tuple. We
simply apply the navigation to every assertion in the
tuple:
⟨a_1, ..., a_n⟩ → r = ⟨a_1 → r, ..., a_n → r⟩    (11)
⟨p_1, ..., p_n⟩ ← r = ⟨p_1 ← r, ..., p_n ← r⟩    (12)
Note that we have used the typed navigation from
section 4.6. While not absolutely necessary, it helps
to keep path expressions more concise. Note also
that the individual elements of the resulting tuples
are bags. Again, the transformation rules of the tuple
algebra have to be used to reduce this into a bag of
tuples.
5.2.4 Filtering Postfixes
From tuple bags we can filter out specific tuples using predicates. Given a tuple bag B = [t_1, ..., t_k] and two path expressions p and q, applying the positive predicate postfix [ p = q ] to B is defined as
B [ p = q ] = [t ∈ B | t p ∩ t q ≠ ∅]    (13)
If p and q are identical, then we can abbreviate
[ p = p ] with [ p ].
The result of the positive predicate postfix is that sub-bag
of B whose elements are those for which the evaluation of p and
q gives at least one common result.
Note that this implements an exists semantics, as
B [ p = p ] is reducible to [t ∈ B | t p ≠ ∅]. Only
those tuples of B will be part of the result tuple bag
if there exists at least one result when p is applied to
that tuple.
By introducing negation in predicate postfixes, we
can also implement forall semantics. Given a tuple
bag B and two path expressions p and q, we define
the negative predicate postfix as
B [ p != q ] = [t ∈ B | t p ∩ t q = ∅]    (14)
If p and q are identical, then we can abbreviate
[ p != p ] with [ ! p ]. In this case the result tuple
bag becomes [t ∈ B | t p = ∅].
A particular tuple will only be part of the result
tuple bag if p applied to it will not render a single
value, i.e. all evaluations will return no result.
Implicit in the formalism are the logic conjunction
and disjunction of predicate postfixes. Obviously, a
logical and is provided by concatenating two predicate
postfixes ([ .. ] · [ .. ]) as the result of the first
postfix will be further tested for the second predicate.
The logical or between predicate postfixes is implicitly
given by alternating them ([ .. ] ∥ [ .. ]).
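The semantics of these postfixes can be mimicked with a handful of combinators over bags (here Python lists); this sketch ignores tuples and projection and is only meant to make concatenation, alternation and the predicate postfixes tangible:

def nav_out(r):
    # Role-out postfix: replace every assertion in the bag by the players of role r.
    return lambda bag: [p for a in bag for p in role_out(a, r)]

def nav_in(r, m):
    # Role-in postfix: replace every identifier by the assertions of m where it plays r.
    return lambda bag: [a for p in bag for a in role_in(p, r, m)]

def ident(i):
    # Identifier postfix: every element evaluates to the constant i (eq. 8).
    return lambda bag: [i for _ in bag]

def concat(p, q):
    # Concatenation p . q (eq. 9).
    return lambda bag: q(p(bag))

def alt(p, q):
    # Alternation (eq. 10): union of the two result bags.
    return lambda bag: p(bag) + q(bag)

def pred(p, q):
    # Positive predicate [ p = q ] (eq. 13): keep u if p and q share a result on u.
    return lambda bag: [u for u in bag if set(p([u])) & set(q([u]))]

def npred(p, q):
    # Negative predicate [ p != q ] (eq. 14): keep u if p and q share no result on u.
    return lambda bag: [u for u in bag if not (set(p([u])) & set(q([u])))]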
5.3 Evaluation Example
Let us assume that we are looking for the status of
the servers in clusterA:
[ →class = isPartOf ] →instance [ →whole = clusterA ] →part ←object ⟨ →object, →status ⟩
The first predicate selects out all those assertions in
the map which have a class role where one of the
players happens to be isPartOf. If we are then looking
at these assertions and the player(s) of the role
instance
, then we have effectively selected the assertions
of type isPartOf from the map.
We consider each of these assertions (in our case these
are a02, a11 and a21) and filter out those of them
which have a whole role where one player is clusterA.
When we continue with a02 and a11, and then follow
the part, this leads to a bag containing only the
names macy and lacy.
In the next step we investigate where these names
are players of the role object, so we find a bag with
assertions a04 and a12. Here our path splits into two
components: the first one navigates to the name of
that object, the other to its status. The result is
then [ ⟨macy, "up"⟩, ⟨lacy, "down"⟩ ].
In the second example we look at all clusters which are down, i.e. where all machines in that cluster are down. As result we get [ ⟨clusterB⟩ ]:
[ →class = cluster ] →instance [ ←whole →part ←object →status != "up" ]
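With the cluster map m and the combinators above, both evaluations can be replayed; the first query builds the final pairs with a plain comprehension instead of the tuple postfix, and its type filter is dropped because in this small map the untyped role-in already yields only the isPartOf assertions:

# Status of the servers in clusterA, as (name, status) pairs:
in_cluster_a = role_in("clusterA", "whole", m)                       # {a02, a11}
members = {p for a in in_cluster_a for p in role_out(a, "part")}     # {"macy", "lacy"}
print({(p, s) for p in members
              for a in role_in(p, "object", m)
              for s in role_out(a, "status")})                       # {("macy", "up"), ("lacy", "down")}

# Clusters in which no machine is "up":
down = concat(pred(nav_out("class"), ident("cluster")),
       concat(nav_out("instance"),
       npred(concat(nav_in("whole", m),
             concat(nav_out("part"),
             concat(nav_in("object", m), nav_out("status")))),
             ident("up"))))
print(down(list(m)))                                                 # ["clusterB"]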
Querying, Filtering and Constraining of Maps
Maps and path expressions, as presented here, can
serve as a basis for more high-level concepts, as they
are needed for ontology and knowledge engineering
(Fensel, Hendler, Lieberman & Wahlster 2003). The
use of path expressions to extract information out of
maps leads to the following observations:
Obviously, P_M is a (primitive) language to query topic maps. Note, though, that P_M lacks all facilities to newly create content, such as XML or TM content as described in (Garshol & Barta 2003). A more industrial topic map query language (TMQL) will have to offer content generation language constructs. While it will also provide more concise syntax due to high-level concepts, P_M can (and probably will) act as a semantic foundation.
More formally, we can identify a subset of P_M, the filters F_M, which contains all those queries which return maps:
F_M = {q ∈ P_M | ∀m ∈ M, m q = [ ⟨a_1⟩, ..., ⟨a_n⟩ | a_i ∈ m ]}    (15)
Clearly, the filtered maps are always submaps of the queried map: m f ⊆ m, for f ∈ F_M.
Interestingly, P_M can also be regarded as a primitive constraint language: only when the application of a path expression c to a map m renders any result does the map conform to the expectations we have set up in c.
If, for instance, we had set up a query which asks
for all weapons of mass destruction in our running
example, then the result would have been the empty
bag. Only if the query follows the structure and the
vocabulary of the map will there be a non-empty
result. Equivalently, this is also true the other way
round.
Consequently, we can define a satisfaction relation |= ⊆ P_M × M between a path expression c and a map m, such that
c |= m  ⟺  m c ≠ ∅    (16)
Based on this, logical connectives between constraints
can be defined.
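In the illustrative encoding this satisfaction test is simply a non-emptiness check on the evaluation result; for example, the cluster map satisfies the constraint "some machine is down":

down_machines = {p for a in m if "down" in role_out(a, "status")
                   for p in role_out(a, "object")
                   if is_a(m, p, "machine")}
# c |= m  iff the result bag is non-empty:
assert down_machines            # {"lacy", "stacy"} here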
Future Work
While we concentrate in this work on formalizing the
structure of topic maps (at least our understanding
thereof) and of an expression language to extract information
from them, we have not yet studied any
properties of P_M. Specifically, we are interested in how
path expressions relate to formulas in description logics
(Baader, Calvanese, McGuinness, Nardi & Patel-Schneider
2003, Description Logics Home Page n.d.),
especially in the light that both can be used to model
an ontology. A related question is how a path language
can be used to express identity (apart from the
explicit identity given by the topic's identifier).
Finally, in a larger picture, we are interested
in connecting maps, constraints, queries and even
maybe updates for topic maps in an algebra. When
connecting maps, merging as defined by the XTM
standard is an issue.
References
Baader, F., Calvanese, D., McGuinness, D., Nardi, D. & Patel-Schneider, P., eds (2003), The Description Logic Handbook.
URL: http://books.cambridge.org/0521781760.htm
Bogachev, D. (n.d.), `TMAssert'.
URL: http://homepage.mac.com/dmitryv/TopicMaps/TMRM/TMAssert.pdf
Description Logics Home Page (n.d.).
URL: http://dl.kr.org/
Fensel, D., Hendler, J. A., Lieberman, H. & Wahlster, W., eds (2003), Spinning the Semantic Web, The MIT Press.
URL: http://mitpress.mit.edu/catalog/item/default.asp?tid=9182
Garshol, L. M. (2004-07-22), `A proposed foundational model for Topic Maps'.
URL: http://www.jtc1sc34.org/repository/0529.htm
Garshol, L. M. & Barta, R. (2003), `JTC1/SC34: TMQL requirements'.
URL: http://www.isotopicmaps.org/tmql/tmqlreqs.html
Kipp, N. A. (2003), `A mathematical formalism for the Topic Maps reference model'.
URL: http://www.isotopicmaps.org/tmrm/0441.htm
Newcomb, S. R., Hunting, S., Algermissen, J. & Durusau, P. (2003), `ISO/IEC JTC1/SC34, Topic Maps - reference model, editor's draft, revision 3.10'.
URL: http://www.isotopicmaps.org/tmrm/
O. Lassila and K. Swick (1993), Resource Description Framework (RDF) model and syntax specification, Technical report, W3C, Camo AS.
URL: http://www.w3.org/TR/1999/REC-rdf-syntax-19990222.html
Pepper, S. (1999), `Navigating haystacks, discovering needles', Markup Languages: Theory and Practice, Vol. 1 No. 4.
Pepper, S. (2000), `The TAO of Topic Maps'.
URL: http://www.gca.org/papers/xmleurope2000/papers/s11-01.html
Sowa, J. (2000), Knowledge Representation: Logical, Philosophical and Computational Foundations, Brooks-Cole, Pacific Grove.
42 | Semantic Web;Topic Maps;Knowledge Engineering |